【Question Title】: indices = 2 is not in [0, 1)
【Posted】: 2019-01-11 05:58:41
【Question】:

I am working on a seq2sql project and I built the model successfully, but I get an error during training. I am not using any Keras Embedding layer.

import tensorflow as tf
from keras.layers import Input, Bidirectional, LSTM, Dense, Lambda, Dropout, Add, Flatten
from keras.models import Model
from keras.callbacks import EarlyStopping

M=13 # question length
d=40 # dimension of the LSTM
C=12 # number of table columns

batch_size=9
inputs1=Input(shape=(M,100),name='question_token')
Hq=Bidirectional(LSTM(d,return_sequences=True),name='QuestionENC')(inputs1) # this is Hq; shape is (num_samples, 13, 80)

inputs2=Input(shape=(C,3,100),name='col_token')
col_lstm_layer=Bidirectional(LSTM(d,return_sequences=False),name='ColENC')

def hidd(te):
    t=tf.Variable(initial_value=1,dtype=tf.int32)

    for i in range(batch_size):  
        t=tf.assign(t,i)
        Z = tf.nn.embedding_lookup(te, t)
        print(col_lstm_layer(Z))
        h=tf.reshape(col_lstm_layer(Z),[1,C,d*2])
        if i==0:
#             cols_last_hidden=tf.Variable(initial_value=h)
            cols_last_hidden=tf.stack(h)#this is because it gives an error if we use tf.Variable here
        else:
            cols_last_hidden=tf.concat([cols_last_hidden,h],0) # shape is (num_samples, num_col, 80); 80 is the final encoding of each column
    return cols_last_hidden

cols_last_hidden=Lambda(hidd)(inputs2)

Hq=Dense(d*2,name='QuestionLastEncode')(Hq)

I=tf.Variable(initial_value=1,dtype=tf.int32)
J=tf.Variable(initial_value=1,dtype=tf.int32)

K=1

def get_col_att(tensors):
    global K,all_col_attention
    if K:
        t=tf.Variable(initial_value=1,dtype=tf.int32)

        for i in range(batch_size):
            t=tf.assign(t,i)
            x = tf.nn.embedding_lookup(tensors[0], t)
    #         print("tensors[1]:",tensors[1])
            y = tf.nn.embedding_lookup(tensors[1], t)
    #         print("x shape",x.shape,"y shape",y.shape)
            y=tf.transpose(y)
#             print("x shape",x.shape,"y",y.shape)
            Ecol=tf.reshape(tf.transpose(tf.tensordot(x,y,axes=1)),[1,C,M])

            if i==0: 
#                 all_col_attention=tf.Variable(initial_value=Ecol,name=""+i)
                all_col_attention=tf.stack(Ecol)
            else:
                all_col_attention=tf.concat([all_col_attention,Ecol],0)

    K=0
    print("all_col_attention",all_col_attention)
    return all_col_attention

total_alpha_sel_lambda=Lambda(get_col_att,name="Alpha")([Hq,cols_last_hidden])   
total_alpha_sel=Dense(13,activation="softmax")(total_alpha_sel_lambda)
# print("Hq",Hq," total_alpha_sel_lambda shape",total_alpha_sel_lambda," total_alpha_sel shape",total_alpha_sel.shape)
def get_EQcol(tensors): 
    global K
    if K:
        t=tf.Variable(initial_value=1,dtype=tf.int32)
        global all_Eqcol

        for i in range(batch_size):
            t=tf.assign(t,i)
            x = tf.nn.embedding_lookup(tensors[0], t)
            y = tf.nn.embedding_lookup(tensors[1], t)
            Eqcol=tf.reshape(tf.tensordot(x,y,axes=1),[1,C,d*2])

            if i==0:
#                 all_Eqcol=tf.Variable(initial_value=Eqcol,name=""+i)
                all_Eqcol=tf.stack(Eqcol)
            else:
                all_Eqcol=tf.concat([all_Eqcol,Eqcol],0)

    K=0
    print("all_Eqcol",all_Eqcol)
    return all_Eqcol
K=1
EQcol=Lambda(get_EQcol,name='EQcol')([total_alpha_sel,Hq])#total_alpha_sel(12x13) Hq(13xd*2)
EQcol=Dropout(.2)(EQcol)

L1=Dense(d*2,name='L1')(cols_last_hidden)
L2=Dense(d*2,name='L2')(EQcol)
L1_plus_L2=Add()([L1,L2])
pre=Flatten()(L1_plus_L2)
Psel=Dense(12,activation="softmax")(pre)

model=Model(inputs=[inputs1,inputs2],outputs=Psel)
model.compile(loss='categorical_crossentropy', optimizer='adam',metrics=['accuracy'])
model.summary()

earlyStopping=EarlyStopping(monitor='val_loss', patience=7, verbose=0, mode='auto')

history=model.fit([Equestion,Col_Embeddings],y_train,epochs=50,validation_split=.1,shuffle=False,callbacks=[earlyStopping],batch_size=batch_size)

The shapes of Equestion, Col_Embeddings and y_train are (10, 13, 100), (10, 12, 3, 100) and (10, 12) respectively.

I searched for this error, but in every case I found, people were misusing an Embedding layer. I get this error here even though I am not using one.
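One reason the error can appear without an Embedding layer: `tf.nn.embedding_lookup` is just a gather along the first axis, and the code above uses it to index individual samples out of a batch tensor. A minimal sketch (values are illustrative):

```python
import tensorflow as tf

# tf.nn.embedding_lookup(params, i) gathers row i of params -- it is
# plain indexing, not a Keras Embedding layer. An out-of-range index
# produces the same "indices = i is not in [0, n)" error.
params = tf.constant([[1.0, 2.0], [3.0, 4.0]])
row = tf.nn.embedding_lookup(params, 1)  # same result as params[1]
```

So the `GatherV2` node in the traceback comes from these lookups inside the `Lambda` layers, not from any embedding matrix.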

indices = 2 is not in [0, 1)
[[{{node lambda_3/embedding_lookup_2}} = GatherV2[Taxis=DT_INT32, Tindices=DT_INT32, Tparams=DT_FLOAT, _class=["loc:@col_token_2"], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_col_token_2_0_1, lambda_3/Assign_2, lambda_3/embedding_lookup_2/axis)]]

【Comments】:

    Tags: tensorflow machine-learning keras lstm


    【Solution 1】:

    The problem here is that the batch size is baked into the graph. I used batch_size = 9 to build the graph; with a dataset of 10 samples and validation_split = .1, training does get batches of size 9, but only a single sample is left for validation, since 10 * .1 is one.

    So a batch of size 1 cannot be fed into the graph, which expects a batch of size 9. That is why this error occurs.
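    The arithmetic behind the failure can be illustrated without TensorFlow (this is a plain-numpy stand-in for the graph, not the original code):

```python
import numpy as np

# With 10 samples and validation_split=0.1, Keras keeps 9 samples for
# training and holds out 1 for validation.
n_samples = 10
n_val = int(n_samples * 0.1)   # 1 validation sample
n_train = n_samples - n_val    # 9 training samples

# The Lambda layers loop over a hard-coded batch_size=9 and index
# positions 0..8, but the validation batch holds only 1 sample, so
# index 2 is out of range -- the condition TensorFlow reports as
# "indices = 2 is not in [0, 1)".
val_batch = np.zeros((n_val, 12, 3, 100))
try:
    _ = val_batch[2]
except IndexError as e:
    print("out of range:", e)
```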

    As for the solution, I set batch_size = 1 and then it worked fine; I also got good accuracy with batch_size = 1.

    Hope this helps someone.

    Cheers.
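    A more general fix is to avoid baking the batch size into the graph at all: instead of a Python loop over a fixed `batch_size`, map the per-sample computation over whatever batch arrives with `tf.map_fn`. A minimal sketch (the toy `encode_sample` stands in for `col_lstm_layer`; names are illustrative, not from the original model):

```python
import tensorflow as tf

# Toy per-sample "encoder": reduces (C, 3, 100) -> (C, 100). In the
# original model this role is played by col_lstm_layer.
def encode_sample(sample):
    return tf.reduce_sum(sample, axis=1)

# tf.map_fn applies encode_sample across the leading (batch) axis, so
# the same graph handles a batch of 1 ...
out_small = tf.map_fn(encode_sample, tf.zeros((1, 12, 3, 100)))

# ... and a batch of 9 alike, with no hard-coded batch_size.
out_large = tf.map_fn(encode_sample, tf.zeros((9, 12, 3, 100)))
```

    With this pattern the validation batch of one sample no longer triggers the out-of-range gather.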

    【Discussion】:
