【Question Title】: seq2seq - Inference model produces drastically different results than train model on the same validation set
【Posted】: 2020-04-17 14:12:45
【Question Description】:

I'm working on a time-series seq2seq problem. For my approach I use an LSTM seq2seq RNN with teacher forcing. As you may know, for this kind of task one first trains the model and then builds an inference model from the trained layers to actually solve the task (i.e., the layers are shared).
Here is the code where I define the shared layers:

# Define the shared layers for the train and inference models
encoder_lstm = LSTM(latent_dim, return_state=True, name='encoder_lstm')
decoder_lstm = LSTM(latent_dim, return_sequences=True, 
                    return_state=True, name='decoder_lstm')
decoder_dense = Dense(decoder_output_dim, 
                      activation='linear', name='decoder_dense')
decoder_reshape = Reshape((decoder_output_dim, ), name='decoder_reshape')
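
For completeness, these snippets assume the standard Keras imports and a few hyperparameters defined elsewhere; a minimal sketch of what they might look like (TensorFlow 2.x Keras is assumed, and all concrete values below are placeholders, not taken from the question):

import numpy as np
from tensorflow.keras.layers import Input, LSTM, Dense, Reshape
from tensorflow.keras.models import Model
from tensorflow.keras import backend as K

latent_dim = 64          # size of the LSTM state (placeholder value)
Tx, Ty = 30, 10          # encoder / decoder sequence lengths (placeholders)
encoder_input_dim = 1    # features per encoder timestep (placeholder)
decoder_input_dim = 1    # features per decoder timestep (placeholder)
decoder_output_dim = 1   # features per predicted timestep (placeholder)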

Next, I define the train model using the shared layers:

# Define an input for the encoder
encoder_inputs = Input(shape=(Tx, encoder_input_dim), name='encoder_input')

# We discard output and keep the states only.
_, h, c = encoder_lstm(encoder_inputs)

# Define an input for the decoder
decoder_inputs = Input(shape=(Ty, decoder_input_dim), name='decoder_input')

# Obtain all the outputs from the decoder (return_sequences = True)
decoder_outputs, _, _  = decoder_lstm(decoder_inputs, initial_state=[h, c])

# Apply dense layer to each output
decoder_outputs = decoder_dense(decoder_outputs)

train_model = Model(inputs=[encoder_inputs, decoder_inputs], outputs=decoder_outputs)

To be fair, I am using a custom loss function; it is basically mean squared error, but with certain entries masked out.

def masked_mse(y_true, y_pred):
    # y_true[:, :, 0] holds the target values and y_true[:, :, 1] a mask
    # (1 = ignore this entry). Average the masked squared error over the
    # batch (inner mean), then over the timesteps (outer mean).
    return K.mean(
                  K.mean(((y_true[:,:,0] - y_pred[:,:,0])**2)*(1-y_true[:,:,1]),
                         axis=0),
                  axis=0)
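
For context, the model is then compiled with this loss and trained with teacher forcing. A minimal sketch of that call, where the optimizer, batch size, and the names X_enc, X_dec, X_enc_valid and X_dec_valid (the teacher-forced decoder inputs) are assumptions not shown in the question:

train_model.compile(optimizer='adam', loss=masked_mse)
# X_dec is the teacher-forced decoder input: the targets shifted one step
# right, with -1 at the first timestep (consistent with the inference code).
train_model.fit([X_enc, X_dec], y_train,
                batch_size=64, epochs=10,
                validation_data=([X_enc_valid, X_dec_valid], y_valid))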

After a few epochs of training, the output looks like this:

Train on 67397 samples, validate on 3389 samples
Epoch 1/10
67397/67397 [==============================] - 36s 536us/sample - loss: 0.1981 - val_loss: 0.0713
Epoch 2/10
67397/67397 [==============================] - 34s 499us/sample - loss: 0.0755 - val_loss: 0.0535
Epoch 3/10
67397/67397 [==============================] - 31s 456us/sample - loss: 0.0633 - val_loss: 0.0494
Epoch 4/10
67397/67397 [==============================] - 29s 429us/sample - loss: 0.0595 - val_loss: 0.0478

We can see that the loss on the validation set settles around 0.048.
Now I create the inference model, derived from the shared layers above:

# Define an input for the encoder
encoder_inputs = Input(shape=(Tx, encoder_input_dim), name='encoder_input')

# We discard output and keep the states only.
_, h, c = encoder_lstm(encoder_inputs)

# Define an input for the decoder
decoder_input = Input(shape=(1, decoder_input_dim), name='decoder_input')
current_input = decoder_input

# Obtain the outputs for each of the Ty timesteps
decoder_outputs = []
for _ in range(Ty):
    # apply a single step of recurrence
    out, h, c = decoder_lstm(current_input, initial_state=[h, c])

    # pass the LSTM output through a dense layer
    out = decoder_dense(out)

    # The input in the next timestep (its shape is (?, 1, 1))
    current_input = out

    # reshape the decoder output as (?, 1) for convenience
    out = decoder_reshape(out)

    # append the output to the model's outputs
    decoder_outputs.append(out)

inference_model = Model(inputs=[encoder_inputs, decoder_input], outputs=decoder_outputs)

Using this inference model, I try to evaluate it on the same validation set I used during training, in order to reproduce the result above:

# The input for the first timestep in the decoder is -1,
# (consistently, the same was applied during training)
decoder_input = -1 * np.ones((len(X_valid), 1, 1))

# Obtain the predictions, the resulting shape is (Ty, ?, 1)
y_pred = np.array(inference_model.predict([X_valid, decoder_input]))

# Reshape the output in the shape (?, Ty, 1)
y_pred = np.swapaxes(y_pred, axis1=0, axis2=1)

loss = masked_mse(K.constant(y_valid), K.constant(y_pred))
K.eval(loss)
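
As a sanity check, the same masked MSE can be computed in plain NumPy, without round-tripping through Keras tensors (a sketch, assuming y_valid carries the mask in its second channel, exactly as masked_mse expects):

# NumPy equivalent of masked_mse: squared error on channel 0, zeroed
# wherever the mask channel is 1, averaged over batch and timesteps.
sq_err = (y_valid[:, :, 0] - y_pred[:, :, 0]) ** 2
loss_np = np.mean(sq_err * (1 - y_valid[:, :, 1]))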

The evaluated loss comes out to 0.1637. Training further, it never drops below 0.14.

This is strange, since I am evaluating on the very same validation set. I suspect the error has something to do with how the inference model is built, but I am not sure.
Do you have any ideas?

【Question Discussion】:

    Tags: machine-learning keras deep-learning neural-network lstm


    【Solution 1】:

    If your inference model does not differ in any way from the trained model, there is no need to copy anything. You can simply call train_model.predict(...) on the existing model.

    Copying layers matters when you run several training phases (e.g., in transfer learning), but it is not required just to run inference with a trained model.
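
    In other words, the teacher-forced validation loss from training can be reproduced directly on the existing model. A minimal sketch, assuming X_dec_valid is the same teacher-forced decoder input used during validation (a name not shown in the question):

    # Evaluate the compiled train model on the validation data directly;
    # this should reproduce the ~0.048 val_loss reported during training.
    val_loss = train_model.evaluate([X_valid, X_dec_valid], y_valid, verbose=0)
    print(val_loss)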


    But coming back to your custom loop: your LSTM recurrence should happen before the Dense layer is applied.

    decoder_outputs = []
    for _ in range(Ty):
        out, h, c = decoder_lstm(current_input, initial_state=[h, c])
    
        # This line moved to before the decoder_dense call.
        current_input = out
    
        out = decoder_dense(out)
        out = decoder_reshape(out)
        decoder_outputs.append(out)
    

    【Discussion】:
