【Question Title】: InvalidArgumentError training multivariate LSTM autoencoder
【Posted】: 2021-09-02 23:06:17
【Question】:

I have been experimenting with this model on different datasets, and it works for univariate time series. However, I run into a problem when trying it on multivariate time series. I believe it is caused by the TimeDistributed layer, but I am not sure. I tried reading different posts about the same question, but with no luck.

trainx shape: (38100, 100, 4) | trainy shape: (38100, 4)

testx shape: (12230, 100, 4) | testy shape: (12230, 4)

(samples, timesteps, features)
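
For anyone trying to reproduce this, random stand-in arrays with the same shapes can be built as follows (the values are purely illustrative; the actual datasets are not shown here):

import numpy as np

# Hypothetical random stand-ins matching the shapes above
trainx = np.random.rand(38100, 100, 4).astype('float32')
trainy = np.random.rand(38100, 4).astype('float32')
testx = np.random.rand(12230, 100, 4).astype('float32')
testy = np.random.rand(12230, 4).astype('float32')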

The model is as follows:

from tensorflow import keras  # import assumed; not shown in the original post

def build_model(X):
    '''
    Builds an autoencoder model.
    @params: X input array
    @return: autoencoder full model, encoder model part
    '''
    encoder_inputs = keras.layers.Input(shape=(X.shape[1], X.shape[2]), name='Input_Layer')
    L1 = keras.layers.LSTM(64, return_sequences=True, name='Encoder_1')(encoder_inputs)
    L2 = keras.layers.LSTM(32, return_sequences=True, name='Encoder_2')(L1)
    code = keras.layers.LSTM(2, return_sequences=False, name='code_vector')(L2)
    L3 = keras.layers.RepeatVector(X.shape[1], name='Repeat_Vector')(code)
    L4 = keras.layers.LSTM(32, return_sequences=True, name='Decoder_1')(L3)
    L5 = keras.layers.LSTM(64, return_sequences=True, name='Decoder_2')(L4)
    # TimeDistributed applies the Dense to every timestep -> 3D output (None, 100, 4)
    decoder_outputs = keras.layers.TimeDistributed(keras.layers.Dense(X.shape[2]), name='Time_Distrubted')(L5)

    encoder = keras.Model(inputs=encoder_inputs, outputs=code, name='Encoder')
    autoencoder = keras.Model(inputs=encoder_inputs, outputs=decoder_outputs, name='Autoencoder')

    return autoencoder, code

Then I build, compile, and fit the model as follows:

model, code = build_model(trainx)
model.compile('adam', loss='mae')

# `callbacks` is defined elsewhere in the notebook (not shown in the question)
history = model.fit(x=trainx, y=trainy, epochs=100, validation_split=0.1, batch_size=32, callbacks=callbacks, shuffle=False)

I get the following error trace:

<ipython-input-246-e01fa31bc39d> in <module>
----> 1 history = model.fit(x=trainx, y=trainy, epochs=100, validation_split=0.1, batch_size=32, callbacks=callbacks, shuffle=False)

~\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
   1181                 _r=1):
   1182               callbacks.on_train_batch_begin(step)
-> 1183               tmp_logs = self.train_function(iterator)
   1184               if data_handler.should_sync:
   1185                 context.async_wait()

~\Anaconda3\lib\site-packages\tensorflow\python\eager\def_function.py in __call__(self, *args, **kwds)
    887 
    888       with OptionalXlaContext(self._jit_compile):
--> 889         result = self._call(*args, **kwds)
    890 
    891       new_tracing_count = self.experimental_get_tracing_count()

~\Anaconda3\lib\site-packages\tensorflow\python\eager\def_function.py in _call(self, *args, **kwds)
    948         # Lifting succeeded, so variables are initialized and we can run the
    949         # stateless function.
--> 950         return self._stateless_fn(*args, **kwds)
    951     else:
    952       _, _, _, filtered_flat_args = \

~\Anaconda3\lib\site-packages\tensorflow\python\eager\function.py in __call__(self, *args, **kwargs)
   3021       (graph_function,
   3022        filtered_flat_args) = self._maybe_define_function(args, kwargs)
-> 3023     return graph_function._call_flat(
   3024         filtered_flat_args, captured_inputs=graph_function.captured_inputs)  # pylint: disable=protected-access
   3025 

~\Anaconda3\lib\site-packages\tensorflow\python\eager\function.py in _call_flat(self, args, captured_inputs, cancellation_manager)
   1958         and executing_eagerly):
   1959       # No tape is watching; skip to running the function.
-> 1960       return self._build_call_outputs(self._inference_function.call(
   1961           ctx, args, cancellation_manager=cancellation_manager))
   1962     forward_backward = self._select_forward_and_backward_functions(

~\Anaconda3\lib\site-packages\tensorflow\python\eager\function.py in call(self, ctx, args, cancellation_manager)
    589       with _InterpolateFunctionError(self):
    590         if cancellation_manager is None:
--> 591           outputs = execute.execute(
    592               str(self.signature.name),
    593               num_outputs=self._num_outputs,

~\Anaconda3\lib\site-packages\tensorflow\python\eager\execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
     57   try:
     58     ctx.ensure_initialized()
---> 59     tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
     60                                         inputs, attrs, num_outputs)
     61   except core._NotOkStatusException as e:

InvalidArgumentError:  Incompatible shapes: [32,100,4] vs. [32,4]
     [[node gradient_tape/mean_absolute_error/BroadcastGradientArgs (defined at <ipython-input-246-e01fa31bc39d>:1) ]] [Op:__inference_train_function_110609]

Function call stack:
train_function

As I mentioned, I think this might be related to the TimeDistributed layer. However, in case it helps: the model does run with batch_size=1, but not with any other batch size.
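
The batch_size=1 behavior is consistent with broadcasting: a [1, 100, 4] prediction and a [1, 4] target broadcast against each other (the size-1 batch axis and the missing time axis both broadcast), so the loss computes silently, whereas [32, 100, 4] vs. [32, 4] cannot broadcast because 100 collides with 32 after right-alignment, which is exactly the incompatibility the error reports. A quick NumPy check of the same rule (illustrative only):

import numpy as np

# (1, 4) broadcasts against (1, 100, 4): right-aligned dims are
# 4 vs 4 (match), 100 vs 1 (broadcast), leading 1 vs nothing (ok)
print(np.broadcast_shapes((1, 100, 4), (1, 4)))  # (1, 100, 4); requires NumPy 1.20+

# (32, 4) vs (32, 100, 4) fails: 100 collides with 32
try:
    np.broadcast_shapes((32, 100, 4), (32, 4))
except ValueError as err:
    print(err)  # mirrors the InvalidArgumentError above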

【Comments】:

  • Set return_sequences=False in L5 and remove the TimeDistributed from decoder_outputs
  • Thanks! It is working now. Just to understand: what is the intuition behind fixing it this way, given that it worked fine for univariate time series?
  • The network output must match the shape of your target... if you have a 2D target, your network must produce 2D output, not 3D. Simply setting return_sequences=False produces a 2D output (see the sketch after this list). This holds for every problem
  • Thanks for the explanation. Really appreciate it.
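
A minimal sketch of the shape difference described above (assuming TensorFlow 2.x, where layers can be called eagerly on NumPy arrays):

from tensorflow import keras
import numpy as np

x = np.random.rand(2, 100, 4).astype('float32')  # (batch, timesteps, features)

# return_sequences=True keeps the time axis -> 3D output
seq = keras.layers.LSTM(8, return_sequences=True)(x)
# return_sequences=False returns only the last timestep -> 2D output
vec = keras.layers.LSTM(8, return_sequences=False)(x)

print(seq.shape)  # (2, 100, 8)
print(vec.shape)  # (2, 8)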

Tags: python tensorflow keras time-series autoencoder


【Solution 1】:

From the comments:

The network output must match the shape of your target. If you have a 2D target, your network must produce 2D output, not 3D. Simply setting return_sequences=False produces a 2D output.

from tensorflow import keras  # import assumed; not shown in the original post

def build_model(X):
    '''
    Builds an autoencoder model.
    @params: X input array
    @return: autoencoder full model, encoder model part
    '''
    encoder_inputs = keras.layers.Input(shape=(X.shape[1], X.shape[2]), name='Input_Layer')
    L1 = keras.layers.LSTM(64, return_sequences=True, name='Encoder_1')(encoder_inputs)
    L2 = keras.layers.LSTM(32, return_sequences=True, name='Encoder_2')(L1)
    code = keras.layers.LSTM(2, return_sequences=False, name='code_vector')(L2)
    L3 = keras.layers.RepeatVector(X.shape[1], name='Repeat_Vector')(code)
    L4 = keras.layers.LSTM(32, return_sequences=True, name='Decoder_1')(L3)
    # return_sequences defaults to False here -> 2D output from the decoder
    L5 = keras.layers.LSTM(64, name='Decoder_2')(L4)
    # plain Dense instead of TimeDistributed(Dense) -> output shape (None, features)
    decoder_outputs = keras.layers.Dense(X.shape[2], name='Time_Distrubted')(L5)

    encoder = keras.Model(inputs=encoder_inputs, outputs=code, name='Encoder')
    autoencoder = keras.Model(inputs=encoder_inputs, outputs=decoder_outputs, name='Autoencoder')

    return autoencoder, code

model, code = build_model(trainx)
model.compile('adam', loss='mae')

# `callbacks` is defined elsewhere (not shown)
history = model.fit(x=trainx, y=trainy, epochs=100, validation_split=0.1, batch_size=32, callbacks=callbacks, shuffle=False)
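
A quick sanity check before training: the model's output shape should now match the 2D target shape rather than the 3D sequence shape.

print(model.output_shape)  # (None, 4), matching trainy.shape == (38100, 4)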

(paraphrased from Marco Cerliani)

【Discussion】:
