【Question Title】: Getting NaN Error While Using LSTM Autoencoder
【Posted】: 2019-06-21 13:35:21
【Question Description】:

I am trying to train a model in Keras, using an LSTM autoencoder to reconstruct the input I feed it, and I am getting NaN values in the results after the decoding part. Here is my code:

    # lstm autoencoder recreate sequence
    from numpy import array
    import numpy as np
    from keras.models import Sequential
    from keras.layers import LSTM
    from keras.layers import Dense
    from keras.layers import RepeatVector
    from keras.layers import TimeDistributed
    from keras.utils import plot_model
    import pandas as pd

    df = pd.read_csv('flight_data.csv',sep=',',header=None)
    data = df.to_numpy()
    print(data.shape)


    # example sequences left over from the tutorial (unused below)
    sequence1 = array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])
    sequence2 = array([0.2, 0.4, 0.6, 0.4, 1.0, 1.2, 1.4, 1.6, 1.8])
    # reshape input into [samples, timesteps, features]
    n_in = 100
    data = data[73666:,:]
    sequence = data.reshape((1,100,24))
    print(sequence)
    # define model
    model = Sequential()
    model.add(LSTM(100, activation='relu', input_shape=(n_in,24)))
    model.add(RepeatVector(n_in))
    model.add(LSTM(100, activation='relu', return_sequences=True))
    model.add(TimeDistributed(Dense(24)))
    model.compile(optimizer='adam', loss='mse')
    # fit model
    model.fit(sequence, sequence, epochs=300, verbose=0)
    plot_model(model, show_shapes=True, to_file='reconstruct_lstm_autoencoder.png')
    # demonstrate recreation
    yhat = model.predict(sequence, verbose=0)

    print(yhat)

The output I get is:

[[[9.46687355e+14 1.00000000e+01 4.42748822e+08 ... 0.00000000e+00
   0.00000000e+00 0.00000000e+00]
  [9.46687355e+14 1.00000000e+01 4.42748822e+08 ... 0.00000000e+00
   0.00000000e+00 0.00000000e+00]
  [9.46687355e+14 1.00000000e+01 4.42748823e+08 ... 0.00000000e+00
   0.00000000e+00 0.00000000e+00]
  ...
  [9.46687359e+14 1.00000000e+01 4.42748824e+08 ... 0.00000000e+00
   0.00000000e+00 0.00000000e+00]
  [9.46687359e+14 1.00000000e+01 4.42748824e+08 ... 0.00000000e+00
   0.00000000e+00 0.00000000e+00]
  [9.46687359e+14 1.00000000e+01 4.42748825e+08 ... 0.00000000e+00
   0.00000000e+00 0.00000000e+00]]]

[[[nan nan nan ... nan nan nan]
  [nan nan nan ... nan nan nan]
  [nan nan nan ... nan nan nan]
  ...
  [nan nan nan ... nan nan nan]
  [nan nan nan ... nan nan nan]
  [nan nan nan ... nan nan nan]]]

Which part might be causing the problem, and what should I do?
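Worth noting: the printed input above already shows features on wildly different scales (values around 9.46e+14 next to exact zeros), which by itself can push an unscaled network to NaN. Scaling each feature before training is a common fix; a minimal numpy-only sketch of per-feature min-max scaling (not part of the original script) would be:

```python
import numpy as np

def minmax_scale(data):
    """Scale each feature (column) into [0, 1] independently.

    Constant columns are mapped to 0 to avoid division by zero.
    """
    lo = data.min(axis=0)
    hi = data.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)
    return (data - lo) / span

# Columns with hugely different magnitudes, like the printed input above.
raw = np.array([[9.46687355e14, 10.0, 0.0],
                [9.46687359e14, 10.0, 0.0],
                [9.46687357e14, 10.0, 0.0]])
scaled = minmax_scale(raw)
print(scaled.min(), scaled.max())  # everything now lies in [0, 1]
```

The scaled array could then be reshaped into `(samples, timesteps, features)` exactly as the raw data is in the question.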

【Question Discussion】:

    Tags: python keras lstm nan autoencoder


    【Solution 1】:

    This looks like exploding gradients, which LSTMs tend to produce. Clipping the gradients can solve this; try setting clipnorm to 1.

    import keras

    ADAM = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False, clipnorm=1.)
    model.compile(optimizer=ADAM, loss='mse')
    

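For intuition about what clipnorm does: the gradient is rescaled so its L2 norm never exceeds the threshold, keeping the update direction but capping its size. A small numpy sketch of that operation (an illustration of the idea only, not Keras' exact internals, which apply the clip per weight tensor):

```python
import numpy as np

def clip_by_norm(grad, clipnorm=1.0):
    """Rescale grad so its L2 norm does not exceed clipnorm.

    Gradients already within the threshold pass through unchanged.
    """
    norm = np.linalg.norm(grad)
    if norm > clipnorm:
        return grad * (clipnorm / norm)
    return grad

g = np.array([3.0, 4.0])                    # norm 5: rescaled to norm 1
print(clip_by_norm(g))                      # direction kept, magnitude capped
print(clip_by_norm(np.array([0.1, 0.1])))   # small gradient is untouched
```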
    【Discussion】:

    • Thanks. After that, I got: [[[ 7.80484634e+12 9.02850780e+13 -6.35291671e+12 ... 3.75248482e+13 9.01600772e+12 -2.42150550e+13] ... [-3.17999043e+14 1.97286219e+1 14 ... 3.82576896e+14 1.33085736e+14 -4.91843985e+14]]] The values seem to be meaningless. Why do you think that happens? Could it be because of the parameters?
    • It removed the NaN? Hmm, I think this has to do with your actual data. How much data do you have, and what does it look like?
    • Actually, I have data with over 70,000 rows and 24 features in the csv file, but I only used 100 of them. However, when I decided to use 10,000, the output became NaN again.
    • Hmm, maybe try setting clipvalue=0.5 in the optimizer. Your 24 features, what do they look like? Do their ranges vary a lot?
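A quick way to answer that last question is to print each column's range before training. A sketch (shown with synthetic stand-in columns, since flight_data.csv is not available here):

```python
import pandas as pd
import numpy as np

# Stand-in for pd.read_csv('flight_data.csv', sep=',', header=None):
# three columns with very different magnitudes.
df = pd.DataFrame({
    0: np.full(100, 9.46687355e14),   # huge, nearly constant column
    1: np.linspace(0.0, 1.0, 100),    # already well scaled
    2: np.zeros(100),                 # constant column
})

# Per-column min, max, and span in one small summary table.
ranges = pd.DataFrame({'min': df.min(),
                       'max': df.max(),
                       'span': df.max() - df.min()})
print(ranges)
```

Spans that differ by many orders of magnitude are a strong hint that the data needs scaling before it is fed to the LSTM.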