【Posted】: 2019-05-30 23:06:46
【Problem description】:
I'm fairly new to Tensorflow/Keras and am trying to build an LSTM model. My code runs successfully, but the results it produces are not meaningful. So, as a test, I had my LSTM network learn one of the features I'm feeding in. I know LSTMs and relu model non-linear relationships, but I would still expect the output to at least somewhat resemble the input feature I'm trying to learn, and it doesn't at all.
I'm using a modified version of what I learned at https://keras.io/getting-started/sequential-model-guide/ :
import tensorflow as tf
from sklearn.model_selection import train_test_split

feature_set = features.iloc[:-3, :].transpose()             # 23 features
target_set = features.iloc[-4:, :].transpose().iloc[:, 0]   # picking the 23rd feature

X_train, X_test, y_train, y_test = train_test_split(
    feature_set, target_set, test_size=0.2, shuffle=False, random_state=42)

rnn_units = 256
batch_size = 1
features_dim = 23
output = 1

def build_model(rnn_units):
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(rnn_units,
                              batch_input_shape=[batch_size, None, features_dim],
                              activation='relu'),
        tf.keras.layers.Dropout(0.1),
        tf.keras.layers.CuDNNLSTM(rnn_units,
                                  return_sequences=True,
                                  stateful=True),
        tf.keras.layers.Dropout(0.1),
        tf.keras.layers.CuDNNLSTM(rnn_units,
                                  return_sequences=True,
                                  stateful=True),
        tf.keras.layers.Dense(output)
    ])
    return model

model = build_model(rnn_units=rnn_units)
model.compile(optimizer=tf.train.AdamOptimizer(),
              loss=tf.keras.losses.mean_squared_error,
              metrics=['mse', 'mae', 'mape', 'cosine'])

# truncate so the sample count is divisible by batch_size, then
# reshape to the (samples, timesteps, features) layout Keras expects
reshape_train = int(X_train.values.shape[0] / batch_size)
reshape_test = int(X_test.values.shape[0] / batch_size)

history = model.fit(
    X_train.values[:reshape_train * batch_size].reshape(reshape_train * batch_size, -1, features_dim),
    y_train.values[:reshape_train * batch_size].reshape(reshape_train * batch_size, -1, output),
    epochs=EPOCHS,
    batch_size=batch_size,
    validation_data=(
        X_test.values[:reshape_test * batch_size].reshape(reshape_test * batch_size, 1, features_dim),
        y_test.values[:reshape_test * batch_size].reshape(reshape_test * batch_size, 1, output)),
    callbacks=[checkpoint_callback, tensorboard])
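To double-check what the truncate-and-reshape step above actually does, here is a minimal sketch with synthetic data (the array sizes are invented for illustration and are not my real data):

```python
import numpy as np

batch_size = 1
features_dim = 23

# synthetic stand-in for X_train.values: 100 rows x 23 features
X = np.random.rand(100, features_dim)

# same truncation/reshape as in my fit() call
reshape_train = int(X.shape[0] / batch_size)
X3d = X[:reshape_train * batch_size].reshape(reshape_train * batch_size, -1, features_dim)

print(X3d.shape)  # (100, 1, 23): each sample becomes a sequence of length 1
```

So with batch_size=1, every row ends up as its own length-1 sequence.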
As you can see, I'm feeding in a feature set of 23 values and trying to learn the 23rd feature. I use 256 units in each layer, with a Dense layer at the start and end, and in between two LSTM layers, each followed by a Dropout layer.
I use mean squared error as the loss, since this should be a regression on time-series data.
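For reference, MSE on values in this range can be sanity-checked by hand; a tiny sketch (the numbers here are invented, not my actual predictions):

```python
import numpy as np

y_true = np.array([0.10, -0.36, 0.20])
y_pred = np.array([0.12, 0.01, 0.18])

# mean squared error: average of squared residuals
mse = np.mean((y_true - y_pred) ** 2)
print(round(mse, 4))  # about 0.0459
```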
For example, here is one of my training runs:
Epoch 5/5
10329/10329 [==============================] - 93s 9ms/sample - loss: 0.0182 - mean_squared_error: 0.0182 - mean_absolute_error: 0.0424 - mean_absolute_percentage_error: 94.4916 - cosine_proximity: -0.9032 - val_loss: 0.0193 - val_mean_squared_error: 0.0193 - val_mean_absolute_error: 0.0438 - val_mean_absolute_percentage_error: 58.2152 - val_cosine_proximity: -0.9443
When I run
result = model.predict(feature_set.values.reshape(-1, 1, features_dim))
feature_set.transpose().append(
    pd.DataFrame(result.reshape(-1), columns=['Prediction 5min'])
      .set_index(features.columns)
      .transpose()).transpose()
I get, for example,
2019-03-04 01:00:00 82.0105414589 0.0704929618 -0.1165011768 -0.3369084807 -1.8137642288 -0.2780955060 -4.3090711538 6.2721520391 9.5553857757 -1.2900340169 ... -29.8867675862 1.9178869544 -1.4765772054 1.0000000000 0.0000000000 0.0000000000 0.0000000000 0.0080950060 -0.3594492457 0.0056902645
The last two values should be equal, but they are
-0.3594492457 0.0056902645
Any idea what I'm doing wrong in my model? Can an LSTM even learn this kind of relationship?
Thanks!
【Discussion】:
Tags: tensorflow machine-learning keras lstm