[Posted on]: 2021-10-15 10:20:43
[Problem description]:
- Given
print(rnn_forecast.shape) > (3172, 64, 1), why do the predictions come back with a second dimension of 64? Is it because we ask for sequence output by specifying return_sequences=True on the 2nd LSTM layer?
- In
final_result=rnn_forecast[split_time-window_size:-1, -1, 0], why do we index the second dimension with -1 to get the prediction chart we want?
The entire code is in Google Drive.
# Define the training data set generator
def windowed_dataset(series, window_size, batch_size, shuffle_buffer_size):
    series = tf.expand_dims(series, axis=-1)
    wd = tf.data.Dataset.from_tensor_slices(series)
    wd = wd.window(window_size + 1, shift=1, drop_remainder=True)
    wd = wd.flat_map(lambda w: w.batch(window_size + 1))
    wd = wd.shuffle(shuffle_buffer_size)
    wd = wd.map(lambda w: (w[:-1], w[1:]))
    return wd.batch(batch_size).prefetch(1)
window_size=64
batch_size=256
shuffle_buffer_size = 1000
print(train_series.shape)  # shape of the raw series, before windowing (a Dataset has no .shape)
train_series = windowed_dataset(train_series, window_size, batch_size, shuffle_buffer_size)
print(train_series)
> (3000,)
> <PrefetchDataset shapes: ((None, None, 1), (None, None, 1)), types: (tf.float64, tf.float64)>
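As a sanity check on the pipeline above, here is a minimal sketch (using a tiny made-up series of 6 values, not the real training data) of what windowed_dataset builds: each window of length window_size+1 is split into inputs w[:-1] and targets w[1:], so the target series is simply the input shifted one step ahead.

```python
import numpy as np
import tensorflow as tf

# Toy series of 6 points, expanded to (6, 1) as in windowed_dataset
series = tf.expand_dims(np.arange(6, dtype="float32"), axis=-1)

wd = tf.data.Dataset.from_tensor_slices(series)
wd = wd.window(4, shift=1, drop_remainder=True)   # window_size=3, plus 1 for the target shift
wd = wd.flat_map(lambda w: w.batch(4))            # each window becomes a (4, 1) tensor
wd = wd.map(lambda w: (w[:-1], w[1:]))            # inputs vs. one-step-ahead targets

for x, y in wd.take(1):
    print(x.numpy().ravel(), y.numpy().ravel())   # [0. 1. 2.] [1. 2. 3.]
```

(Shuffling and batching are omitted here so the first window is deterministic.)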
# Create the model and train it with train_series
model=tf.keras.models.Sequential()
model.add(tf.keras.layers.Conv1D(filters=64, kernel_size=4, strides=1, padding="causal",activation="relu", input_shape=[None, 1]))
model.add(tf.keras.layers.LSTM(32, return_sequences=True))
model.add(tf.keras.layers.LSTM(32, return_sequences=True))
model.add(tf.keras.layers.Dense(16, activation='relu'))
model.add(tf.keras.layers.Dense(8, activation='relu'))
model.add(tf.keras.layers.Dense(1))
model.add(tf.keras.layers.Lambda(lambda x : x*400))
optimizer=tf.keras.optimizers.SGD(learning_rate=1e-5, momentum=0.9)
model.compile(loss=tf.keras.losses.Huber(), optimizer=optimizer, metrics=['mae'])
history=model.fit(train_series, epochs=100)
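To see why the forecast later comes back with a time dimension of 64, here is a minimal sketch (toy layer sizes, not the model above): with return_sequences=True the LSTM keeps the time axis, so the Dense head is applied at every timestep and the model produces one prediction per step of the window.

```python
import numpy as np
import tensorflow as tf

# Toy sequence-to-sequence model: return_sequences=True preserves the
# time dimension, so Dense(1) yields a prediction at every timestep.
toy = tf.keras.models.Sequential([
    tf.keras.layers.LSTM(8, return_sequences=True, input_shape=[None, 1]),
    tf.keras.layers.Dense(1),
])

x = np.zeros((5, 64, 1), dtype="float32")  # 5 windows of 64 steps each
y = toy.predict(x, verbose=0)
print(y.shape)  # (5, 64, 1): one prediction per timestep, not (5, 1)
```

Dropping return_sequences=True on the final LSTM would instead collapse the output to one vector per window.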
# Define prediction function
def model_forecast(model, series, window_size):
    # Expand once: (N,) -> (N, 1), so each window batches to (window_size, 1)
    series = tf.expand_dims(series, axis=-1)
    wd = tf.data.Dataset.from_tensor_slices(series)
    wd = wd.window(window_size, shift=1, drop_remainder=True)
    wd = wd.flat_map(lambda w: w.batch(window_size))
    wd = wd.batch(32).prefetch(1)
    forecast = model.predict(wd)
    return forecast
# Prediction with series
rnn_forecast = model_forecast(model, series, window_size)
print(rnn_forecast.shape)
print(rnn_forecast)
> (3172, 64, 1)
> [[[ 95.66096 ]
[112.35001 ]
...
[ 19.893387 ]
[ 21.324263 ]]
...
[[101.16265 ]
[124.68408 ]
...
[ 11.329678 ]
[ 7.8993587 ]]]
final_result=rnn_forecast[split_time-window_size:-1, -1, 0]
print(final_result)
> [135.31732 118.21495 ... 9.162828 11.344096]
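A minimal sketch (with a small made-up array, not the real forecast) of what the [-1, 0] part of that indexing does: for a forecast of shape (n_windows, window_size, 1), index -1 on axis 1 keeps only the prediction for the last timestep of each window, and index 0 on axis 2 drops the trailing channel dimension, leaving a 1-D series with one value per window.

```python
import numpy as np

# Fake forecast: 3 windows of 4 timesteps, 1 channel
fake_forecast = np.arange(3 * 4 * 1).reshape(3, 4, 1)

# Keep only the last timestep of each window, drop the channel axis
last_steps = fake_forecast[:, -1, 0]
print(last_steps)  # [ 3  7 11] -> one value per window
```

That is why the plotted final_result is 1-D and lines up with the end of each input window.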
plt.figure(figsize=(10, 6))
plot_series(time_val, x_val)
plot_series(time_val, final_result)
[Prediction plot]
[Comments]:
- As I currently understand it: 1. The predictions come back in 3 dimensions because we specified "return_sequences=True" on the 2nd LSTM layer. Also, when I check model.summary(), the output shape is (None, None, 64), meaning the model is expected to output sequences, so this is sequence-to-sequence behaviour. 2. As mentioned, since we want sequence output, we have to select the last element to plot the predictions over the same period as time_val.
- Maybe this can help you!
- Thanks for sharing. That is what I assumed.
Tags: python tensorflow machine-learning lstm recurrent-neural-network