【Question Title】: Finding the right input & output shape for a conv1D Keras NN
【Posted】: 2020-12-02 10:26:14
【Question】:

I built an LSTM model to analyse time series. The input matrix X has size (1750, 20, 28): 1750 sequences of length 20 with 28 features. In practice I take the raw X matrix with 28 features and build a 3D matrix using a sliding window of length 20. The y matrix has size (1750,). I used this successfully with an LSTM (input_shape = (X_train.shape[1], X_train.shape[2])).
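The sliding-window construction described above can be sketched as follows (a minimal stand-in in pure Python; `raw` is a hypothetical 2D feature matrix, not the actual data from the question):

```python
seq_length = 20      # window length, as in the question
n_features = 28      # features per time step
n_rows = 1769        # chosen so that exactly 1750 windows result

# hypothetical raw 2D matrix: n_rows time steps x n_features columns
raw = [[0.0] * n_features for _ in range(n_rows)]

# one window per starting index -> shape (1750, 20, 28)
X = [raw[i:i + seq_length] for i in range(n_rows - seq_length + 1)]
print(len(X), len(X[0]), len(X[0][0]))  # 1750 20 28
```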

It works with a single first layer model.add(layer_LSTM1) or with stacked LSTMs, but not very well (it is quite unstable if I run the same NN twice). I then tried to apply a conv1D NN to the same dataset, with the same input shape, and I get the error message shown below. Here are the model definition and the message:

# available layers
layer_drop = keras.layers.Dropout(rate = dropout)
layer_dense1 = Dense(units= layer_1, activation = 'relu')
layer_LSTM1 = keras.layers.LSTM(units=layer_1, activation = 'relu' , return_sequences = False, input_shape=(X_train.shape[1], X_train.shape[2]))
layer_LSTMstack1 = keras.layers.LSTM(units=layer_2, activation = 'relu' , return_sequences = True, input_shape=(X_train.shape[1], X_train.shape[2]))
layer_LSTMstack2 = keras.layers.LSTM(units=layer_2, activation = 'relu' , return_sequences = True)
layer_LSTMstackend = keras.layers.LSTM(units=layer_2, activation = 'relu')
layer_conv1D1 = keras.layers.Conv1D(filters = 28, kernel_size= 3, activation = 'relu', input_shape=(X_train.shape[1], X_train.shape[2]))
layer_output = Dense(units = 1)

# Model architecture 
model.add(layer_conv1D1)
model.add(layer_dense1)
model.add(layer_output)

I get the following traceback (to which I have appended model.summary()):

 runfile('C:/GD/AI/Conv1D_1stock.py', wdir='C:/GD/AI')
Reloaded modules: util_prepa, util_model, util_DENSE
Time preparing data =  Time: 3.784785270690918
Traceback (most recent call last):

  File "C:\GD\AI\Conv1D_1stock.py", line 133, in <module>
    model, history = compile_train_model(model, loss, optimizer, X_train, y_train, epochs, batch_size, validation_split, verbose)

  File "C:\GD\AI\util_LSTM.py", line 89, in compile_train_model
    history = model.fit(X_train, y_train, epochs = epochs, batch_size = batch_size, validation_split = validation_split, verbose = verbose)

  File "C:\Users\Nav\anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py", line 709, in fit
    shuffle=shuffle)

  File "C:\Users\Nav\anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py", line 2692, in _standardize_user_data
    y, self._feed_loss_fns, feed_output_shapes)

  File "C:\Users\Nav\anaconda3\lib\site-packages\tensorflow\python\keras\engine\training_utils.py", line 549, in check_loss_and_target_compatibility
    ' while using as loss `' + loss_name + '`. '

ValueError: A target array with shape (1750, 1) was passed for an output of shape (None, 18, 1) while using as loss `mean_squared_error`. This loss expects targets to have the same shape as the output.


print(model.summary())
Model: "sequential_10"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv1d_7 (Conv1D)            (None, 18, 28)            2380      
_________________________________________________________________
dense_26 (Dense)             (None, 18, 128)           3712      
_________________________________________________________________
dense_28 (Dense)             (None, 18, 1)             129       
=================================================================
Total params: 6,221
Trainable params: 6,221
Non-trainable params: 0
_________________________________________________________________

What am I doing wrong? Could someone point me in the right direction? Many thanks in advance.

NB. As requested, here are the parameters I use (the code from the start; sorry it is long):

# ====  PART 0. Installing libraries ============
import numpy as np
import pandas as pd
import sqlite3 as sq
import time
from itertools import chain
import tensorflow as tf
from tensorflow import keras
import seaborn as sns
from pylab import rcParams
import matplotlib.pyplot as plt
from matplotlib import rc
from tensorflow.keras.layers import Bidirectional, Dropout, Activation, Dense, LSTM, Flatten, ConvLSTM2D
from tensorflow.python.keras.layers import CuDNNLSTM
from tensorflow.keras.models import Sequential
from sklearn.metrics import confusion_matrix
from util_prepa import *
from util_model import *
from util_LSTM import *

start_time = time.time()
rcParams['figure.figsize'] = 14, 8

### ====   PART 0.A Defining hyperparameters & parameters  =  INPUT REQUIRED ============
## SQL parameters
dbInput = 'Inputlist.db'           ### Database with input data
dbList = "TRlistInput"              ### table with list of datasets
ric = "ATOS"                        ### RIC code of the underlying item
dbOutput = 'saveLSTMoutput.db'       ### Database for saving output
saveX = "savX"                     ### Table for saving X output in dbOutput
saveY = "savY"                     ### Table for saving Y output in dbOutput

## Dataset parameters
horiz = 10                          ### time horizon of the prediction 
seq_length = 20                     ### number of days for enriching the LSTM
step = 1                            ### time lag within LSTM memory batch

tested_model = 'Conv1D'           ### 'LSTM' / 'STACKED' / 'ConvLSTM' / 'BAYES' / 'Conv1D' / 'Conv2D' / 'DEEP'

## Parameters LSTM & CNN
drop_rows = 50                      ### Number of unrelevant rows given technical indicators computation
lstmStart = 0                    ### initial value of X and Y matrices out of the total dataset
lstmSize = 2000                     ### length of the X & Y matrices starting from lstmStart index
proportionTrain = 0.875 

X_plot = 0                          ### 1 for plot close price  /  0 for no plot

### ====   PART 1.A Connecting to SQL DB and loading lists ============
dataX, dataY = get_model_data(dbInput, dbList, ric, horiz, drop_rows)
dataX = get_model_cleanXset(dataX, trigger)                             # Clean X matrix for insufficient data
Xs, ys = LSTM_create_dataset(dataX, dataY, seq_length, step)

(X_train, y_train), (X_test, y_test), (res_train, res_test) = LSTM_train_test_size(Xs, ys, lstmStart, lstmSize, proportionTrain)
(X_train, X_test), (train_mean, train_std) = get_model_scaleX(X_train, X_test)

### ====   PART 2.B Input & define Model  =  INPUT REQUIRED ============
## Model & Hyper-parameters
validation_split = 0.1
model = keras.Sequential()
dropout = 0.1
optimizer = 'adam'               ### Optimizer of the compiled model
learning = 0.001
loss = 'mean_squared_error'
verbose = 0                      ### 0 = hidden computation  //  1 = computation printed
batch_size = 32
epochs = 15
layer_1 = 128
layer_2 = 256

# available layers
layer_drop = keras.layers.Dropout(rate = dropout)
layer_dense1 = Dense(units= layer_1, activation = 'relu')
layer_dense2 = Dense(units= layer_2, activation = 'relu')
layer_LSTM1 = keras.layers.LSTM(units=layer_1, activation = 'relu' , return_sequences = False, input_shape=(X_train.shape[1], X_train.shape[2]))
layer_LSTMstack1 = keras.layers.LSTM(units=layer_2, activation = 'relu' , return_sequences = True, input_shape=(X_train.shape[1], X_train.shape[2]))
layer_LSTMstack2 = keras.layers.LSTM(units=layer_2, activation = 'relu' , return_sequences = True)
layer_LSTMstackend = keras.layers.LSTM(units=layer_2, activation = 'relu')
layer_conv1D1 = keras.layers.Conv1D(filters = 28, kernel_size= 3, activation = 'relu', input_shape=(X_train.shape[1], X_train.shape[2]))
layer_output = Dense(units = 1)

# Model architecture 
model.add(layer_conv1D1)
model.add(layer_dense1)
model.add(layer_output)
model_arch = 'LSTM1-128+D1-128+Out'

### ====   PART 4.B Compile and Train model + predict   ============
model, history = compile_train_model(model, loss, optimizer, X_train, y_train, epochs, batch_size, validation_split, verbose)
eval_train, eval_test, y_pred = model_predict(model, history, X_train, y_train, X_test, y_test, res_test)

plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()

【Comments】:

  • Did you post the line that fails? I would expect that error when you call model.fit?
  • Yes, please post code that reproduces the error.
  • Actually I don't know where it fails either. I can add the parameters I use as an appendix.
  • This is the error line when I run the program:

Tags: python tensorflow keras input conv-neural-network


【Answer 1】:

First, let's discuss the input shape. My interpretation of "an input matrix X of size (1750, 20, 28)" is that your batch size is 1750, i.e. 1D series of 20 time steps, each time step having 28 features.

When you add a convolutional layer, the batch size stays the same, the number of time steps usually stays roughly the same (it depends on how your filters fit over the time steps), and the number of features you output will equal the number of filters you used.
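As a sanity check (a sketch only, assuming the default `'valid'` padding and stride 1), the shapes and parameter counts in the summary above can be reproduced with plain arithmetic:

```python
seq_len, n_features = 20, 28   # input per sample: (20, 28)
filters, kernel_size = 28, 3   # the Conv1D layer from the question

# 'valid' Conv1D output length: seq_len - kernel_size + 1
conv_steps = seq_len - kernel_size + 1
print(conv_steps)  # 18 -> output shape (None, 18, 28)

# Conv1D params: kernel_size * in_features * filters, plus one bias per filter
conv_params = kernel_size * n_features * filters + filters
print(conv_params)  # 2380, matching the summary

# Dense(128) applied independently at each of the 18 time steps: 28*128 + 128
dense_params = filters * 128 + 128
print(dense_params)  # 3712, matching the summary
```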

So when you add a Dense layer after the convolution, you are adding a 2D Dense layer (the same weights are applied to the feature vector at every time step). To avoid this, you need to add keras.layers.Flatten() somewhere in your code; Flatten turns your 2D convolution output into 1D. To achieve what I believe you want, I would modify the code like this:

model.add(layer_conv1D1)
model.add(layer_dense1)
model.add(keras.layers.Flatten())
model.add(layer_output)
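With that fix (or with the Flatten placed directly after the convolution, as noted in the discussion below), the final Dense layer sees a flat vector, so the model output becomes (None, 1) and matches the (1750, 1) target. A quick shape/parameter trace of the corrected stack (pure Python, no TensorFlow needed; layer sizes are taken from the question):

```python
steps, channels = 18, 28   # Conv1D output from the question: (None, 18, 28)
units = 128                # Dense(layer_1)

# Dense(128) per time step -> (None, 18, 128), then Flatten -> (None, 18*128)
flat = steps * units
print(flat)  # 2304

# output Dense(1): one weight per flattened feature, plus a bias
out_params = flat * 1 + 1
print(out_params)  # 2305 -> final output shape (None, 1), matching y
```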

【Discussion】:

  • This is exactly the right answer (but with the Flatten before the Dense layer). Thanks a lot!
  • I did, but since I am new and have few reputation points, my upvote is not shown publicly. Sorry about that.
  • No need to apologise for following the rules!