【Title】: How to properly set input_shape in a keras neural network?
【Posted】: 2019-12-29 14:38:56
【Question】:

Basically, I have 3844 matrices of shape 15x2, each assigned a binary target. So:

X_train shape is (3844, 15, 2)
y_train shape is (3844, 1)

I have the following neural network:

from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(16, activation = 'relu', input_shape = (15, 2)))
model.add(Dense(16, activation = 'relu'))
model.add(Dense(1, activation = 'sigmoid'))
model.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
epochs_hist = model.fit(X_train, y_train, epochs = 1000, batch_size = 4)

The model summary is:

model.summary()

Layer (type)                 Output Shape              Param #   
=================================================================
dense_1 (Dense)              (None, 15, 16)            48        
_________________________________________________________________
dense_2 (Dense)              (None, 15, 16)            272       
_________________________________________________________________
dense_3 (Dense)              (None, 15, 1)             17        
=================================================================
Total params: 337
Trainable params: 337
Non-trainable params: 0
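For reference, the parameter counts in the summary follow the Dense rule params = (input_dim + 1) × units; with a 3-D input only the last axis (size 2) acts as input_dim, which is also why the output shapes stay 3-D:

```python
# Dense parameter count: (input_dim + 1) * units (kernel weights + bias).
# With a 3-D input (None, 15, 2), only the last axis (2) is input_dim.
dense_1 = (2 + 1) * 16   # 48
dense_2 = (16 + 1) * 16  # 272
dense_3 = (16 + 1) * 1   # 17
print(dense_1, dense_2, dense_3, dense_1 + dense_2 + dense_3)  # 48 272 17 337
```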

The error produced is: ValueError: Error when checking target: expected dense_3 to have 3 dimensions, but got array with shape (3844, 1).

What am I doing wrong?
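The shapes behind the error can be sketched with plain numpy (random placeholder weights, not the actual layer): a Dense layer applied to a 3-D input transforms only the last axis, so every layer keeps the (batch, 15, …) shape and the final output cannot match a (batch, 1) target.

```python
import numpy as np

# A Dense(units) layer on input (batch, 15, 2) multiplies along the
# last axis only: (batch, 15, 2) @ (2, units) -> (batch, 15, units).
batch, timesteps, features, units = 4, 15, 2, 16
x = np.random.normal(size=(batch, timesteps, features))
w = np.random.normal(size=(features, units))  # placeholder kernel
b = np.zeros(units)                           # placeholder bias

out = x @ w + b
print(out.shape)  # (4, 15, 16) -- still 3-D, hence the mismatch with (3844, 1)
```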

Edit (full code):

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical

window = 15
ret = 0.06
opt = 'adam'

dataset = load_data()

X = []
y = []
for i in range(len(dataset) - window):
    aux = dataset[i+1: i+window+1, 0:2]
    X.append(dataset[i+1: i+window+1, 0:2])
    if (aux.max()/dataset[i, 0] - 1 >= ret) and (dataset[i, 0]/aux.min() - 1 < ret):
        y.append(1)
    else:
        y.append(0)

X, y = np.array(X), np.array(y)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 6)

scaler = MinMaxScaler(feature_range = (0, 1))
X_train[:, :, 0] = scaler.fit_transform(X_train[:, :, 0])
X_test[:, :, 0] = scaler.transform(X_test[:, :, 0])

y_train = to_categorical(y_train)
y_test = to_categorical(y_test)

model = Sequential()
model.add(Dense(16, input_shape = (15, 2), activation = 'relu'))
model.add(Dense(16, activation = 'relu'))
model.add(Dense(1, activation = 'sigmoid'))
model.summary()
model.compile(optimizer = opt, loss = 'binary_crossentropy', metrics = ['accuracy'])
epochs_hist = model.fit(X_train, y_train, epochs = 1000, batch_size = 4, validation_data = (X_test, y_test))
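One more detail worth noting in the edited code: to_categorical widens the binary labels to two one-hot columns (sketched below with a numpy equivalent), which also cannot match a single sigmoid output; with Dense(1, activation='sigmoid') the raw 0/1 labels should be kept instead.

```python
import numpy as np

# numpy equivalent of what keras.utils.to_categorical does to binary labels:
y = np.array([1, 0, 0, 1])
one_hot = np.eye(2)[y]
print(one_hot.shape)  # (4, 2) -- two columns, while the final layer is Dense(1)
```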

Example data:

X_train:

0.298146    3.8201e+07
0.287518    2.49463e+07
0.282136    3.17904e+07
0.269095    3.90852e+07
0.262679    6.39347e+07
0.252278    4.25771e+07
0.242393    4.05355e+07
0.246326    3.20741e+07
0.247361    2.98584e+07
0.252122    2.64514e+07
0.247775    3.39687e+07

y_train:

1   0
0   1
0   1
0   1
0   1
1   0
0   1
0   1
0   1
0   1

【Comments】:

  • Are you working with images?
  • Use np.expand_dims() on the y_train array.
  • No, I'm not working with images.

Tags: python keras keras-layer


【Solution 1】:

Since you are passing matrices as input, you need to flatten them so they are compatible with the Dense layers. I've modified your code below:

Sample data

import numpy as np

X_train = np.random.normal(size=(3844, 15, 2))
y_train = np.random.binomial(n=1, p=0.5, size=(3844, 1))

Your code

from keras.models import Sequential
from keras.layers import Dense, Flatten

model = Sequential()
model.add(Flatten(input_shape=(15, 2)))
model.add(Dense(16, activation = 'relu'))
model.add(Dense(16, activation = 'relu'))
model.add(Dense(1, activation = 'sigmoid'))
model.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
epochs_hist = model.fit(X_train, y_train, epochs = 3, batch_size = 4)

Sample output

Epoch 1/3
3844/3844 [==============================] - 1s 281us/step - loss: 0.4537 - acc: 0.7765
Epoch 2/3
3844/3844 [==============================] - 1s 282us/step - loss: 0.4483 - acc: 0.7854
Epoch 3/3
3844/3844 [==============================] - 1s 281us/step - loss: 0.4496 - acc: 0.7838

Hope this helps!
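For intuition, what Flatten does here can be sketched with a plain numpy reshape: each (15, 2) matrix becomes a length-30 vector, so the subsequent Dense layers produce 2-D outputs and the final shape (batch, 1) matches y_train.

```python
import numpy as np

# Flatten collapses each (15, 2) matrix into a length-30 vector.
X = np.random.normal(size=(3844, 15, 2))
flat = X.reshape(X.shape[0], -1)  # same reshaping Flatten performs
print(flat.shape)  # (3844, 30)
```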

【Comments】:

  • I tried this before and it ran, but my network wasn't learning; the loss kept the same value for all 1000 epochs.
  • Sure. I've edited the question with the code and sample data. Thanks for your help!