[Posted]: 2019-08-29 10:10:12
[Question]:
I have a question about building a CNN with Keras.
The input data (adj) has shape (20000, 50, 50): 20000 is the number of samples, and 50 x 50 is two-dimensional data (like an image). The batch size is 100. (There are actually two inputs: adj = (20000, 50, 50) and features = (20000, 50, 52).)
The relevant part is as follows:
from keras.layers import Conv2D, MaxPool2D, Flatten
adj_visible1 = Input(shape=(50, 50, 1))
conv11 = Conv2D(16, kernel_size=5, activation='relu')(adj_visible1)
pool11 = MaxPool2D(pool_size=(2, 2))(conv11)
conv12 = Conv2D(8, kernel_size=5, activation='relu')(pool11)
pool12 = MaxPool2D(pool_size=(2, 2))(conv12)
flat1 = Flatten()(pool12)
But I get the following error message:
ValueError: Input 0 is incompatible with layer conv2d_1: expected ndim=4, found ndim=3
I found similar cases that print the same message; however, in most of them the cause was that the Input shape was given as (50, 50) instead of (50, 50, 1), i.e. the channel dimension was missing.
In my case, I already use shape (50, 50, 1) rather than (50, 50). Yet it still prints the same error message.
What should I do?
The full code is attached below:
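(For context, the error means Conv2D expects 4-D input of shape (batch, height, width, channels), and it is the *data* arrays, not the Input layer, that are still 3-D. A minimal sketch of adding the missing channel axis with NumPy, using stand-in arrays rather than the real data:)

```python
import numpy as np

# Conv2D consumes 4-D tensors: (batch, height, width, channels).
# An array of shape (samples, 50, 50) lacks the channel axis, so the
# ndim=4 check fails even when Input(shape=(50, 50, 1)) is declared.
adj = np.zeros((200, 50, 50), dtype=np.float32)  # stand-in for the real adj data

# Append a length-1 channel axis without copying the data layout.
adj_4d = np.expand_dims(adj, axis=-1)

print(adj_4d.shape)  # → (200, 50, 50, 1)
```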
import numpy as np
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in newer scikit-learn
from keras.models import Sequential
from keras.layers.core import Dense, Dropout
from keras.optimizers import RMSprop, Adam, Adadelta
from keras.utils import plot_model
from keras.models import Model
from keras.layers import Input, Flatten, MaxPool2D
from keras.layers.convolutional import Conv2D
from keras.layers.merge import concatenate
from keras.callbacks import CSVLogger
#Settings
epoch = 100
batch_size = 100
test_size = 10000
# Load data
adj = np.load('adj.npy') #(20000, 50, 50)
features = np.load('features.npy') #(20000, 50, 52)
Prop = np.load('Properties.npy') #(20000, 1)
database = np.dstack((adj, features)) #(20000, 50, 102)
#Train/Test split
X_tr, X_te, Y_tr, Y_te = train_test_split(database, Prop, test_size=test_size)
X_tr_adj, X_tr_features = X_tr[:, :, 0:50], X_tr[:, :, 50:]
X_te_adj, X_te_features = X_te[:, :, 0:50], X_te[:, :, 50:]
def create_model():
    # first input model
    adj_visible1 = Input(shape=(50, 50, 1))
    conv11 = Conv2D(16, kernel_size=5, activation='relu')(adj_visible1)
    pool11 = MaxPool2D(pool_size=(2, 2))(conv11)
    conv12 = Conv2D(8, kernel_size=5, activation='relu')(pool11)
    pool12 = MaxPool2D(pool_size=(2, 2))(conv12)
    flat1 = Flatten()(pool12)
    # second input model
    features_visible2 = Input(shape=(50, 52, 1))
    conv21 = Conv2D(16, kernel_size=5, activation='relu')(features_visible2)
    pool21 = MaxPool2D(pool_size=(2, 2))(conv21)
    conv22 = Conv2D(8, kernel_size=5, activation='relu')(pool21)
    pool22 = MaxPool2D(pool_size=(2, 2))(conv22)
    flat2 = Flatten()(pool22)
    # merge input models
    merge = concatenate([flat1, flat2])
    # interpretation model
    hidden1 = Dense(128, activation='relu')(merge)
    hidden2 = Dense(32, activation='relu')(hidden1)
    output = Dense(1, activation='linear')(hidden2)
    model = Model(inputs=[adj_visible1, features_visible2], outputs=output)
    model.compile(loss='mean_squared_error', optimizer=Adam())
    # summarize layers
    print(model.summary())
    return model
def train_model(batch_size=100, nb_epoch=20):
    model = create_model()
    csv_logger = CSVLogger('CNN trial.csv')
    history = model.fit([X_tr_adj, X_tr_features], Y_tr,
                        batch_size=batch_size,
                        epochs=nb_epoch,
                        verbose=1,
                        validation_data=([X_te_adj, X_te_features], Y_te),
                        callbacks=[csv_logger])
    # the two inputs must be passed to predict as a single list
    predictions_valid = model.predict([X_te_adj, X_te_features], batch_size=batch_size, verbose=1)
    return model
train_model(nb_epoch = epoch)
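(A sketch of the likely fix for the pipeline above, not the original code: since train_test_split returns 3-D slices, each split array needs the trailing channel axis added before model.fit. Stand-in arrays are used here for illustration.)

```python
import numpy as np

# Stand-ins for the real train/test splits produced by train_test_split.
X_tr_adj = np.random.rand(100, 50, 50).astype(np.float32)
X_tr_features = np.random.rand(100, 50, 52).astype(np.float32)

# reshape(-1, H, W, 1) appends the channel axis Conv2D requires,
# matching Input(shape=(50, 50, 1)) and Input(shape=(50, 52, 1)).
X_tr_adj = X_tr_adj.reshape(-1, 50, 50, 1)
X_tr_features = X_tr_features.reshape(-1, 50, 52, 1)

print(X_tr_adj.shape, X_tr_features.shape)  # → (100, 50, 50, 1) (100, 50, 52, 1)
```

The same reshape would be applied to X_te_adj and X_te_features before they are passed to validation_data and predict.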
I wrote the code with reference to the following material: https://machinelearningmastery.com/keras-functional-api-deep-learning/
[Discussion]:
Tags: python-3.x keras deep-learning conv-neural-network dimension