Posted: 2020-09-17 09:13:10
Problem description:
I am using a CNN-LSTM model on TensorFlow 2.0 + Keras to perform sequence classification. My model is defined as follows:
inp = Input(input_shape)
rshp = Reshape((input_shape[0]*input_shape[1], 1), input_shape=input_shape)(inp)
cnn1 = Conv1D(100, 9, activation='relu')(rshp)
cnn2 = Conv1D(100, 9, activation='relu')(cnn1)
mp1 = MaxPooling1D((3,))(cnn2)
cnn3 = Conv1D(50, 3, activation='relu')(mp1)
cnn4 = Conv1D(50, 3, activation='relu')(cnn3)
gap1 = AveragePooling1D((3,))(cnn4)
dropout1 = Dropout(rate=dropout[0])(gap1)
flt1 = Flatten()(dropout1)
rshp2 = Reshape((input_shape[0], -1), input_shape=flt1.shape)(flt1)
bilstm1 = Bidirectional(LSTM(240,
return_sequences=True,
recurrent_dropout=dropout[1]),
merge_mode=merge)(rshp2)
dense1 = TimeDistributed(Dense(30, activation='relu'))(bilstm1)
dropout2 = Dropout(rate=dropout[2])(dense1)
prediction = TimeDistributed(Dense(1, activation='sigmoid'))(dropout2)
model = Model(inp, prediction, name="CNN-bLSTM_per_segment")
print(model.summary(line_length=75))
where input_shape = (60, 60). However, this definition raises the following error:
TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'
At first, I thought it was because the rshp2 layer could not reshape the flt1 output to (60, X). So I added print statements before the Bidirectional(LSTM) layer:
print('reshape 1: ', rshp.shape)
print('cnn1: ', cnn1.shape)
print('cnn2: ', cnn2.shape)
print('mp1: ', mp1.shape)
print('cnn3: ', cnn3.shape)
print('cnn4: ', cnn4.shape)
print('gap1: ', gap1.shape)
print('flatten 1: ', flt1.shape)
print('reshape 2: ', rshp2.shape)
The shapes are:
reshape 1: (None, 3600, 1)
cnn1: (None, 3592, 100)
cnn2: (None, 3584, 100)
mp1: (None, 1194, 100)
cnn3: (None, 1192, 50)
cnn4: (None, 1190, 50)
gap1: (None, 396, 50)
flatten 1: (None, 19800)
reshape 2: (None, 60, None)
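The printed shapes can be verified with a little arithmetic (a standalone sketch of the layer math only, no TensorFlow needed): a Conv1D with 'valid' padding and stride 1 shortens the sequence by kernel_size - 1, and the pooling layers divide the length by the pool size, rounding down.

```python
def conv1d_len(n, kernel):        # Conv1D, 'valid' padding, stride 1
    return n - kernel + 1

def pool1d_len(n, pool):          # pooling, default stride == pool size
    return n // pool

n = 60 * 60                       # Reshape((3600, 1)) -> 3600 time steps
n = conv1d_len(n, 9)              # cnn1 -> 3592
n = conv1d_len(n, 9)              # cnn2 -> 3584
n = pool1d_len(n, 3)              # mp1  -> 1194
n = conv1d_len(n, 3)              # cnn3 -> 1192
n = conv1d_len(n, 3)              # cnn4 -> 1190
n = pool1d_len(n, 3)              # gap1 -> 396
flat = n * 50                     # Flatten: 396 steps * 50 filters = 19800
print(flat, flat // 60)           # 19800 330
```

This matches the printed shapes exactly, so the flattened size really is 19800 = 60 * 330.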
Looking at the flt1 layer, its output shape is (None, 19800), which should be reshapeable to (60, 330) per sample, but for some reason the (60, -1) in the rshp2 layer does not work as expected, as shown by reshape 2: (None, 60, None). When I reshape to (60, 330) explicitly, it works fine. Does anyone know why (-1) does not work?
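For reference, resolving a -1 only requires the total element count and the product of the known target dimensions. The helper below is a hypothetical illustration of that arithmetic, not the actual Keras Reshape implementation; one plausible reading of the symptom above is that at graph-construction time Keras does not fill in the -1 in the static shape, leaving a None dimension that the downstream layers cannot use, whereas computing the dimension yourself (19800 // 60 == 330) keeps the static shape fully defined.

```python
def resolve_reshape(total, target):
    """Resolve a single -1 in a target shape, given the known element count.
    (Illustrative helper -- not how Keras Reshape is implemented.)"""
    known = 1
    for d in target:
        if d != -1:
            known *= d
    return tuple(total // known if d == -1 else d for d in target)

print(resolve_reshape(19800, (60, -1)))   # (60, 330)
```

So passing the explicitly computed value, e.g. Reshape((60, flt1.shape[-1] // 60)), sidesteps the None dimension entirely.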
Tags: python tensorflow keras tensorflow2.0