【Posted】: 2018-12-30 23:12:31
【Problem Description】:
I'm training on a binary classification problem, and I've had fairly good success passing my data through a pretrained embedding, then through several CNNs in parallel, pooling the results, and using a dense layer to predict the class. But when I instead stack an RNN after the CNNs, training fails completely. The code follows (apologies, this is a long post).
Here is the working, CNN-only model. My inputs are vectors of length 100.
inputs = L.Input(shape=(100,))
embedding = L.Embedding(input_dim=weights.shape[0],
                        output_dim=weights.shape[1],
                        input_length=100,
                        weights=[weights],
                        trainable=False)(inputs)
dropout = L.Dropout(0.5)(embedding)  # Dropout layer appears in the summary below; the rate is not shown in the post
conv3 = L.Conv1D(m, kernel_size=3)(dropout)
conv4 = L.Conv1D(m, kernel_size=4)(dropout)
conv5 = L.Conv1D(m, kernel_size=5)(dropout)
maxpool3 = L.MaxPool1D(pool_size=(100-3+1,), strides=(1,))(conv3)
maxpool4 = L.MaxPool1D(pool_size=(100-4+1,), strides=(1,))(conv4)
maxpool5 = L.MaxPool1D(pool_size=(100-5+1,), strides=(1,))(conv5)
concatenated_tensor = L.Concatenate(axis=1)([maxpool3, maxpool4, maxpool5])
flattened = L.Flatten()(concatenated_tensor)
output = L.Dense(units=1, activation='sigmoid')(flattened)
Here is the summary:
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
input_25 (InputLayer) (None, 100) 0
____________________________________________________________________________________________________
embedding_25 (Embedding) (None, 100, 50) 451300 input_25[0][0]
____________________________________________________________________________________________________
dropout_25 (Dropout) (None, 100, 50) 0 embedding_25[0][0]
____________________________________________________________________________________________________
conv1d_73 (Conv1D) (None, 98, 100) 15100 dropout_25[0][0]
____________________________________________________________________________________________________
conv1d_74 (Conv1D) (None, 97, 100) 20100 dropout_25[0][0]
____________________________________________________________________________________________________
conv1d_75 (Conv1D) (None, 96, 100) 25100 dropout_25[0][0]
____________________________________________________________________________________________________
max_pooling1d_73 (MaxPooling1D) (None, 1, 100) 0 conv1d_73[0][0]
____________________________________________________________________________________________________
max_pooling1d_74 (MaxPooling1D) (None, 1, 100) 0 conv1d_74[0][0]
____________________________________________________________________________________________________
max_pooling1d_75 (MaxPooling1D) (None, 1, 100) 0 conv1d_75[0][0]
____________________________________________________________________________________________________
concatenate_25 (Concatenate) (None, 3, 100) 0 max_pooling1d_73[0][0]
max_pooling1d_74[0][0]
max_pooling1d_75[0][0]
____________________________________________________________________________________________________
flatten_25 (Flatten) (None, 300) 0 concatenate_25[0][0]
____________________________________________________________________________________________________
dense_47 (Dense) (None, 1) 301 flatten_25[0][0]
====================================================================================================
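The parameter counts printed in this summary can be sanity-checked by hand. A quick sketch of the arithmetic, assuming the standard Keras formulas (the vocabulary size 9026 is inferred from 451300 = 9026 * 50):

```python
# Parameter-count arithmetic for the layers in the summary above.

vocab_size, embed_dim = 9026, 50          # inferred: 9026 * 50 == 451300
assert vocab_size * embed_dim == 451300   # Embedding (no bias, frozen weights)

filters = 100                             # the `m` in the code above

def conv1d_params(kernel_size, in_channels, filters):
    # weights (kernel_size * in_channels per filter) + one bias per filter
    return (kernel_size * in_channels + 1) * filters

assert conv1d_params(3, embed_dim, filters) == 15100
assert conv1d_params(4, embed_dim, filters) == 20100
assert conv1d_params(5, embed_dim, filters) == 25100

# Dense(1) on the flattened 300-dim vector: 300 weights + 1 bias
assert 300 * 1 + 1 == 301
```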
As I said above, this works quite well, reaching good accuracy in just 3-4 epochs. My thinking, though, was that the CNNs pick up local patterns, and if I also want to model how those patterns relate to one another over longer distances within a given input vector, I should follow the convolutions with some flavor of RNN. So I tried shrinking the pool_size of the MaxPooling1D layers after the convolutions, dropping the Flatten, and feeding the Concatenate layer into an RNN instead. For example:
maxpool3 = L.MaxPool1D(pool_size=(50,), strides=(1,))(conv3)
maxpool4 = L.MaxPool1D(pool_size=(50,), strides=(1,))(conv4)
maxpool5 = L.MaxPool1D(pool_size=(49,), strides=(1,))(conv5)
concatenated_tensor = L.Concatenate(axis=1)([maxpool3,maxpool4,maxpool5])
rnn=L.SimpleRNN(75)(concatenated_tensor)
output = L.Dense(units=1, activation='sigmoid')(rnn)
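The shapes in both versions follow from the usual "valid" length formula, out = (in - window) // stride + 1, for both Conv1D and MaxPool1D. A small sketch of that arithmetic:

```python
def valid_len(n, window, stride=1):
    # output length of a 'valid' Conv1D or MaxPool1D over a length-n axis
    return (n - window) // stride + 1

seq = 100
c3, c4, c5 = valid_len(seq, 3), valid_len(seq, 4), valid_len(seq, 5)
assert (c3, c4, c5) == (98, 97, 96)   # matches the conv shapes in both summaries

# Global pooling (pool_size = full remaining length) collapses each feature
# map to length 1, which is what the original CNN-only model did:
assert valid_len(c3, 98) == 1

# Partial pooling leaves a sequence behind, e.g. pool_size=50 on length 98:
assert valid_len(c3, 50) == 49
```

Note that the pool sizes pasted in the snippet (50, 50, 49) would give pooled lengths 49, 48, 48, which concatenate to 145 rather than the 149 shown in the summary below, so the run that produced that summary presumably used slightly different pool sizes.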
Now the summary becomes:
max_pooling1d_95 (MaxPooling1D) (None, 50, 100) 0 conv1d_97[0][0]
____________________________________________________________________________________________________
max_pooling1d_96 (MaxPooling1D) (None, 50, 100) 0 conv1d_98[0][0]
____________________________________________________________________________________________________
max_pooling1d_97 (MaxPooling1D) (None, 49, 100) 0 conv1d_99[0][0]
____________________________________________________________________________________________________
concatenate_32 (Concatenate) (None, 149, 100) 0 max_pooling1d_95[0][0]
max_pooling1d_96[0][0]
max_pooling1d_97[0][0]
____________________________________________________________________________________________________
simple_rnn_5 (SimpleRNN) (None, 75) 13200 concatenate_32[0][0]
____________________________________________________________________________________________________
dense_51 (Dense) (None, 1) 76 simple_rnn_5[0][0]
====================================================================================================
When I train this model, the predictions are all exactly the same value: the ratio of class[1] to class[0]. I've read papers where people use this scheme successfully, so clearly I'm doing something wrong, and I'd bet it's an embarrassingly stupid mistake. Would anyone be willing to help diagnose it?
【Discussion】:
-
Have you tried a bidirectional LSTM without the convolutions? Generally, if you use an LSTM, you don't need those convolutional layers. In any case, I think the problem here is that your recurrent layer is using the length-149 axis as the sequence axis; is that what you intend? RNN input: a 3D tensor with shape (batch_size, timesteps, input_dim).
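To the commenter's point: SimpleRNN consumes a (batch_size, timesteps, input_dim) tensor and runs its recurrence over the timesteps axis, so after the axis=1 concatenation the 149 pooled positions are treated as 149 time steps of 100 features each. A minimal pure-Python sketch of that recurrence (random toy weights, not the trained model), plus a check of the 13200 parameter count from the summary:

```python
import math
import random

units, input_dim, timesteps = 75, 100, 149

# SimpleRNN params: input kernel (input_dim*units) + recurrent kernel
# (units*units) + bias (units); matches the 13200 in the summary above.
assert input_dim * units + units * units + units == 13200

random.seed(0)
W = [[random.gauss(0, 0.01) for _ in range(units)] for _ in range(input_dim)]
U = [[random.gauss(0, 0.01) for _ in range(units)] for _ in range(units)]
b = [0.0] * units
x = [[random.gauss(0, 1) for _ in range(input_dim)] for _ in range(timesteps)]

# h_t = tanh(x_t @ W + h_{t-1} @ U + b); the loop runs over the *timesteps* axis
h = [0.0] * units
for x_t in x:
    h = [math.tanh(sum(x_t[i] * W[i][j] for i in range(input_dim))
                   + sum(h[k] * U[k][j] for k in range(units))
                   + b[j])
         for j in range(units)]

assert len(h) == units  # SimpleRNN(75) returns only the final hidden state
```

So whatever ends up along axis 1 of the concatenated tensor is what the RNN steps through in order, which is worth keeping in mind when deciding whether to concatenate the pooled maps along the time axis or the feature axis.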
-
I've tried several flavors of bidirectional RNNs and, believe it or not, they performed much worse than this CNN scheme. Thanks for the tip about the RNN axis; I'll definitely look into it.
Tags: python tensorflow keras conv-neural-network rnn