【Question Title】: Keras model concat: Attribute and Value error
【Posted】: 2021-09-14 15:27:42
【Question Description】:

This is a Keras model I built following the paper by Liu, Gibson, et al. 2017 (https://arxiv.org/abs/1708.09022), as shown in Figure 1 of the paper.

I have 3 questions:

  1. I am not sure whether I am using concatenation correctly according to the paper.
  2. I get `AttributeError: 'KerasTensor' object has no attribute 'add'` on the `model4.add(Flatten())` line. This error did not occur before.
  3. Previously, the only error was `ValueError: A Concatenate layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 310, 1, 16), (None, 310, 1, 32), (None, 310, 1, 64)]`, and I don't know how to handle that either.
model1= Sequential()
model2= Sequential()
model3= Sequential()
model4= Sequential()

input_sh = (619,2,1)

model1.add(Convolution1D(filters=16, kernel_size=21, padding='same', activation='LeakyReLU', input_shape=input_sh))
model1.add(MaxPooling2D(pool_size=(2,2), padding='same')) 
model1.add(BatchNormalization())
model1.summary()

model2.add(Convolution1D(filters=32, kernel_size=11, padding='same', activation='LeakyReLU', input_shape= input_sh))
model2.add(MaxPooling2D(pool_size=(2,2), padding='same'))
model2.add(BatchNormalization())
model2.summary()

model3.add(Convolution1D(filters=64, kernel_size=5, padding='same', activation='LeakyReLU', input_shape= input_sh))
model3.add(MaxPooling2D(pool_size=(2,2), padding='same'))
model3.add(BatchNormalization())
model3.summary()

model4 = concatenate([model1.output, model2.output, model3.output], axis= -1)

model4.add(Flatten()) # Line with error
model4.add(Dense(2048, activation='tanh'))
model4.add(Dropout(.5))
model4.add(Dense(len(dic), activation="softmax")) #len(dic) = 19
model4.summary()

The output is as follows:

Model: "sequential_59"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv1d_45 (Conv1D)           (None, 619, 2, 16)        352       
_________________________________________________________________
max_pooling2d_45 (MaxPooling (None, 310, 1, 16)        0         
_________________________________________________________________
batch_normalization_45 (Batc (None, 310, 1, 16)        64        
=================================================================
Total params: 416
Trainable params: 384
Non-trainable params: 32
_________________________________________________________________
Model: "sequential_60"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv1d_46 (Conv1D)           (None, 619, 2, 32)        384       
_________________________________________________________________
max_pooling2d_46 (MaxPooling (None, 310, 1, 32)        0         
_________________________________________________________________
batch_normalization_46 (Batc (None, 310, 1, 32)        128       
=================================================================
Total params: 512
Trainable params: 448
Non-trainable params: 64
_________________________________________________________________
Model: "sequential_61"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv1d_47 (Conv1D)           (None, 619, 2, 64)        384       
_________________________________________________________________
max_pooling2d_47 (MaxPooling (None, 310, 1, 64)        0         
_________________________________________________________________
batch_normalization_47 (Batc (None, 310, 1, 64)        256       
=================================================================
Total params: 640
Trainable params: 512
Non-trainable params: 128
_________________________________________________________________
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-25-bf7ad914aa4e> in <module>()
     44 model4 = concatenate([model1.output, model2.output, model3.output], axis= -1)
     45 
---> 46 model4.add(Flatten())
     47 model4.add(Dense(2048, activation='tanh'))
     48 model4.add(Dropout(.5))
 
AttributeError: 'KerasTensor' object has no attribute 'add'

【Question Discussion】:

    Tags: python tensorflow keras deep-learning concatenation


    【Solution 1】:

    You can use the Functional() API to solve your problem (I have not read the paper, but here is how you can combine the models and get the final output).

    For simplicity I used the 'relu' activation (make sure you use the Keras bundled with TensorFlow, i.e. `tensorflow.keras`).

    Below is code that should work:

    import tensorflow as tf
    from tensorflow.keras import *
    from tensorflow.keras.layers import *
    
    model1= Sequential()
    model2= Sequential()
    model3= Sequential()
    
    input_sh = (619,2,1)
    
    model1.add(Convolution1D(filters=16, kernel_size=21, padding='same', activation='relu', input_shape=input_sh))
    model1.add(MaxPooling2D(pool_size=(2,2), padding='same')) 
    model1.add(BatchNormalization())
    model1.summary()
    
    model2.add(Convolution1D(filters=32, kernel_size=11, padding='same', activation='relu', input_shape= input_sh))
    model2.add(MaxPooling2D(pool_size=(2,2), padding='same'))
    model2.add(BatchNormalization())
    model2.summary()
    
    model3.add(Convolution1D(filters=64, kernel_size=5, padding='same', activation='relu', input_shape= input_sh))
    model3.add(MaxPooling2D(pool_size=(2,2), padding='same'))
    model3.add(BatchNormalization())
    model3.summary()
    
    concatenated = concatenate([model1.output, model2.output, model3.output], axis=-1)
    x = Dense(64, activation='relu')(concatenated)
    x = Flatten()(x)
    x = Dropout(.5)(x)
    x = Dense(19, activation="softmax")(x)
    final_model = Model(inputs=[model1.input,model2.input,model3.input],outputs=x)
    final_model.summary()
    
    
    
    
    
    Model: "functional_3"
    __________________________________________________________________________________________________
    Layer (type)                    Output Shape         Param #     Connected to                     
    ==================================================================================================
    conv1d_15_input (InputLayer)    [(None, 619, 2, 1)]  0                                            
    __________________________________________________________________________________________________
    conv1d_16_input (InputLayer)    [(None, 619, 2, 1)]  0                                            
    __________________________________________________________________________________________________
    conv1d_17_input (InputLayer)    [(None, 619, 2, 1)]  0                                            
    __________________________________________________________________________________________________
    conv1d_15 (Conv1D)              (None, 619, 2, 16)   352         conv1d_15_input[0][0]            
    __________________________________________________________________________________________________
    conv1d_16 (Conv1D)              (None, 619, 2, 32)   384         conv1d_16_input[0][0]            
    __________________________________________________________________________________________________
    conv1d_17 (Conv1D)              (None, 619, 2, 64)   384         conv1d_17_input[0][0]            
    __________________________________________________________________________________________________
    max_pooling2d_15 (MaxPooling2D) (None, 310, 1, 16)   0           conv1d_15[0][0]                  
    __________________________________________________________________________________________________
    max_pooling2d_16 (MaxPooling2D) (None, 310, 1, 32)   0           conv1d_16[0][0]                  
    __________________________________________________________________________________________________
    max_pooling2d_17 (MaxPooling2D) (None, 310, 1, 64)   0           conv1d_17[0][0]                  
    __________________________________________________________________________________________________
    batch_normalization_15 (BatchNo (None, 310, 1, 16)   64          max_pooling2d_15[0][0]           
    __________________________________________________________________________________________________
    batch_normalization_16 (BatchNo (None, 310, 1, 32)   128         max_pooling2d_16[0][0]           
    __________________________________________________________________________________________________
    batch_normalization_17 (BatchNo (None, 310, 1, 64)   256         max_pooling2d_17[0][0]           
    __________________________________________________________________________________________________
    concatenate_5 (Concatenate)     (None, 310, 1, 112)  0           batch_normalization_15[0][0]     
                                                                     batch_normalization_16[0][0]     
                                                                     batch_normalization_17[0][0]     
    __________________________________________________________________________________________________
    dense_5 (Dense)                 (None, 310, 1, 64)   7232        concatenate_5[0][0]              
    __________________________________________________________________________________________________
    flatten_3 (Flatten)             (None, 19840)        0           dense_5[0][0]                    
    __________________________________________________________________________________________________
    dropout_3 (Dropout)             (None, 19840)        0           flatten_3[0][0]                  
    __________________________________________________________________________________________________
    dense_6 (Dense)                 (None, 19)           376979      dropout_3[0][0]                  
    ==================================================================================================
    Total params: 385,779
    Trainable params: 385,555
    Non-trainable params: 224
    

    【Discussion】:

    • Thanks for the reply. I was able to create the final model; however, when I run final_model.fit(x, y, validation_data=(tSX, tAX), epochs=50, batch_size=10, verbose=2) it shows ValueError: Layer model_1 expects 3 input(s), but it received 1 input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=(None, 619, 2, 1) dtype=float32>]
    • OK, that is a completely different problem. Stack Overflow rules do recommend asking each question separately rather than bundling several under "one" question. If my answer solved your problem, it is customary for the asker to accept it. As for the remaining issue, please go ahead and ask a new question.
    • You're welcome. I mentioned this deliberately because reviewers tend to close questions that are too broad or contain too many sub-questions, and I want newcomers to be aware of that so they can avoid having their questions deleted or closed.
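    The ValueError in the first comment arises because the answer's model was built with three separate Input layers, so fit() expects a list of three arrays; passing the same array three times (final_model.fit([x, x, x], y, ...)) would work, but a cleaner fix is to share a single Input across the three branches. Below is a minimal sketch of that variant (my own, not from the answer above; the 'relu' activation, filter counts, and head layers follow the answer's code):

```python
import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import (Input, Convolution1D, MaxPooling2D,
                                     BatchNormalization, Dense, Dropout,
                                     Flatten, concatenate)

# One shared input instead of three separate Sequential models.
inp = Input(shape=(619, 2, 1))

def branch(filters, kernel_size):
    # Each branch mirrors one of the three Sequential models above.
    x = Convolution1D(filters=filters, kernel_size=kernel_size,
                      padding='same', activation='relu')(inp)
    x = MaxPooling2D(pool_size=(2, 2), padding='same')(x)
    return BatchNormalization()(x)

merged = concatenate([branch(16, 21), branch(32, 11), branch(64, 5)], axis=-1)
x = Dense(64, activation='relu')(merged)
x = Flatten()(x)
x = Dropout(.5)(x)
out = Dense(19, activation='softmax')(x)

single_input_model = Model(inputs=inp, outputs=out)
# single_input_model.fit(x_train, y_train, ...) now accepts ONE input array,
# because the model exposes a single Input layer.
```

    With this layout, fit() can be called with a single tensor of shape (None, 619, 2, 1), matching how the data is actually fed.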