[Question Title]: ValueError: Negative dimension size caused by subtracting 22 from 1 for 'conv3d_3/convolution' (op: 'Conv3D')
[Posted]: 2025-12-24 10:05:10
[Question]:

I get this error message when declaring the input layer in Keras.

    Traceback (most recent call last):
      File "E:/physionet/CNN_onemodel.py", line 150, in createModel
        model.add(Conv3D(16, (22, 5, 5), strides=(1, 2, 2), padding='valid', activation='relu', data_format="channels_last", input_shape=input_shape))
    ValueError: Negative dimension size caused by subtracting 22 from 1 for 'conv3d_3/convolution' (op: 'Conv3D') with input shapes: [?,1,22,5,3844], [22,5,5,3844,16].

Any help is appreciated.

Code:

    import keras
    from keras.models import Sequential
    from keras.layers import Conv3D, BatchNormalization, Flatten, Dropout, Dense

    input_shape = (1, 22, 5, 3844)
    model = Sequential()
    #C1
    model.add(Conv3D(16, (22, 5, 5), strides=(1, 2, 2), padding='valid', activation='relu', data_format="channels_first", input_shape=input_shape))
    model.add(keras.layers.MaxPooling3D(pool_size=(1, 2, 2), data_format="channels_first", padding='same'))
    model.add(BatchNormalization())
    #C2
    model.add(Conv3D(32, (1, 3, 3), strides=(1, 1, 1), padding='valid', data_format="channels_first", activation='relu'))  # unsure whether to drop the padding
    model.add(keras.layers.MaxPooling3D(pool_size=(1, 2, 2), data_format="channels_first"))
    model.add(BatchNormalization())

    #C3
    model.add(Conv3D(64, (1, 3, 3), strides=(1, 1, 1), padding='valid', data_format="channels_first", activation='relu'))  # unsure whether to drop the padding
    model.add(keras.layers.MaxPooling3D(pool_size=(1, 2, 2), data_format="channels_first"))
    model.add(BatchNormalization())

    model.add(Flatten())
    model.add(Dropout(0.5))
    model.add(Dense(256, activation='sigmoid'))
    model.add(Dropout(0.5))
    model.add(Dense(2, activation='softmax'))

    opt_adam = keras.optimizers.Adam(lr=0.00001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
    model.compile(loss='categorical_crossentropy', optimizer=opt_adam, metrics=['accuracy'])
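
For reference, a minimal sketch of the output-size arithmetic behind the traceback. It assumes the channels_last call shown in the error message, where the layer treats (1, 22, 5) as the spatial dimensions and 3844 as the channels:

    # Sketch: for padding='valid', each spatial dim becomes floor((n - k) / s) + 1.
    # Assumption: the channels_last interpretation from the traceback, i.e.
    # spatial dims (1, 22, 5), kernel (22, 5, 5), strides (1, 2, 2).
    def valid_out(n, k, s):
        return (n - k) // s + 1

    spatial = (1, 22, 5)
    kernel = (22, 5, 5)
    strides = (1, 2, 2)
    print([valid_out(n, k, s) for n, k, s in zip(spatial, kernel, strides)])
    # -> [-20, 9, 1]; the first entry is negative, which is the reported error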

[Question Comments]:

    Tags: python tensorflow keras deep-learning conv-neural-network


    [Solution 1]:

    If you set padding="valid" (the default behavior), the spatial dimensions shrink at every convolution/max-pooling step, and when a kernel is larger than the corresponding input dimension you end up with a negative dimension size. To keep the same dimensions after convolution/max pooling, set padding='same' when specifying the Conv3D and MaxPooling3D layers.

    import tensorflow as tf
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten
    from tensorflow.keras.layers import Conv3D, MaxPooling3D, BatchNormalization
    import numpy as np
    
    input_shape=(1, 22, 5, 3844)
    model = Sequential()
    #C1
    model.add(Conv3D(16, (22, 5, 5), strides=(1, 2, 2), padding='same', activation='relu', data_format="channels_first", input_shape=input_shape))
    model.add(MaxPooling3D(pool_size=(1, 2, 2), data_format="channels_first", padding='same'))
    model.add(BatchNormalization())
    #C2
    model.add(Conv3D(32, (1, 3, 3), strides=(1, 1, 1), padding='same', data_format="channels_first", activation='relu'))  # unsure whether to drop the padding
    model.add(MaxPooling3D(pool_size=(1, 2, 2), data_format="channels_first", padding='same'))
    model.add(BatchNormalization())
    #C3
    model.add(Conv3D(64, (1, 3, 3), strides=(1, 1, 1), padding='same', data_format="channels_first", activation='relu'))  # unsure whether to drop the padding
    model.add(MaxPooling3D(pool_size=(1, 2, 2), data_format="channels_first", padding='same'))
    model.add(BatchNormalization())
    
    model.add(Flatten())
    model.add(Dropout(0.5))
    model.add(Dense(256, activation='sigmoid'))
    model.add(Dropout(0.5))
    model.add(Dense(2, activation='softmax'))
    
    opt_adam = tf.keras.optimizers.Adam(lr=0.00001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
    model.compile(loss='categorical_crossentropy', optimizer=opt_adam, metrics=['accuracy'])
    
    print(model.summary())
    

    Output:

    Model: "sequential"
    _________________________________________________________________
    Layer (type)                 Output Shape              Param #   
    =================================================================
    conv3d (Conv3D)              (None, 16, 22, 3, 1922)   8816      
    _________________________________________________________________
    max_pooling3d (MaxPooling3D) (None, 16, 22, 2, 961)    0         
    _________________________________________________________________
    batch_normalization (BatchNo (None, 16, 22, 2, 961)    3844      
    _________________________________________________________________
    conv3d_1 (Conv3D)            (None, 32, 22, 2, 961)    4640      
    _________________________________________________________________
    max_pooling3d_1 (MaxPooling3 (None, 32, 22, 1, 481)    0         
    _________________________________________________________________
    batch_normalization_1 (Batch (None, 32, 22, 1, 481)    1924      
    _________________________________________________________________
    conv3d_2 (Conv3D)            (None, 64, 22, 1, 481)    18496     
    _________________________________________________________________
    max_pooling3d_2 (MaxPooling3 (None, 64, 22, 1, 241)    0         
    _________________________________________________________________
    batch_normalization_2 (Batch (None, 64, 22, 1, 241)    964       
    _________________________________________________________________
    flatten (Flatten)            (None, 339328)            0         
    _________________________________________________________________
    dropout (Dropout)            (None, 339328)            0         
    _________________________________________________________________
    dense (Dense)                (None, 256)               86868224  
    _________________________________________________________________
    dropout_1 (Dropout)          (None, 256)               0         
    _________________________________________________________________
    dense_1 (Dense)              (None, 2)                 514       
    =================================================================
    Total params: 86,907,422
    Trainable params: 86,904,056
    Non-trainable params: 3,366
    _________________________________________________________________
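
    A quick way to check those shapes: with padding='same', each spatial dimension becomes ceil(n / stride), independent of the kernel size. A minimal sketch tracing the last (width) axis through the strides used above:

    import math

    def same_out(n, s):
        # with padding='same' the output size depends only on the stride
        return math.ceil(n / s)

    n = 3844
    for layer, stride in [('conv3d', 2), ('max_pooling3d', 2),
                          ('conv3d_1', 1), ('max_pooling3d_1', 2),
                          ('conv3d_2', 1), ('max_pooling3d_2', 2)]:
        n = same_out(n, stride)
        print(layer, '->', n)
    # prints 1922, 961, 961, 481, 481, 241 -- matching the last axis in the summary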
    

    [Comments]:

    • @gigi, if my answer resolved your question, could you please accept and upvote it? Thanks.
    • I would like to understand the automatic dimension reduction: why does padding = "valid" shrink the dimensions? (see the sketch below)
    • @gigi, please refer to this link: *.com/questions/60323897/…
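
    On the "why does valid shrink the dimensions" question: a minimal sketch of the two output-size rules, applied to the depth dimension that fails in the question:

    import math

    # General Keras Conv/Pool output-size rules, per spatial dimension:
    #   padding='valid': floor((n - k) / s) + 1  (shrinks; negative when k > n)
    #   padding='same' : ceil(n / s)             (input is padded; never negative)
    n, k, s = 1, 22, 1          # the failing depth dimension from the question
    print((n - k) // s + 1)     # -20 -> the "Negative dimension size" error
    print(math.ceil(n / s))     # 1   -> 'same' keeps the dimension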