【Title】: Keras InvalidArgumentError in first convolutional block of UNET with 2-channel image
【Posted】: 2021-01-23 21:27:49
【Question】:

I am trying to train a 2D UNET convolutional network with a (256, 256, 2) input to predict an output of the same dimensions. I ran into some issues preparing the training data for loading, and I believe that is where the problem lies, but the model summary tells me the error occurs in the first convolutional layer. All filters are (3, 3). How can I fix this?

I thought I might need to use 3D layers instead of 2D, but I have seen tutorials that use RGB images with 2D convolutional layers, so I don't think that is the problem.

InvalidArgumentError:  input depth must be evenly divisible by filter depth: 3 vs 2
     [[node model_7/conv2d_105/BiasAdd (defined at <ipython-input-9-957db582cfa0>:171) ]] [Op:__inference_train_function_19507]

Architecture

Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_8 (InputLayer)            [(None, 256, 256, 2) 0                                            
__________________________________________________________________________________________________
conv2d_105 (Conv2D)             (None, 256, 256, 16) 304         input_8[0][0]                    
__________________________________________________________________________________________________
batch_normalization_98 (BatchNo (None, 256, 256, 16) 64          conv2d_105[0][0]                 
__________________________________________________________________________________________________
activation_98 (Activation)      (None, 256, 256, 16) 0           batch_normalization_98[0][0]     
__________________________________________________________________________________________________
conv2d_106 (Conv2D)             (None, 256, 256, 16) 2320        activation_98[0][0]              
__________________________________________________________________________________________________
batch_normalization_99 (BatchNo (None, 256, 256, 16) 64          conv2d_106[0][0]                 
__________________________________________________________________________________________________
activation_99 (Activation)      (None, 256, 256, 16) 0           batch_normalization_99[0][0]     
__________________________________________________________________________________________________
average_pooling2d_21 (AveragePo (None, 128, 128, 16) 0           activation_99[0][0]              
__________________________________________________________________________________________________
conv2d_107 (Conv2D)             (None, 128, 128, 32) 4640        average_pooling2d_21[0][0]       
__________________________________________________________________________________________________
batch_normalization_100 (BatchN (None, 128, 128, 32) 128         conv2d_107[0][0]                 
__________________________________________________________________________________________________
activation_100 (Activation)     (None, 128, 128, 32) 0           batch_normalization_100[0][0]    
__________________________________________________________________________________________________
conv2d_108 (Conv2D)             (None, 128, 128, 32) 9248        activation_100[0][0]             
__________________________________________________________________________________________________
batch_normalization_101 (BatchN (None, 128, 128, 32) 128         conv2d_108[0][0]                 
__________________________________________________________________________________________________
activation_101 (Activation)     (None, 128, 128, 32) 0           batch_normalization_101[0][0]    
__________________________________________________________________________________________________
average_pooling2d_22 (AveragePo (None, 64, 64, 32)   0           activation_101[0][0]             
__________________________________________________________________________________________________
conv2d_109 (Conv2D)             (None, 64, 64, 64)   18496       average_pooling2d_22[0][0]       
__________________________________________________________________________________________________
batch_normalization_102 (BatchN (None, 64, 64, 64)   256         conv2d_109[0][0]                 
__________________________________________________________________________________________________
activation_102 (Activation)     (None, 64, 64, 64)   0           batch_normalization_102[0][0]    
__________________________________________________________________________________________________
conv2d_110 (Conv2D)             (None, 64, 64, 64)   36928       activation_102[0][0]             
__________________________________________________________________________________________________
batch_normalization_103 (BatchN (None, 64, 64, 64)   256         conv2d_110[0][0]                 
__________________________________________________________________________________________________
activation_103 (Activation)     (None, 64, 64, 64)   0           batch_normalization_103[0][0]    
__________________________________________________________________________________________________
average_pooling2d_23 (AveragePo (None, 32, 32, 64)   0           activation_103[0][0]             
__________________________________________________________________________________________________
conv2d_111 (Conv2D)             (None, 32, 32, 128)  73856       average_pooling2d_23[0][0]       
__________________________________________________________________________________________________
batch_normalization_104 (BatchN (None, 32, 32, 128)  512         conv2d_111[0][0]                 
__________________________________________________________________________________________________
activation_104 (Activation)     (None, 32, 32, 128)  0           batch_normalization_104[0][0]    
__________________________________________________________________________________________________
conv2d_112 (Conv2D)             (None, 32, 32, 128)  147584      activation_104[0][0]             
__________________________________________________________________________________________________
batch_normalization_105 (BatchN (None, 32, 32, 128)  512         conv2d_112[0][0]                 
__________________________________________________________________________________________________
activation_105 (Activation)     (None, 32, 32, 128)  0           batch_normalization_105[0][0]    
__________________________________________________________________________________________________
conv2d_transpose_21 (Conv2DTran (None, 64, 64, 64)   32832       activation_105[0][0]             
__________________________________________________________________________________________________
concatenate_21 (Concatenate)    (None, 64, 64, 128)  0           conv2d_transpose_21[0][0]        
                                                                 activation_103[0][0]             
__________________________________________________________________________________________________
conv2d_113 (Conv2D)             (None, 64, 64, 64)   73792       concatenate_21[0][0]             
__________________________________________________________________________________________________
batch_normalization_106 (BatchN (None, 64, 64, 64)   256         conv2d_113[0][0]                 
__________________________________________________________________________________________________
activation_106 (Activation)     (None, 64, 64, 64)   0           batch_normalization_106[0][0]    
__________________________________________________________________________________________________
conv2d_114 (Conv2D)             (None, 64, 64, 64)   36928       activation_106[0][0]             
__________________________________________________________________________________________________
batch_normalization_107 (BatchN (None, 64, 64, 64)   256         conv2d_114[0][0]                 
__________________________________________________________________________________________________
activation_107 (Activation)     (None, 64, 64, 64)   0           batch_normalization_107[0][0]    
__________________________________________________________________________________________________
conv2d_transpose_22 (Conv2DTran (None, 128, 128, 32) 8224        activation_107[0][0]             
__________________________________________________________________________________________________
concatenate_22 (Concatenate)    (None, 128, 128, 64) 0           conv2d_transpose_22[0][0]        
                                                                 activation_101[0][0]             
__________________________________________________________________________________________________
conv2d_115 (Conv2D)             (None, 128, 128, 32) 18464       concatenate_22[0][0]             
__________________________________________________________________________________________________
batch_normalization_108 (BatchN (None, 128, 128, 32) 128         conv2d_115[0][0]                 
__________________________________________________________________________________________________
activation_108 (Activation)     (None, 128, 128, 32) 0           batch_normalization_108[0][0]    
__________________________________________________________________________________________________
conv2d_116 (Conv2D)             (None, 128, 128, 32) 9248        activation_108[0][0]             
__________________________________________________________________________________________________
batch_normalization_109 (BatchN (None, 128, 128, 32) 128         conv2d_116[0][0]                 
__________________________________________________________________________________________________
activation_109 (Activation)     (None, 128, 128, 32) 0           batch_normalization_109[0][0]    
__________________________________________________________________________________________________
conv2d_transpose_23 (Conv2DTran (None, 256, 256, 16) 2064        activation_109[0][0]             
__________________________________________________________________________________________________
concatenate_23 (Concatenate)    (None, 256, 256, 32) 0           conv2d_transpose_23[0][0]        
                                                                 activation_99[0][0]              
__________________________________________________________________________________________________
conv2d_117 (Conv2D)             (None, 256, 256, 16) 4624        concatenate_23[0][0]             
__________________________________________________________________________________________________
batch_normalization_110 (BatchN (None, 256, 256, 16) 64          conv2d_117[0][0]                 
__________________________________________________________________________________________________
activation_110 (Activation)     (None, 256, 256, 16) 0           batch_normalization_110[0][0]    
__________________________________________________________________________________________________
conv2d_118 (Conv2D)             (None, 256, 256, 16) 3856        activation_110[0][0]             
__________________________________________________________________________________________________
batch_normalization_111 (BatchN (None, 256, 256, 16) 64          conv2d_118[0][0]                 
__________________________________________________________________________________________________
activation_111 (Activation)     (None, 256, 256, 16) 0           batch_normalization_111[0][0]    
__________________________________________________________________________________________________
conv2d_119 (Conv2D)             (None, 256, 256, 2)  34          activation_111[0][0]  
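A quick way to sanity-check the summary above is to recompute the parameter counts by hand: a Conv2D layer with a (3, 3) kernel over `in_ch` input channels and `filters` output channels has `(3*3*in_ch + 1) * filters` parameters (one bias per filter). A minimal sketch verifying the first few rows of the table:

```python
def conv2d_params(kh, kw, in_ch, filters):
    """Parameter count of a Conv2D layer: weights per filter plus one bias each."""
    return (kh * kw * in_ch + 1) * filters

# First conv sees the 2-channel input: (3*3*2 + 1) * 16 = 304
assert conv2d_params(3, 3, 2, 16) == 304     # conv2d_105
assert conv2d_params(3, 3, 16, 16) == 2320   # conv2d_106
assert conv2d_params(3, 3, 16, 32) == 4640   # conv2d_107
```

The 304 in the first row confirms the kernel of `conv2d_105` was built for a depth-2 input, which is why feeding it data with a different channel count raises the "input depth must be evenly divisible by filter depth" error.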

Model code for the first convolutional block

    from tensorflow.keras.layers import Input, Conv2D, Activation, BatchNormalization

    padding = 'same'
    kernel = (3, 3)          # renamed from `filter` to avoid shadowing the builtin
    inputs = Input((256, 256, 2))
    k_init = 'he_normal'
    activate = 'tanh'
    factor = 1               # 16 * factor = 16 filters, matching the summary

    c1 = Conv2D(16 * factor, kernel, kernel_initializer=k_init, padding=padding)(inputs)
    c1 = BatchNormalization()(c1)   # present in the summary (batch_normalization_98)
    c1 = Activation(activate)(c1)

【Discussion】:

Tags: python keras conv-neural-network


【Solution 1】:

Found the error.

My custom generator was returning (batch_size, 256, 512, 1) arrays instead of (batch_size, 256, 256, 2). Once I corrected that, the network trained on the data without issue.
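A cheap way to catch this kind of mismatch early is to assert the generator's batch shape before calling `fit`. And if the two channels really were laid out side by side along the width axis (an assumption here, not something stated in the answer), the batch can be split and restacked along the channel axis with NumPy:

```python
import numpy as np

# Hypothetical mis-shaped batch: two 256x256 channels stored side by side
bad = np.zeros((4, 256, 512, 1), dtype=np.float32)

# Split the width axis in half and stack the halves as channels
fixed = np.concatenate((bad[:, :, :256, :], bad[:, :, 256:, :]), axis=-1)

assert fixed.shape == (4, 256, 256, 2)  # what Input((256, 256, 2)) expects
```

How to restack depends on how the generator actually assembled the array; the split above is only correct if the second channel occupies columns 256-511.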

【Discussion】:
