【Posted】: 2020-12-04 11:20:28
【Problem description】:
I am trying to do semantic segmentation of magnetic resonance images, which are single-channel images.
To get the encoder of a U-Net network, I use this function:
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D

def get_encoder_unet(img_shape, k_init='glorot_uniform', bias_init='zeros'):
    inp = Input(shape=img_shape)

    conv1 = Conv2D(64, (5, 5), activation='relu', padding='same', data_format="channels_last", kernel_initializer=k_init, bias_initializer=bias_init, name='conv1_1')(inp)
    conv1 = Conv2D(64, (5, 5), activation='relu', padding='same', data_format="channels_last", kernel_initializer=k_init, bias_initializer=bias_init, name='conv1_2')(conv1)
    pool1 = MaxPooling2D(pool_size=(2, 2), data_format="channels_last", name='pool1')(conv1)

    conv2 = Conv2D(96, (3, 3), activation='relu', padding='same', data_format="channels_last", kernel_initializer=k_init, bias_initializer=bias_init, name='conv2_1')(pool1)
    conv2 = Conv2D(96, (3, 3), activation='relu', padding='same', data_format="channels_last", kernel_initializer=k_init, bias_initializer=bias_init, name='conv2_2')(conv2)
    pool2 = MaxPooling2D(pool_size=(2, 2), data_format="channels_last", name='pool2')(conv2)

    conv3 = Conv2D(128, (3, 3), activation='relu', padding='same', data_format="channels_last", kernel_initializer=k_init, bias_initializer=bias_init, name='conv3_1')(pool2)
    conv3 = Conv2D(128, (3, 3), activation='relu', padding='same', data_format="channels_last", kernel_initializer=k_init, bias_initializer=bias_init, name='conv3_2')(conv3)
    pool3 = MaxPooling2D(pool_size=(2, 2), data_format="channels_last", name='pool3')(conv3)

    conv4 = Conv2D(256, (3, 3), activation='relu', padding='same', data_format="channels_last", kernel_initializer=k_init, bias_initializer=bias_init, name='conv4_1')(pool3)
    conv4 = Conv2D(256, (4, 4), activation='relu', padding='same', data_format="channels_last", kernel_initializer=k_init, bias_initializer=bias_init, name='conv4_2')(conv4)
    pool4 = MaxPooling2D(pool_size=(2, 2), data_format="channels_last", name='pool4')(conv4)

    conv5 = Conv2D(512, (3, 3), activation='relu', padding='same', data_format="channels_last", kernel_initializer=k_init, bias_initializer=bias_init, name='conv5_1')(pool4)
    conv5 = Conv2D(512, (3, 3), activation='relu', padding='same', data_format="channels_last", kernel_initializer=k_init, bias_initializer=bias_init, name='conv5_2')(conv5)

    return conv5, conv4, conv3, conv2, conv1, inp
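For completeness, here is a minimal sketch of how these outputs can be wrapped into a Model to produce the summary shown next (the wrapping code is not part of the function above, and Model is assumed to come from tensorflow.keras):

from tensorflow.keras.models import Model

# Sketch: build the "encoder" model from the deepest output and the input tensor,
# using the single-channel (200, 200, 1) image shape.
conv5, conv4, conv3, conv2, conv1, inp = get_encoder_unet((200, 200, 1))
model = Model(inputs=inp, outputs=conv5, name='encoder')
model.summary()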
Its summary is:
Model: "encoder"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         [(None, 200, 200, 1)]     0
_________________________________________________________________
conv1_1 (Conv2D)             (None, 200, 200, 64)      1664
_________________________________________________________________
conv1_2 (Conv2D)             (None, 200, 200, 64)      102464
_________________________________________________________________
pool1 (MaxPooling2D)         (None, 100, 100, 64)      0
_________________________________________________________________
conv2_1 (Conv2D)             (None, 100, 100, 96)      55392
_________________________________________________________________
conv2_2 (Conv2D)             (None, 100, 100, 96)      83040
_________________________________________________________________
pool2 (MaxPooling2D)         (None, 50, 50, 96)        0
_________________________________________________________________
conv3_1 (Conv2D)             (None, 50, 50, 128)       110720
_________________________________________________________________
conv3_2 (Conv2D)             (None, 50, 50, 128)       147584
_________________________________________________________________
pool3 (MaxPooling2D)         (None, 25, 25, 128)       0
_________________________________________________________________
conv4_1 (Conv2D)             (None, 25, 25, 256)       295168
_________________________________________________________________
conv4_2 (Conv2D)             (None, 25, 25, 256)       1048832
_________________________________________________________________
pool4 (MaxPooling2D)         (None, 12, 12, 256)       0
_________________________________________________________________
conv5_1 (Conv2D)             (None, 12, 12, 512)       1180160
_________________________________________________________________
conv5_2 (Conv2D)             (None, 12, 12, 512)       2359808
=================================================================
Total params: 5,384,832
Trainable params: 5,384,832
Non-trainable params: 0
_________________________________________________________________
I am trying to understand how the neural network works, and I have this code to display the shape of the weights and biases of the last layer:
layer_dict = dict([(layer.name, layer) for layer in model.layers])
layer_name = model.layers[-1].name
#layer_name = 'conv5_2'
filter_index = 0 # Which filter in this block would you like to visualise?
# Grab the filters and biases for that layer
filters, biases = layer_dict[layer_name].get_weights()
print("Filters")
print("\tType: ", type(filters))
print("\tShape: ", filters.shape)
print("Biases")
print("\tType: ", type(biases))
print("\tShape: ", biases.shape)
Which produces this output:
Filters
    Type:  <class 'numpy.ndarray'>
    Shape:  (3, 3, 512, 512)
Biases
    Type:  <class 'numpy.ndarray'>
    Shape:  (512,)
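As a side note, this is a minimal sketch (hypothetical, not in the code above) of how the so-far-unused filter_index could pull out a single filter, assuming the last axis of the weights array indexes the individual filters:

# Hypothetical continuation: select one filter, assuming the last axis
# of the weights array indexes the filters of this layer.
single_filter = filters[:, :, :, filter_index]
single_bias = biases[filter_index]
print("Single filter shape: ", single_filter.shape)  # (3, 3, 512)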
I am trying to understand what the filters' shape (3, 3, 512, 512) means. I think the last 512 is the number of filters in that layer, but what does (3, 3, 512) mean? My image has a single channel, so I don't understand the 3, 3 in the filter shape (img_shape is (200, 200, 1)).
【Question comments】:
- Look at it this way: consider that your input is an RGB image and you specify n filters of a particular size. What actually happens is that n×3 filters are used to convolve that same RGB image and produce an n-channel image. This carries on through the network, and here it ends up as 512 (filters) × 512 (channels).
- @sai I am working with single-channel images, and I believe a (3, 3) kernel size in the last Conv2D layer is correct. I would be confused if my code only worked for 3-channel images, because I don't understand why you say "... what actually happens is that n×3 filters are used to convolve that same RGB image and produce an n-channel image".
- Your code works perfectly fine with 1 channel. The only reason I chose to explain with an RGB image is that it is easier to understand at the start. If your input here were a 3-channel image, the filter shape would be (5, 5, 3, 64), which means 64 sets of 5x5 filters are used per channel. Also, see tensorflow.org/api_docs/python/tf/nn/conv2d for more details on the dimensions.
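For illustration, here is a minimal standalone sketch (not part of the original thread) showing that the third axis of a Keras Conv2D kernel follows the number of input channels, while the last axis is the number of filters:

from tensorflow.keras.layers import Input, Conv2D
from tensorflow.keras.models import Model

for channels in (1, 3):
    # Same first convolution as conv1_1, but on inputs with a different channel count.
    inp = Input(shape=(200, 200, channels))
    out = Conv2D(64, (5, 5), padding='same')(inp)
    kernel, bias = Model(inp, out).layers[-1].get_weights()
    # Kernel shape is (kernel_height, kernel_width, input_channels, filters).
    print(channels, "input channel(s):", kernel.shape, bias.shape)
# 1 input channel(s): (5, 5, 1, 64) (64,)
# 3 input channel(s): (5, 5, 3, 64) (64,)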
Tags: python tensorflow keras conv-neural-network