[Question Title]: Mobilenet: Transfer learning with Gradcam
[Posted]: 2023-01-18 14:41:53
[Question]:

I am new to all of this, so please be kind :)

What I am trying to do is train a MobileNet classifier using transfer learning, and then apply the Grad-CAM technique to understand what my model is looking at.

  1. I created a model:
    input_layer = tf.keras.layers.Input(shape=IMG_SHAPE)
    x = preprocess_input(input_layer)
    y = base_model(x)
    y = tf.keras.layers.GlobalAveragePooling2D()(y)
    y = tf.keras.layers.Dropout(0.2)(y)
    outputs = tf.keras.layers.Dense(5)(y)
    model = tf.keras.Model(inputs=input_layer, outputs=outputs)
    model.summary()
    

    Model summary:

    Model: "functional_2"
    _________________________________________________________________
    Layer (type)                 Output Shape              Param #   
    =================================================================
    input_3 (InputLayer)         [(None, 224, 224, 3)]     0         
    _________________________________________________________________
    tf_op_layer_RealDiv_1 (Tenso [(None, 224, 224, 3)]     0         
    _________________________________________________________________
    tf_op_layer_Sub_1 (TensorFlo [(None, 224, 224, 3)]     0         
    _________________________________________________________________
    mobilenetv2_1.00_224 (Functi (None, 7, 7, 1280)        2257984   
    _________________________________________________________________
    global_average_pooling2d_1 ( (None, 1280)              0         
    _________________________________________________________________
    dropout_1 (Dropout)          (None, 1280)              0         
    _________________________________________________________________
    dense_1 (Dense)              (None, 5)                 6405      
    =================================================================
    Total params: 2,264,389
    Trainable params: 6,405
    Non-trainable params: 2,257,984
    _________________________________________________________________
    
  2. I passed it to the Grad-CAM algorithm, but the algorithm is unable to find the last convolutional layer.

    A plausible solution: if I could add the unwrapped MobileNet layers to my model instead of the wrapped 'mobilenetv2_1.00_224' layer, the Grad-CAM algorithm would be able to find the last layer.

    The problem:

    I am unable to create a model that adds the data-augmentation and preprocessing layers on top of the unwrapped MobileNet layers.

    Thanks in advance.

    Regards, Ankit
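As a sketch of the "unwrapped" model the question is asking for (an editor's illustration, not the asker's code: it assumes the stock Keras MobileNetV2 with the `(224, 224, 3)` input shape from the summary above, and uses `weights=None` only to keep the sketch self-contained; use `weights="imagenet"` in practice):

```python
import tensorflow as tf

IMG_SHAPE = (224, 224, 3)  # matches the input shape in the model summary

# weights=None keeps this sketch light; use weights="imagenet" in practice
base_model = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SHAPE, include_top=False, weights=None
)
base_model.trainable = False

# Reuse the base model's own input/output tensors so its layers appear
# directly ("unwrapped") in the new model instead of as one nested layer.
x = tf.keras.layers.GlobalAveragePooling2D()(base_model.output)
x = tf.keras.layers.Dropout(0.2)(x)
outputs = tf.keras.layers.Dense(5)(x)
model = tf.keras.Model(inputs=base_model.input, outputs=outputs)

# The last conv layer ("Conv_1" in Keras' MobileNetV2) is now reachable:
print(model.get_layer("Conv_1").output.shape)
```

Note that with this wiring the `preprocess_input` rescaling is no longer part of the graph, so it has to be applied to the image array before calling the model (or Grad-CAM).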

[Comments]:

  • I currently have exactly the same problem. Did you find a solution?
  • @Skruff Yes, I was able to solve this; I posted the snippet in the answer below.

Tags: tensorflow keras transfer-learning mobilenet


[Solution 1]:

@Skruff see if this helps:

import tensorflow as tf

def make_gradcam_heatmap(img_array, model, last_conv_layer_name, pred_index=None):
    # First, we create a model that maps the input image to the activations
    # of the last conv layer as well as the output predictions
    grad_model = tf.keras.models.Model(
        [model.inputs], [model.get_layer(last_conv_layer_name).output, model.output]
    )

    # Then, we compute the gradient of the top predicted class for our input image
    # with respect to the activations of the last conv layer
    with tf.GradientTape() as tape:
        last_conv_layer_output, preds = grad_model(img_array)
        if pred_index is None:
            pred_index = tf.argmax(preds[0])
        class_channel = preds[:, pred_index]

    # This is the gradient of the output neuron (top predicted or chosen)
    # with regard to the output feature map of the last conv layer
    grads = tape.gradient(class_channel, last_conv_layer_output)

    # This is a vector where each entry is the mean intensity of the gradient
    # over a specific feature map channel
    pooled_grads = tf.reduce_mean(grads, axis=(0, 1, 2))

    # We multiply each channel in the feature map array
    # by "how important this channel is" with regard to the top predicted class
    # then sum all the channels to obtain the heatmap class activation
    last_conv_layer_output = last_conv_layer_output[0]
    heatmap = last_conv_layer_output @ pooled_grads[..., tf.newaxis]
    heatmap = tf.squeeze(heatmap)

    # For visualization purposes, we also normalize the heatmap between 0 & 1
    heatmap = tf.maximum(heatmap, 0) / tf.math.reduce_max(heatmap)
    return heatmap.numpy()
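A quick, self-contained way to sanity-check the returned heatmap is to upsample it to the input resolution and blend it over the image (a sketch with placeholder data; in practice `heatmap` would come from `make_gradcam_heatmap` and `img` would be your actual input scaled to [0, 1]):

```python
import numpy as np

# Placeholder 7x7 heatmap, standing in for make_gradcam_heatmap's output
heatmap = np.random.rand(7, 7).astype("float32")
# Placeholder input image with values in [0, 1]
img = np.random.rand(224, 224, 3).astype("float32")

# Nearest-neighbour upsample 7x7 -> 224x224 (224 = 7 * 32)
hm = np.kron(heatmap, np.ones((32, 32), dtype="float32"))

# Blend the heatmap into the red channel for a quick visual check
overlay = img.copy()
overlay[..., 0] = np.clip(0.6 * overlay[..., 0] + 0.4 * hm, 0.0, 1.0)
print(overlay.shape)  # (224, 224, 3)
```

For a nicer visualization you would typically map the heatmap through a colormap (e.g. matplotlib's "jet") before blending, but the channel blend above is enough to see where the activation concentrates.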

[Discussion]:
