【Question Title】: Multi-class image classification: how to load the masks
【Posted】: 2019-03-23 03:21:11
【Question Description】:

I'm new to Keras; I've only been using it for a few days and am still quite inexperienced.

I was able to train a U-Net network for a single class, feeding it an RGB image and a grayscale mask for training, with the following code:

def train_generator():

    while True:
        for start in range(0, len(ids_train_split), batch_size):
            x_batch = []
            y_batch = []
            end = min(start + batch_size, len(ids_train_split))
            ids_train_batch = ids_train_split[start:end]
            for id in ids_train_batch.values:

                img_name = 'IMG_'+str(id).split('_')[2]
                image_path = os.path.join("input", "train", "{}.JPG".format(str(img_name)))
                mca_mask_path = os.path.join("input", "train_mask", "{}.png".format(id))

                img = cv2.imread(image_path)
                img = cv2.resize(img, (input_size, input_size))

                # load the mask as single-channel grayscale (was loaded into
                # `mask_mca` but used as `mask` below, which raised a NameError)
                mask = cv2.imread(mca_mask_path, cv2.IMREAD_GRAYSCALE)
                mask = cv2.resize(mask, (input_size, input_size))

                img = randomHueSaturationValue(img,
                                               hue_shift_limit=(-50, 50),
                                               sat_shift_limit=(-5, 5),
                                               val_shift_limit=(-15, 15))
                img, mask = randomShiftScaleRotate(img, mask,
                                                   shift_limit=(-0.0625, 0.0625),
                                                   scale_limit=(-0.1, 0.1),
                                                   rotate_limit=(-0, 0))
                img, mask = randomHorizontalFlip(img, mask)
                mask = np.expand_dims(mask, axis=2)
                x_batch.append(img)
                y_batch.append(mask)
            x_batch = np.array(x_batch, np.float32) / 255
            y_batch = np.array(y_batch, np.float32) / 255
            yield x_batch, y_batch

Here is my U-Net model:

def get_unet_1(pretrained_weights=None, input_shape=(1024, 1024, 3), num_classes=1, learning_rate=0.0001):
    inputs = Input(shape=input_shape)
    # 1024

    down0b = Conv2D(8, (3, 3), padding='same')(inputs)
    down0b = BatchNormalization()(down0b)
    down0b = Activation('relu')(down0b)
    down0b = Conv2D(8, (3, 3), padding='same')(down0b)
    down0b = BatchNormalization()(down0b)
    down0b = Activation('relu')(down0b)
    down0b_pool = MaxPooling2D((2, 2), strides=(2, 2))(down0b)
    # 512

    down0a = Conv2D(16, (3, 3), padding='same')(down0b_pool)
    down0a = BatchNormalization()(down0a)
    down0a = Activation('relu')(down0a)
    down0a = Conv2D(16, (3, 3), padding='same')(down0a)
    down0a = BatchNormalization()(down0a)
    down0a = Activation('relu')(down0a)
    down0a_pool = MaxPooling2D((2, 2), strides=(2, 2))(down0a)
    # 256

    down0 = Conv2D(32, (3, 3), padding='same')(down0a_pool)
    down0 = BatchNormalization()(down0)
    down0 = Activation('relu')(down0)
    down0 = Conv2D(32, (3, 3), padding='same')(down0)
    down0 = BatchNormalization()(down0)
    down0 = Activation('relu')(down0)
    down0_pool = MaxPooling2D((2, 2), strides=(2, 2))(down0)
    # 128

    down1 = Conv2D(64, (3, 3), padding='same')(down0_pool)
    down1 = BatchNormalization()(down1)
    down1 = Activation('relu')(down1)
    down1 = Conv2D(64, (3, 3), padding='same')(down1)
    down1 = BatchNormalization()(down1)
    down1 = Activation('relu')(down1)
    down1_pool = MaxPooling2D((2, 2), strides=(2, 2))(down1)
    # 64

    down2 = Conv2D(128, (3, 3), padding='same')(down1_pool)
    down2 = BatchNormalization()(down2)
    down2 = Activation('relu')(down2)
    down2 = Conv2D(128, (3, 3), padding='same')(down2)
    down2 = BatchNormalization()(down2)
    down2 = Activation('relu')(down2)
    down2_pool = MaxPooling2D((2, 2), strides=(2, 2))(down2)
    # 32

    down3 = Conv2D(256, (3, 3), padding='same')(down2_pool)
    down3 = BatchNormalization()(down3)
    down3 = Activation('relu')(down3)
    down3 = Conv2D(256, (3, 3), padding='same')(down3)
    down3 = BatchNormalization()(down3)
    down3 = Activation('relu')(down3)
    down3_pool = MaxPooling2D((2, 2), strides=(2, 2))(down3)
    # 16

    down4 = Conv2D(512, (3, 3), padding='same')(down3_pool)
    down4 = BatchNormalization()(down4)
    down4 = Activation('relu')(down4)
    down4 = Conv2D(512, (3, 3), padding='same')(down4)
    down4 = BatchNormalization()(down4)
    down4 = Activation('relu')(down4)
    down4_pool = MaxPooling2D((2, 2), strides=(2, 2))(down4)
    # 8

    center = Conv2D(1024, (3, 3), padding='same')(down4_pool)
    center = BatchNormalization()(center)
    center = Activation('relu')(center)
    center = Conv2D(1024, (3, 3), padding='same')(center)
    center = BatchNormalization()(center)
    center = Activation('relu')(center)
    # center

    up4 = UpSampling2D((2, 2))(center)
    up4 = concatenate([down4, up4], axis=3)
    up4 = Conv2D(512, (3, 3), padding='same')(up4)
    up4 = BatchNormalization()(up4)
    up4 = Activation('relu')(up4)
    up4 = Conv2D(512, (3, 3), padding='same')(up4)
    up4 = BatchNormalization()(up4)
    up4 = Activation('relu')(up4)
    up4 = Conv2D(512, (3, 3), padding='same')(up4)
    up4 = BatchNormalization()(up4)
    up4 = Activation('relu')(up4)
    # 16

    up3 = UpSampling2D((2, 2))(up4)
    up3 = concatenate([down3, up3], axis=3)
    up3 = Conv2D(256, (3, 3), padding='same')(up3)
    up3 = BatchNormalization()(up3)
    up3 = Activation('relu')(up3)
    up3 = Conv2D(256, (3, 3), padding='same')(up3)
    up3 = BatchNormalization()(up3)
    up3 = Activation('relu')(up3)
    up3 = Conv2D(256, (3, 3), padding='same')(up3)
    up3 = BatchNormalization()(up3)
    up3 = Activation('relu')(up3)
    # 32

    up2 = UpSampling2D((2, 2))(up3)
    up2 = concatenate([down2, up2], axis=3)
    up2 = Conv2D(128, (3, 3), padding='same')(up2)
    up2 = BatchNormalization()(up2)
    up2 = Activation('relu')(up2)
    up2 = Conv2D(128, (3, 3), padding='same')(up2)
    up2 = BatchNormalization()(up2)
    up2 = Activation('relu')(up2)
    up2 = Conv2D(128, (3, 3), padding='same')(up2)
    up2 = BatchNormalization()(up2)
    up2 = Activation('relu')(up2)
    # 64

    up1 = UpSampling2D((2, 2))(up2)
    up1 = concatenate([down1, up1], axis=3)
    up1 = Conv2D(64, (3, 3), padding='same')(up1)
    up1 = BatchNormalization()(up1)
    up1 = Activation('relu')(up1)
    up1 = Conv2D(64, (3, 3), padding='same')(up1)
    up1 = BatchNormalization()(up1)
    up1 = Activation('relu')(up1)
    up1 = Conv2D(64, (3, 3), padding='same')(up1)
    up1 = BatchNormalization()(up1)
    up1 = Activation('relu')(up1)
    # 128

    up0 = UpSampling2D((2, 2))(up1)
    up0 = concatenate([down0, up0], axis=3)
    up0 = Conv2D(32, (3, 3), padding='same')(up0)
    up0 = BatchNormalization()(up0)
    up0 = Activation('relu')(up0)
    up0 = Conv2D(32, (3, 3), padding='same')(up0)
    up0 = BatchNormalization()(up0)
    up0 = Activation('relu')(up0)
    up0 = Conv2D(32, (3, 3), padding='same')(up0)
    up0 = BatchNormalization()(up0)
    up0 = Activation('relu')(up0)
    # 256

    up0a = UpSampling2D((2, 2))(up0)
    up0a = concatenate([down0a, up0a], axis=3)
    up0a = Conv2D(16, (3, 3), padding='same')(up0a)
    up0a = BatchNormalization()(up0a)
    up0a = Activation('relu')(up0a)
    up0a = Conv2D(16, (3, 3), padding='same')(up0a)
    up0a = BatchNormalization()(up0a)
    up0a = Activation('relu')(up0a)
    up0a = Conv2D(16, (3, 3), padding='same')(up0a)
    up0a = BatchNormalization()(up0a)
    up0a = Activation('relu')(up0a)
    # 512

    up0b = UpSampling2D((2, 2))(up0a)
    up0b = concatenate([down0b, up0b], axis=3)
    up0b = Conv2D(8, (3, 3), padding='same')(up0b)
    up0b = BatchNormalization()(up0b)
    up0b = Activation('relu')(up0b)
    up0b = Conv2D(8, (3, 3), padding='same')(up0b)
    up0b = BatchNormalization()(up0b)
    up0b = Activation('relu')(up0b)
    up0b = Conv2D(8, (3, 3), padding='same')(up0b)
    up0b = BatchNormalization()(up0b)
    up0b = Activation('relu')(up0b)
    # 1024

    classify = Conv2D(num_classes, (1, 1), activation='sigmoid')(up0b)

    model = Model(inputs=inputs, outputs=classify)

    model.compile(optimizer=RMSprop(lr=learning_rate), loss=make_loss('bce_dice'), metrics=[dice_coef, 'accuracy'])

    if pretrained_weights:
        model.load_weights(pretrained_weights)

    return model

Now I want to modify the problem and turn it into a multi-class classifier, so instead of a single mask I have two. That is, I have two kinds of grayscale masks for the same training image (Mca_mask and NotMca_mask). What is the standard practice in this case? Should I merge the two masks into one?
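One common option for the two-mask case described above is to stack the per-class masks along the channel axis, so each pixel gets one value per class. This is only a minimal sketch under assumed conditions (tiny synthetic masks with values 0/255, standing in for the real Mca_mask and NotMca_mask files), matching a final `Conv2D(num_classes=2, ..., activation='sigmoid')` layer:

```python
import numpy as np

# Hypothetical stand-ins for the two grayscale masks of one training image.
h, w = 4, 4
mca_mask = np.zeros((h, w), np.uint8)
mca_mask[:2, :] = 255          # class "Mca" covers the top half
not_mca_mask = np.zeros((h, w), np.uint8)
not_mca_mask[2:, :] = 255      # class "NotMca" covers the bottom half

# Normalize to [0, 1] and stack along a new last axis -> one channel per class.
y = np.stack([mca_mask, not_mca_mask], axis=-1).astype(np.float32) / 255.0
print(y.shape)  # (4, 4, 2)
```

Each `y_batch` entry then has shape `(input_size, input_size, 2)` instead of `(input_size, input_size, 1)`.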

【Question Discussion】:

  • As far as I understand from your code, you are only "classifying" a single class, which in your case is the "mask", right? I think you have misunderstood the image classification concept. Could you explain what problem your model is meant to solve?
  • I'm trying to figure out how to organize the masks for a multi-class problem. As I understand it, my RGB mask is organized like this: [1,1,1] -> dog, [2,2,2] -> cat, [0,0,0] -> unknown. I have to convert it into vectors like: [1,0] = dog, [0,1] = cat, [0,0] = unknown. Is that right? Is there a fast way to do it, a single command or something similar? I don't want to iterate over every pixel of every image.

Tags: python tensorflow machine-learning keras semantic-segmentation


【Solution 1】:

In this line we can see that your output layer applies a sigmoid:

classify = Conv2D(num_classes, (1, 1), activation='sigmoid')(up0b)

This means each of your outputs is mapped into the [0, 1] range independently of the others, which is what you want for multi-class (multi-label) classification. As a side note, another common way to map an output layer into the [0, 1] range is to apply softmax; that is a poor fit here, because as confidence in one class grows, confidence in the others must necessarily shrink.
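The difference can be seen numerically. This small illustration (not part of the original post) pushes the same logits through both activations; the sigmoid scores are independent per class, while the softmax scores compete and sum to 1:

```python
import numpy as np

logits = np.array([2.0, 2.0, -1.0])

sigmoid = 1.0 / (1.0 + np.exp(-logits))          # elementwise, independent
softmax = np.exp(logits) / np.exp(logits).sum()  # normalized, competing

print(np.round(sigmoid, 3))  # two classes can both score high
print(np.round(softmax, 3))  # scores sum to 1, so high scores suppress others
```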

Your loss function is defined on this line as binary cross-entropy (combined with dice):

model.compile(optimizer=RMSprop(lr=learning_rate), loss=make_loss('bce_dice'), metrics=[dice_coef, 'accuracy'])

That works for all kinds of classification (single-class or multi-class) and expects outputs in the [0, 1] range.

So essentially you are already set up for multi-class classification with your current configuration. All you need to do is create multi-class labels. For example, if your classes are dog, cat, bird, horse, and goat, and an image contains a dog and a cat, your label would be [1, 1, 0, 0, 0], and you can train the network as-is.

【Discussion】:

  • Thanks, I fixed the mistake! The problem in question has two classes, so I created the masks as follows: an RGB image where each pixel is [1, 1, 1] if it represents a cat and [2, 2, 2] if it represents a dog. Is there a faster, cleaner way to create vectors of the form [0, 1] for dogs and [1, 0] for cats?
  • That is not the best way to identify the classes, and it certainly won't work with a cross-entropy loss that expects outputs in the [0, 1] range. If you are doing pixel-wise classification, you shouldn't have an RGB image; you should have a single-channel (e.g. grayscale) image for the class labels. What you should do is have one channel per class. E.g. if you have 5 classes, you have an image with 5 "color channels". Each channel holds a [0, 1] value for that pixel's class. If your output image is 32x32, then your labels will be 32x32x5.
  • Here is a nice tutorial on this topic that might help: medium.com/nanonets/…
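The per-class-channel encoding suggested in the comments above can be produced in one line with NumPy fancy indexing, given a single-channel mask of integer class indices. A minimal sketch (shapes and class count chosen to match the 32x32x5 example from the comment):

```python
import numpy as np

num_classes = 5
# A (H, W) mask of integer class indices in [0, num_classes), here random
# stand-in data instead of a real label image.
mask = np.random.randint(0, num_classes, size=(32, 32))

# Index an identity matrix by the mask: each index becomes its one-hot row,
# yielding one [0, 1] channel per class.
one_hot = np.eye(num_classes, dtype=np.float32)[mask]
print(one_hot.shape)  # (32, 32, 5)
```

Exactly one channel is 1 at every pixel, so the channels sum to 1 across the class axis.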