【Question Title】: Regularization in Keras: How can I control the maximum number of zero weights on the last layer?
【Posted】: 2021-06-05 01:48:59
【Question】:

I have a neural network whose last layer outputs a vector of size N (N=8). When performing multi-label classification, I find that most elements of the output vector are equal to 0, with at most two elements equal to 1, e.g. y_pred == [1, 0, 0, 0, 0, 0, 0, 1].

I would like to tell my network about this, i.e. that at least N-2 of the output values should be equal to 0.

My current model is as follows:

from classification_models.keras import Classifiers
from keras.layers import AveragePooling2D, Flatten, Dense, Dropout

ResNet18, preprocess_input = Classifiers.get('resnet18')
resnet = ResNet18((im_size, im_size, 3), weights='imagenet', include_top=False)
headModel = AveragePooling2D(pool_size=(3, 3))(resnet.output)
headModel = Flatten(name="flatten")(headModel)
headModel = Dense(256, activation="relu")(headModel)
headModel = Dropout(0.5)(headModel)

# 'sigmoid' activation because we're performing multi-label classification.
headModel = Dense(8, activation="sigmoid")(headModel)

I was thinking of adding a regularizer my_reg to my last Dense layer, something along the lines of

headModel = Dense(8, activation="sigmoid", kernel_regularizer=my_reg)(headModel)

I have no experience with regularizers in Keras or with how to manipulate the weights this way.

【Comments】:

    Tags: python tensorflow keras deep-learning constraints


    【Solution 1】:

    You can apply a custom function as the activation. More specifically, it sets the smallest probabilities to zero.

    import numpy as np
    import tensorflow as tf

    def custom_func(x):
        # Zero out everything up to and including the second-smallest value
        # (tf.squeeze means this assumes a batch size of 1).
        second_smallest = tf.sort(tf.squeeze(x))[1]
        x = tf.where(second_smallest >= x, tf.zeros_like(x), x)
        return x
    
    inp = tf.keras.Input(shape=(224, 224, 3))
    base = tf.keras.applications.MobileNetV2(include_top=False, 
                                             input_shape=(224, 224, 3))(inp)
    gap = tf.keras.layers.GlobalAveragePooling2D()(base)
    out = tf.keras.layers.Dense(8, activation='sigmoid')(gap)
    custom_function = tf.keras.layers.Lambda(custom_func)(out)
    
    model = tf.keras.Model(inp, custom_function)
    
    model(np.random.rand(1, 224, 224, 3).astype(np.float32))
    
    <tf.Tensor: shape=(1, 8), dtype=float32, numpy=
    array([[0.36225533, 0.66996753, 0.9467776 , 0.        , 0.6429986 ,
            0.9498544 , 0.        , 0.6883256 ]], dtype=float32)>
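    Note that the tf.squeeze in custom_func only behaves as intended for a batch size of 1. A batch-safe sketch of the same idea using tf.math.top_k (keep_top_k is an illustrative name, not part of the answer above):

```python
import tensorflow as tf

def keep_top_k(x, k=2):
    # For each row, keep the k largest values and zero the rest.
    kth_largest = tf.math.top_k(x, k=k).values[:, -1:]  # shape (batch, 1)
    return tf.where(x >= kth_largest, x, tf.zeros_like(x))

x = tf.constant([[0.9, 0.1, 0.8, 0.2],
                 [0.3, 0.7, 0.6, 0.1]])
print(keep_top_k(x, k=2).numpy())
# row 0 keeps 0.9 and 0.8; row 1 keeps 0.7 and 0.6
```

    This works per sample across the whole batch, so it can be dropped into the Lambda layer unchanged.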
    

    You can also make it accept a parameter, like this:

    import numpy as np
    import tensorflow as tf
    
    
    def custom_func(inputs, n_to_zero):
        # Zero out the n_to_zero smallest values (again assumes batch size 1).
        threshold = tf.sort(tf.squeeze(inputs))[n_to_zero - 1]
        out = tf.where(threshold >= inputs, tf.zeros_like(inputs), inputs)
        return out
    
    
    inp = tf.keras.Input(shape=(224, 224, 3))
    base = tf.keras.applications.MobileNetV2(include_top=False, 
                                             input_shape=(224, 224, 3))(inp)
    gap = tf.keras.layers.GlobalAveragePooling2D()(base)
    out = tf.keras.layers.Dense(8, activation='sigmoid')(gap)
    custom_function = tf.keras.layers.Lambda(
        lambda x: custom_func(inputs=x, n_to_zero=4)
                                            )(out)
    
    model = tf.keras.Model(inp, custom_function)
    
    model(np.random.rand(1, 224, 224, 3).astype(np.float32))
    
    <tf.Tensor: shape=(1, 8), dtype=float32, numpy=
    array([[0.8537902, 0.       , 0.       , 0.       , 0.7386258, 0.       ,
            0.0948523, 0.7973974]], dtype=float32)>
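    Coming back to the kernel_regularizer idea from the question: since the goal concerns the layer's outputs rather than its weights, an activity_regularizer is the closer fit. A hedged sketch that softly penalizes everything except the k largest activations (TopKActivityRegularizer is an illustrative name, not a Keras built-in):

```python
import tensorflow as tf

class TopKActivityRegularizer(tf.keras.regularizers.Regularizer):
    """Penalize all but the k largest activations per sample,
    nudging at least N-k outputs toward zero during training."""

    def __init__(self, k=2, strength=0.01):
        self.k = k
        self.strength = strength

    def __call__(self, activations):
        top_k_sum = tf.reduce_sum(tf.math.top_k(activations, k=self.k).values, axis=-1)
        total_sum = tf.reduce_sum(activations, axis=-1)
        # Sum of every activation outside the top k, summed over the batch.
        return self.strength * tf.reduce_sum(total_sum - top_k_sum)

    def get_config(self):
        return {"k": self.k, "strength": self.strength}

# Usage on the last layer, e.g.:
# Dense(8, activation="sigmoid",
#       activity_regularizer=TopKActivityRegularizer(k=2))
```

    Unlike the hard thresholding above, this keeps gradients flowing to all outputs and merely encourages sparsity, so it can be used during training while the Lambda trick is applied at inference.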
    

    【Comments】:
