【Question title】: Freezing TensorFlow2 layers
【Posted】: 2020-08-16 12:49:14
【Problem description】:

I have a LeNet-300-100 dense neural network for the MNIST dataset, and I want to freeze the first two hidden layers (with 300 and 100 hidden neurons, respectively) and train only the output layer. The code I have for doing this is as follows:

import tensorflow as tf
from tensorflow import keras

inner_model = keras.Sequential(
    [
        keras.Input(shape=(1024,)),
        keras.layers.Dense(300, activation="relu", kernel_initializer = tf.initializers.GlorotNormal()),
        keras.layers.Dense(100, activation="relu", kernel_initializer = tf.initializers.GlorotNormal()),
    ]
)

model_mnist = keras.Sequential(
    [keras.Input(shape=(1024,)), inner_model, keras.layers.Dense(10, activation="softmax"),]
)

# model_mnist.trainable = True  # The outer model stays trainable
# Freeze the inner model
inner_model.trainable = False


# Sanity check
inner_model.trainable, model_mnist.trainable
# (False, True)

# Compile NN-
model_mnist.compile(
    loss=tf.keras.losses.categorical_crossentropy,
    # optimizer='adam',
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.0012),
    metrics=['accuracy'])
    

However, this code does not seem to freeze the first two hidden layers; they are still learning. What am I doing wrong?

Thanks!

【Question discussion】:

    Tags: neural-network tensorflow2.0


    【Solution 1】:

    Solution: use the `trainable` argument when defining the layers of the model to freeze the desired layers, like so:

    import tensorflow as tf
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense

    model = Sequential()

    model.add(Dense(units = 300, activation = "relu", kernel_initializer = tf.initializers.GlorotNormal(), trainable = False))

    model.add(Dense(units = 100, activation = "relu", kernel_initializer = tf.initializers.GlorotNormal(), trainable = False))

    model.add(Dense(units = 10, activation = "softmax"))

    # Compile model as usual
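
    A quick way to sanity-check that freezing worked is to count the trainable weight tensors after compiling: only the output layer's kernel and bias should remain trainable. A minimal sketch along those lines (layer sizes as in the answer; the `input_shape=(784,)` is an assumption based on standard flattened MNIST images, and initializers are omitted for brevity):

    ```python
    import tensorflow as tf
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense

    # Build the model with the first two layers frozen via trainable=False
    model = Sequential()
    model.add(Dense(units=300, activation="relu", input_shape=(784,), trainable=False))
    model.add(Dense(units=100, activation="relu", trainable=False))
    model.add(Dense(units=10, activation="softmax"))

    model.compile(loss="categorical_crossentropy", optimizer="adam",
                  metrics=["accuracy"])

    # Only the output layer's kernel and bias should be trainable:
    # 2 trainable tensors, 4 non-trainable tensors (kernel + bias of the two frozen layers)
    print(len(model.trainable_weights))
    print(len(model.non_trainable_weights))
    ```

    If the two hidden layers are really frozen, `model.trainable_weights` contains only the output layer's kernel and bias, and a training run will leave the hidden-layer weights untouched. Note that `trainable` must be set (whether per layer or on a sub-model) before `compile()`; changing it afterwards takes effect only after recompiling.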
    

    【Discussion】:
