【Title】: Keras ModelCheckpoint is introducing additional layers while saving the model
【Posted】: 2020-04-26 10:53:36
【Question】:

I am trying to save a model using ModelCheckpoint in Keras. I save the model with the following code snippet.

model = load_vgg()
parallel_model = keras.utils.multi_gpu_model(model, gpus=2)
parallel_model.compile(loss="binary_crossentropy", metrics=['accuracy'], optimizer=Adam())
early_stopping = EarlyStopping(monitor='val_loss', min_delta=0, patience=5, verbose=1, mode='min')
checkpoint = ModelCheckpoint(os.path.join(output_dir, "model.h5"), monitor='val_loss', verbose=1, save_best_only=True, mode='auto')
history = parallel_model.fit_generator(train_gen, steps_per_epoch=math.ceil(num_train_samples / batch_size), validation_data=val_gen, validation_steps=math.ceil(num_val_samples / batch_size), epochs=200, verbose=1, class_weight=class_weights, callbacks=[checkpoint, early_stopping])
model.save(os.path.join(output_dir, 'model_2.h5'))

The model is defined with the following code:

from functools import partial
import keras

def load_vgg(in_shape=(x, y), n_classes=1, n_stages_per_blocks=[2, 2, 2, 2, 2]):
  in_layer = keras.layers.Input(in_shape)
  block1 = _block(in_layer, 64, n_stages_per_blocks[0])
  pool1 = keras.layers.MaxPool1D()(block1)
  block2 = _block(pool1, 128, n_stages_per_blocks[1])
  pool2 = keras.layers.MaxPool1D()(block2)
  block3 = _block(pool2, 256, n_stages_per_blocks[2])
  pool3 = keras.layers.MaxPool1D()(block3)
  block4 = _block(pool3, 512, n_stages_per_blocks[3])
  pool4 = keras.layers.MaxPool1D()(block4)
  block5 = _block(pool4, 512, n_stages_per_blocks[4])
  pool5 = keras.layers.MaxPool1D()(block5)
  flattened = keras.layers.Flatten()(pool5)
  dense1 = keras.layers.Dense(2048, activation='relu')(flattened)
  dense2 = keras.layers.Dense(1024, activation='relu')(dense1)
  preds = keras.layers.Dense(n_classes, activation='sigmoid')(dense2)
  model = keras.models.Model(in_layer, preds)
  model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
  return model

convBlock = partial(keras.layers.Conv1D, kernel_size=3, strides=1, padding='same', activation='relu')

def _block(in_tensor, filters, n_convs):
  conv_block = in_tensor
  for _ in range(n_convs):
    conv_block = convBlock(filters=filters)(conv_block)
  return conv_block

Problem: when we load the model saved with ModelCheckpoint and the model saved directly with the save function, they give us different model summaries.

Summary of the model saved with ModelCheckpoint: (screenshot not shown)
Summary of the model saved with the model's save function: (screenshot not shown)

Why does ModelCheckpoint introduce three additional layers and move the model into a model_1 layer? What changes do I have to make so that the model saved by ModelCheckpoint has the same structure as the one obtained with the save function? Any help is appreciated. Please let me know if you need any additional information.

【Comments】:

    Tags: python python-3.x keras deep-learning


    【Solution 1】:

    According to the Keras documentation:

    To save the multi-gpu model, use .save(fname) or .save_weights(fname) with the template model (the argument you passed to multi_gpu_model), rather than the model returned by multi_gpu_model.

    The same problem occurs when we use ModelCheckpoint: the callback is invoked on the multi-GPU model, which is incorrect.

    There are two solutions: 1) implement a version of ModelCheckpoint to which you pass the template model as an argument (code provided below), or 2) follow this suggestion and implement a class that ensures any call to the save function goes through the template model.
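The core of option 2 is attribute delegation: the wrapper intercepts save/load lookups and forwards them to the template model, so a stock ModelCheckpoint writes the single-GPU graph. A minimal sketch of just the delegation mechanism, using plain Python stand-ins instead of Keras classes:

```python
# Stand-in for the template (single-GPU) model you would pass to multi_gpu_model.
class TemplateModel:
    def save(self, path):
        return f"template saved to {path}"

# Stand-in for the parallel model: forwards save/load calls to the template.
class ParallelWrapper:
    def __init__(self, template):
        self._template = template

    def __getattr__(self, name):
        # __getattr__ only fires for attributes not found normally, so
        # save/load lookups are redirected to the template model here.
        if "save" in name or "load" in name:
            return getattr(self._template, name)
        raise AttributeError(name)

wrapper = ParallelWrapper(TemplateModel())
print(wrapper.save("model.h5"))  # delegates to the template model
```

In the real Keras version, the wrapper would subclass Model and hold the graph returned by multi_gpu_model, but the save/load redirection shown here is the whole trick.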

    ModelCheckpoint implementation for multi_gpu_model:

    import keras
    import numpy as np 
    import warnings
    
    class ModelCheckpoint(keras.callbacks.Callback):
    
        def __init__(self, filepath, ser_model, monitor='val_loss', verbose=0,
                     save_best_only=False, save_weights_only=False,
                     mode='auto', period=1):
            super(ModelCheckpoint, self).__init__()
            self.monitor = monitor
            self.verbose = verbose
            self.filepath = filepath
            self.save_best_only = save_best_only
            self.save_weights_only = save_weights_only
            self.period = period
            self.epochs_since_last_save = 0
            self.ser_model = ser_model
    
            if mode not in ['auto', 'min', 'max']:
                warnings.warn('ModelCheckpoint mode %s is unknown, '
                              'fallback to auto mode.' % (mode),
                              RuntimeWarning)
                mode = 'auto'
    
            if mode == 'min':
                self.monitor_op = np.less
                self.best = np.inf
            elif mode == 'max':
                self.monitor_op = np.greater
                self.best = -np.inf
            else:
                if 'acc' in self.monitor or self.monitor.startswith('fmeasure'):
                    self.monitor_op = np.greater
                    self.best = -np.inf
                else:
                    self.monitor_op = np.less
                    self.best = np.inf
    
        def on_epoch_end(self, epoch, logs=None):
            logs = logs or {}
            self.epochs_since_last_save += 1
            if self.epochs_since_last_save >= self.period:
                self.epochs_since_last_save = 0
                filepath = self.filepath.format(epoch=epoch + 1, **logs)
                if self.save_best_only:
                    current = logs.get(self.monitor)
                    if current is None:
                        warnings.warn('Can save best model only with %s available, '
                                      'skipping.' % (self.monitor), RuntimeWarning)
                    else:
                        if self.monitor_op(current, self.best):
                            if self.verbose > 0:
                                print('\nEpoch %05d: %s improved from %0.5f to %0.5f,'
                                      ' saving model to %s'
                                      % (epoch + 1, self.monitor, self.best,
                                         current, filepath))
                            self.best = current
                            if self.save_weights_only:
                                self.ser_model.save_weights(filepath, overwrite=True)
                            else:
                                self.ser_model.save(filepath, overwrite=True)
                        else:
                            if self.verbose > 0:
                                print('\nEpoch %05d: %s did not improve from %0.5f' %
                                      (epoch + 1, self.monitor, self.best))
                else:
                    if self.verbose > 0:
                        print('\nEpoch %05d: saving model to %s' % (epoch + 1, filepath))
                    if self.save_weights_only:
                        self.ser_model.save_weights(filepath, overwrite=True)
                    else:
                        self.ser_model.save(filepath, overwrite=True)
    
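The save_best_only branch in on_epoch_end reduces to comparing the monitored value against the running best with monitor_op. A standalone trace of that bookkeeping in 'min' mode (the val_loss values are made up for illustration):

```python
import numpy as np

# 'min' mode setup, exactly as in the callback's __init__.
monitor_op, best = np.less, np.inf

val_losses = [0.90, 0.85, 0.88, 0.80]  # hypothetical per-epoch val_loss
saved_epochs = []

for epoch, current in enumerate(val_losses, start=1):
    if monitor_op(current, best):   # improvement: a checkpoint is written
        best = current
        saved_epochs.append(epoch)

print(saved_epochs)  # [1, 2, 4] -- epoch 3 (0.88) did not improve on 0.85
```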

    Feel free to post any comments or suggestions!

    【Discussion】:
