【Title】: Customized TF2 Model Save
【Posted】: 2020-05-11 15:08:37
【Description】:

I wrote a custom model in TF2:

class NN(tf.keras.Model):

    def __init__(self,
                 output_dim: int,
                 controller_dime: int = 128,
                 interface_dim: int = 35,
                 netsize: int = 100,
                 degree: int = 20,
                 k: float = 2,
                 name: str = 'dnc_rn') -> None:

It is full of non-trainable random parameters, so I need to save the whole model. I cannot use save_weights, because each model's training depends on its own random parameters...


The trainer file looks like this:

import numpy as np
import tensorflow as tf

def trainer(model: tf.keras.Model,
            loss_fn: tf.keras.losses,
            X_train: np.ndarray,
            y_train: np.ndarray = None,
            optimizer: tf.keras.optimizers = tf.keras.optimizers.Adam(learning_rate=1e-3),
            loss_fn_kwargs: dict = None,
            epochs: int = 1000000,
            batch_size: int = 1,
            buffer_size: int = 2048,
            shuffle: bool = False,
            verbose: bool = True,
            show_model_interface_vector: bool = False
            ) -> list:
    """
    Train TensorFlow model.

    Parameters
    ----------
    model
        Model to train.
    loss_fn
        Loss function used for training.
    X_train
        Training batch.
    y_train
        Training labels.
    optimizer
        Optimizer used for training.
    loss_fn_kwargs
        Kwargs for loss function.
    epochs
        Number of training epochs.
    batch_size
        Batch size used for training.
    buffer_size
        Maximum number of elements that will be buffered when prefetching.
    shuffle
        Whether to shuffle training data.
    verbose
        Whether to print training progress.
    show_model_interface_vector
        Whether to expose the model's interface vector during training.
    """
    model.show_interface_vector = show_model_interface_vector

    # Create dataset
    if y_train is None:  # unsupervised model
        train_data = X_train
    else:
        train_data = (X_train, y_train)
    train_data = tf.data.Dataset.from_tensor_slices(train_data)
    if shuffle:
        train_data = train_data.shuffle(buffer_size=buffer_size)
    train_data = train_data.batch(batch_size)  # batch in both cases, not only when shuffling

    # Create the progress bar once, not on every epoch
    if verbose:
        pbar = tf.keras.utils.Progbar(target=epochs, width=40, verbose=1, interval=0.05)

    # Iterate over epochs
    history = []
    for epoch in range(epochs):

        # Iterate over the batches of the dataset
        for step, train_batch in enumerate(train_data):

            if y_train is None:
                X_train_batch = train_batch
            else:
                X_train_batch, y_train_batch = train_batch

            with tf.GradientTape() as tape:
                preds = model(X_train_batch)

                if y_train is None:
                    ground_truth = X_train_batch
                else:
                    ground_truth = y_train_batch

                # Compute loss
                if tf.is_tensor(preds):
                    args = [ground_truth, preds]
                else:
                    args = [ground_truth] + list(preds)

                if loss_fn_kwargs:
                    loss = loss_fn(*args, **loss_fn_kwargs)
                else:
                    loss = loss_fn(*args)

                if model.losses:  # additional model losses
                    loss += sum(model.losses)

            grads = tape.gradient(loss, model.trainable_weights)
            optimizer.apply_gradients(zip(grads, model.trainable_weights))

        if verbose:
            loss_val = loss.numpy().mean()
            pbar_values = [('loss', loss_val)]
            pbar.update(epoch + 1, values=pbar_values)

        history.append(loss.numpy().mean())

    model.show_interface_vector = not show_model_interface_vector
    return history

After training I try to save the model, but when I call TF2's .save:

model.save('a.h5')

I get this error:

NotImplementedError: Saving the model to HDF5 format requires the model to be a Functional model or a Sequential model. It does not work for subclassed models, because such models are defined via the body of a Python method, which isn't safely serializable. Consider saving to the Tensorflow SavedModel format (by setting save_format="tf") or using `save_weights`.

I changed it to the .tf format, but again:

ValueError: Model <model2.NN object at 0x11448b390> cannot be saved because the input shapes have not been set. Usually, input shapes are automatically determined from calling .fit() or .predict(). To manually set the shapes, call model._set_inputs(inputs).

But it has already been trained, and if I call _set_inputs:

ValueError: Cannot infer num from shape (None, 12, 4)

I don't know what to do. I am a scientist and an amateur at TF2. Please help me, this is important for my project...
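One way past both SavedModel errors is to run a single forward pass on a concretely shaped batch before saving: that builds the input shapes, so there is no need to call the private _set_inputs. This is a minimal sketch; SmallNN and the (1, 12, 4) batch shape (taken from the traceback) are illustrative stand-ins, not the DNC model:

```python
import numpy as np
import tensorflow as tf

class SmallNN(tf.keras.Model):
    """Stand-in subclassed model, analogous to NN above."""

    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(4)

    def call(self, x):
        return self.dense(x)

model = SmallNN()
dummy = np.zeros((1, 12, 4), dtype=np.float32)
_ = model(dummy)                           # one forward pass sets the input shapes
model.save('small_nn', save_format='tf')   # SavedModel directory, not .h5
reloaded = tf.keras.models.load_model('small_nn')
```

After this, `reloaded(dummy)` produces the same shapes as the original model.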

【Question Comments】:

Tags: tensorflow keras tensorflow2.0 tf.keras tensorflow2.x


【Solution 1】:

The first error says that you cannot use model.save with a subclassed model, i.e. a model defined as a class that inherits from tf.keras.models.Model. As the error suggests, try building the model with the functional or sequential API instead, so that you can save it in h5 format.

【Comments】:

  • How can I rewrite my model file (I read the link)? The model comes from Google's DNC paper, and it is heavily customized and full of data structures, so I cannot implement it with the higher-level APIs (memory is not my problem, but training takes a lot of time: millions of epochs on thousands of samples).
  • Unfortunately, some models are only implemented with the subclassing API because of their complexity. In the past I tried to convert some large models like XLNet without success, because error after error kept appearing. I strongly suggest you look for another model, or find another way to save its parameters.
  • Another thing you can try is to train your model with the TF1 code from the official repo (I suppose there is one).
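For reference, the commenters' suggestion looks like this in code: a Functional model saves to HDF5 without the NotImplementedError. The layer sizes below are illustrative, not the DNC architecture:

```python
import tensorflow as tf

# A Functional model: layers wired between explicit Input and output tensors
inputs = tf.keras.Input(shape=(12, 4))
x = tf.keras.layers.Flatten()(inputs)
outputs = tf.keras.layers.Dense(10)(x)
model = tf.keras.Model(inputs, outputs)

model.save('functional.h5')  # HDF5 works here because the model is not subclassed
reloaded = tf.keras.models.load_model('functional.h5')
```

The trade-off, as the comments note, is that an architecture like the DNC may not be expressible this way.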
【Solution 2】:

I found the answer, using:

import dill

# Serialize the whole model object with dill (plain pickle is not needed here)
with open("model.pickle", "wb") as f:
    dill.dump(model, f)
with open("model.pickle", "rb") as f:
    model_reloaded = dill.load(f)

From this topic: Saving an Object (Data persistence)

【Comments】:

【Solution 3】:

If you use a customized, complex model, meaning that you create the optimizer, compute the gradients and apply them to some complex parts yourself, then tf.keras.Model.save is not a good fit, especially when the input shapes are not defined (in my opinion).

In that case the tf.train.Checkpoint API is the right tool. See the tutorial link. tf.train.Checkpoint can save the model and the optimizer at the same time, and its usage is similar to tf.compat.v1.train.Saver, so it can also serve as a replacement when migrating code from TensorFlow 1 to TensorFlow 2.
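A minimal sketch of the tf.train.Checkpoint approach described above; the tiny one-variable model and the './ckpts' directory are illustrative:

```python
import tensorflow as tf

class TinyModel(tf.keras.Model):
    """Stand-in for a subclassed model with its own variables."""

    def __init__(self):
        super().__init__()
        self.w = tf.Variable(3.0)

    def call(self, x):
        return self.w * x

model = TinyModel()
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

# Track model AND optimizer state in one checkpoint object
ckpt = tf.train.Checkpoint(model=model, optimizer=optimizer)
manager = tf.train.CheckpointManager(ckpt, './ckpts', max_to_keep=3)

save_path = manager.save()               # write a numbered checkpoint

model.w.assign(0.0)                      # pretend the trained value was lost
ckpt.restore(manager.latest_checkpoint)  # variables are restored in place
```

Unlike model.save, this works even when the input shapes were never set, because it serializes the tracked variables rather than a traced computation graph.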

【Comments】:
