【Title】: Custom Accuracy/Loss for each Output in Multiple Output Model in Keras
【Posted】: 2019-01-10 19:11:23
【Question】:

I am trying to define custom loss and accuracy functions for each output of a two-output neural network model in Keras. Let's call the two outputs A and B.

My goals are:

  1. Give the accuracy/loss functions for one of the outputs names such that they are reported on the same TensorBoard graphs as the corresponding outputs of older/existing models I already have runs for. So, for example, the accuracy and loss for output A of this two-output network should be viewable on the same TensorBoard graphs as output A of some older models I have. More specifically, these older models all output A_output_acc, val_A_output_acc, A_output_loss and val_A_output_loss. So I want the corresponding metric readouts for the A output of this new model to have those names as well, so that they are viewable/comparable on the same TensorBoard graphs.
  2. Allow the accuracy/loss functions to be easily configured, so that I can swap in different losses/accuracies for each output at will, without hard-coding them.

I have a Modeler class that constructs and compiles the network. The relevant code follows.

class Modeler(BaseModeler):
  def __init__(self, loss=None,accuracy=None, ...):
    """
    Returns compiled keras model.  

    """
    self.loss = loss
    self.accuracy = accuracy
    model = self.build()

    ...

    model.compile(
        loss={ # we are explicit here and name the outputs even though in this case it's not necessary
            "A_output": self.A_output_loss(),#loss,
            "B_output": self.B_output_loss()#loss
        },
        optimizer=optimus,
        metrics= { # we need to tie each output to a specific list of metrics
            "A_output": [self.A_output_acc()],
                            # self.A_output_loss()], # redundant since it's already reported via `loss` param,
                                                        # ends up showing up as `A_output_loss_1` since keras
                                                        # already reports `A_output_loss` via loss param
            "B_output": [self.B_output_acc()]
                            # self.B_output_loss()]  # redundant since it's already reported via `loss` param
                                                        # ends up showing up as `B_output_loss_1` since keras
                                                        # already reports `B_output_loss` via loss param
        })

    self._model = model


  def A_output_acc(self):
    """
    Allows us to output custom train/test accuracy/loss metrics under desired names, e.g. 'A_output_acc' and
    'val_A_output_acc' respectively, so that they may be plotted on the same TensorBoard graph as the accuracies
    from other models that share the same outputs.

    :return:    accuracy metric
    """

    acc = None
    if self.accuracy == TypedAccuracies.BINARY:
        def acc(y_true, y_pred):
            return self.binary_accuracy(y_true, y_pred)
    elif self.accuracy == TypedAccuracies.DICE:
        def acc(y_true, y_pred):
            return self.dice_coef(y_true, y_pred)
    elif self.accuracy == TypedAccuracies.JACARD:
        def acc(y_true, y_pred):
            return self.jacard_coef(y_true, y_pred)
    else:
        logger.debug('ERROR: undefined accuracy specified: {}'.format(self.accuracy))

    return acc


  def A_output_loss(self):
    """
    Allows us to output custom train/test accuracy/loss metrics under desired names, e.g. 'A_output_loss' and
    'val_A_output_loss' respectively, so that they may be plotted on the same TensorBoard graph as the losses
    from other models that share the same outputs.

    :return:    loss metric
    """

    loss = None
    if self.loss == TypedLosses.BINARY_CROSSENTROPY:
        def loss(y_true, y_pred):
            return self.binary_crossentropy(y_true, y_pred)
    elif self.loss == TypedLosses.DICE:
        def loss(y_true, y_pred):
            return self.dice_coef_loss(y_true, y_pred)
    elif self.loss == TypedLosses.JACARD:
        def loss(y_true, y_pred):
            return self.jacard_coef_loss(y_true, y_pred)
    else:
        logger.debug('ERROR: undefined loss specified: {}'.format(self.loss))

    return loss


  def B_output_acc(self):
    """
    Allows us to output custom train/test accuracy/loss metrics under desired names, e.g. 'B_output_acc' and
    'val_B_output_acc' respectively, so that they may be plotted on the same TensorBoard graph as the accuracies
    from other models that share the same outputs.

    :return:    accuracy metric
    """

    acc = None
    if self.accuracy == TypedAccuracies.BINARY:
        def acc(y_true, y_pred):
            return self.binary_accuracy(y_true, y_pred)
    elif self.accuracy == TypedAccuracies.DICE:
        def acc(y_true, y_pred):
            return self.dice_coef(y_true, y_pred)
    elif self.accuracy == TypedAccuracies.JACARD:
        def acc(y_true, y_pred):
            return self.jacard_coef(y_true, y_pred)
    else:
        logger.debug('ERROR: undefined accuracy specified: {}'.format(self.accuracy))

    return acc


  def B_output_loss(self):
    """
    Allows us to output custom train/test accuracy/loss metrics under desired names, e.g. 'B_output_loss' and
    'val_B_output_loss' respectively, so that they may be plotted on the same TensorBoard graph as the losses
    from other models that share the same outputs.

    :return:    loss metric
    """

    loss = None
    if self.loss == TypedLosses.BINARY_CROSSENTROPY:
        def loss(y_true, y_pred):
            return self.binary_crossentropy(y_true, y_pred)
    elif self.loss == TypedLosses.DICE:
        def loss(y_true, y_pred):
            return self.dice_coef_loss(y_true, y_pred)
    elif self.loss == TypedLosses.JACARD:
        def loss(y_true, y_pred):
            return self.jacard_coef_loss(y_true, y_pred)
    else:
        logger.debug('ERROR: undefined loss specified: {}'.format(self.loss))

    return loss


  def load_model(self, model_path=None):
    """
    Returns built model from model_path assuming using the default architecture.

    :param model_path:   str, path to model file
    :return:             defined model with weights loaded
    """

    custom_objects = {'A_output_acc': self.A_output_acc(),
                      'A_output_loss': self.A_output_loss(),
                      'B_output_acc': self.B_output_acc(),
                      'B_output_loss': self.B_output_loss()}
    self.model = load_model(filepath=model_path, custom_objects=custom_objects)
    return self


  def build(self, stuff...):
    """
    Returns model architecture.  Instead of just one task, it performs two: A and B.

    :return:            model
    """

    ...

    A_conv_final = Conv2D(1, (1, 1), activation="sigmoid", name="A_output")(up_conv_224)

    B_conv_final = Conv2D(1, (1, 1), activation="sigmoid", name="B_output")(up_conv_224)

    model = Model(inputs=[input], outputs=[A_conv_final, B_conv_final], name="my_model")
    return model

Training works fine. However, when I later go to load the model for inference using the load_model() function above, Keras complains that it does not know the custom metrics I am giving it:

ValueError: Unknown loss function:loss

What seems to be happening is that Keras appends the name of the returned function created inside each of the custom metric functions above (def loss(...), def acc(...)) to the dict key given in the metrics parameter of model.compile(). So, for example, the key is A_output, and we call the custom accuracy function A_output_acc() for it, which returns a function named acc. The result is therefore A_output + acc = A_output_acc. This means I cannot name those returned functions acc/loss anything else, because that would mess up the reporting/graphs. That is all fine, but I don't know how to write my load function with a properly defined custom_objects parameter (or how to define/name my custom metric functions) so that Keras knows which custom accuracy/loss function to load for each output head.
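The concatenation behaviour described above can be sketched without Keras at all: the reported key is the output-layer name joined with the inner metric function's `__name__`. This is only a minimal illustrative sketch — `make_metric` and `reported_key` are hypothetical helpers, not Keras internals:

```python
# Minimal sketch of the observed naming behaviour: the TensorBoard/log key
# is "<output_name>_<metric.__name__>". Helper names here are hypothetical.

def make_metric(base_fn, name):
    """Wrap a metric and force the __name__ it will be reported under."""
    def metric(y_true, y_pred):
        return base_fn(y_true, y_pred)
    metric.__name__ = name
    return metric

def reported_key(output_name, metric_fn):
    # Mirrors the observed concatenation: output name + "_" + function name.
    return output_name + "_" + metric_fn.__name__

acc = make_metric(lambda y_true, y_pred: 0.0, "acc")
print(reported_key("A_output", acc))  # A_output_acc
```

This is why the inner functions must keep the names acc/loss: any other `__name__` would change the key the metric is logged under.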

What's more, it seems to want a custom_objects dictionary in load_model() of the following form (which, for obvious reasons, will not work):

custom_objects = {'acc': self.A_output_acc(),
                  'loss': self.A_output_loss(),
                  'acc': self.B_output_acc(),
                  'loss': self.B_output_loss()}

instead of:

custom_objects = {'A_output_acc': self.A_output_acc(),
                  'A_output_loss': self.A_output_loss(),
                  'B_output_acc': self.B_output_acc(),
                  'B_output_loss': self.B_output_loss()}
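Note that the first form cannot even be expressed as a Python dict literal: duplicate keys are silently collapsed, keeping only the last value per key. A quick sketch (placeholder strings stand in for the actual metric/loss functions):

```python
# Python dicts keep only the last value for a repeated key, so the
# duplicate-key form above silently collapses to two entries.
custom_objects = {'acc': 'A_output_acc_fn',
                  'loss': 'A_output_loss_fn',
                  'acc': 'B_output_acc_fn',
                  'loss': 'B_output_loss_fn'}
print(len(custom_objects))    # 2
print(custom_objects['acc'])  # B_output_acc_fn
```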

Any insights or workarounds?

Thanks!

EDIT:

I have confirmed that the above reasoning about the key/function-name concatenation is correct for the metrics parameter of Keras's model.compile() call. However, for the loss parameter of model.compile(), Keras just concatenates the key with the word loss for reporting, yet expects the name of the custom loss function itself in the custom_objects parameter of model.load_model()... as shown.
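One workaround consistent with this observation — sketched under the assumption that Keras resolves custom objects by the function's `__name__` — is to rename the inner function inside each factory, so the name serialized with the saved model lines up with the `custom_objects` key at load time. `make_named_loss` below is a hypothetical helper, not part of the question's code:

```python
# Hedged sketch: force the inner function's __name__ so that the name
# stored with the saved model matches the custom_objects key on load.

def make_named_loss(base_fn, name):
    def loss(y_true, y_pred):
        return base_fn(y_true, y_pred)
    loss.__name__ = name  # e.g. 'A_output_loss' instead of 'loss'
    return loss

a_loss = make_named_loss(lambda y_true, y_pred: abs(y_true - y_pred),
                         'A_output_loss')
print(a_loss.__name__)  # A_output_loss
print(a_loss(1, 4))     # 3

# At load time the key then lines up:
# custom_objects = {'A_output_loss': a_loss, ...}
```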

【Comments】:

Tags: python tensorflow keras


【Solution 1】:

Remove the () at the end of your losses and metrics and that should be it. It would look like this:

loss={ 
       "A_output": self.A_output_loss,
       "B_output": self.B_output_loss
      }

【Discussion】:
