[Question Title]: Training of multi-output Keras model on a joint loss function
[Posted]: 2019-11-21 15:00:14
[Question]:

I am writing two joint decoders in Keras, with one common input, two separate outputs, and a loss function that takes both outputs into account. The problem I am running into is the loss function.

Here is minimal Keras code with which you can reproduce the error:

import numpy as np
import tensorflow as tf

from keras.layers import Input, Reshape, Permute, Lambda, Flatten
from keras.layers.core import Dense
from keras.layers.advanced_activations import LeakyReLU
from keras.models import Model
from keras import backend as K

def identity(x):
    return K.identity(x)

# custom loss function
def custom_loss():
    def my_loss(y_dummy, pred):
        fcn_loss_1 = tf.nn.softmax_cross_entropy_with_logits(labels=y_dummy[0], logits=pred[0])
        fcn_loss_2 = tf.nn.softmax_cross_entropy_with_logits(labels=y_dummy[1], logits=pred[1])
        fcn_loss_2 = tf.matrix_band_part(fcn_loss_2, 0, -1) - tf.matrix_band_part(fcn_loss_2, 0, 0)

        fcn_loss = tf.reduce_mean(fcn_loss_1) + 2 * tf.reduce_mean(fcn_loss_2)

        return fcn_loss
    return my_loss

def keras_version():
    input = Input(shape=(135,), name='feature_input')
    out1 = Dense(128, kernel_initializer='glorot_normal', activation='linear')(input)
    out1 = LeakyReLU(alpha=.2)(out1)
    out1 = Dense(256, kernel_initializer='glorot_normal', activation='linear')(out1)
    out1 = LeakyReLU(alpha=.2)(out1)
    out1 = Dense(512, kernel_initializer='glorot_normal', activation='linear')(out1)
    out1 = LeakyReLU(alpha=.2)(out1)
    out1 = Dense(45, kernel_initializer='glorot_normal', activation='linear')(out1)
    out1 = LeakyReLU(alpha=.2)(out1)
    out1 = Reshape((9, 5))(out1)

    out2 = Dense(128, kernel_initializer='glorot_normal', activation='linear')(input)
    out2 = LeakyReLU(alpha=.2)(out2)
    out2 = Dense(256, kernel_initializer='glorot_normal', activation='linear')(out2)
    out2 = LeakyReLU(alpha=.2)(out2)
    out2 = Dense(512, kernel_initializer='glorot_normal', activation='linear')(out2)
    out2 = LeakyReLU(alpha=.2)(out2)
    out2 = Dense(540, kernel_initializer='glorot_normal', activation='linear')(out2)
    out2 = LeakyReLU(alpha=.2)(out2)
    out2 = Reshape((9, 4, 15))(out2)
    out2 = Lambda(lambda x: K.dot(K.permute_dimensions(x, (0, 2, 1, 3)),
                                  K.permute_dimensions(x, (0, 2, 3, 1))), output_shape=(4,9,9))(out2)
    out2 = Flatten()(out2)
    out2 = Dense(324, kernel_initializer='glorot_normal', activation='linear')(out2)
    out2 = LeakyReLU(alpha=.2)(out2)
    out2 = Reshape((4, 9, 9))(out2)
    out2 = Lambda(lambda x: K.permute_dimensions(x, (0, 2, 3, 1)))(out2)

    out1 = Lambda(identity, name='output_1')(out1)
    out2 = Lambda(identity, name='output_2')(out2)

    return Model(input, [out1, out2])

model = keras_version()
model.compile(loss=custom_loss(), optimizer='adam')

model.summary()

feature_final = np.random.normal(0,1,[5000, 9, 15])
train_features_array = np.random.normal(0,1,[5000, 9, 5])
train_adj_array = np.random.normal(0,1,[5000, 9, 9, 4])

feature_final = feature_final.reshape(-1, 135)
model.fit(feature_final, [train_features_array, train_adj_array],
                batch_size=50,
                epochs=10
                )

The error I get is:

File "...", line 135, in <module>
    epochs=10
File ".../keras/engine/training.py", line 1039, in fit
    validation_steps=validation_steps)
File ".../keras/backend/tensorflow_backend.py", line 2675, in _call
    fetched = self._callable_fn(*array_vals)
File ".../tensorflow/python/client/session.py", line 1458, in __call__
    run_metadata_ptr)
tensorflow.python.framework.errors_impl.InvalidArgumentError: input must be at least 2-dim, received shape: [9]
     [[{{node loss/output_1_loss/MatrixBandPart_1}}]]

In a second attempt, I tried writing two loss functions and combining them with loss weights.

# custom loss function
def custom_loss_1():
    def my_loss_1(y_dummy, pred):
        fcn_loss_1 = tf.nn.softmax_cross_entropy_with_logits(labels=y_dummy[0], logits=pred[0])

        return tf.reduce_mean(fcn_loss_1)
    return my_loss_1

def custom_loss_2():
    def my_loss_2(y_dummy, pred):
        fcn_loss_2 = tf.nn.softmax_cross_entropy_with_logits(labels=y_dummy[1], logits=pred[1])
        fcn_loss_2 = tf.matrix_band_part(fcn_loss_2, 0, -1) - tf.matrix_band_part(fcn_loss_2, 0, 0)

        return tf.reduce_mean(fcn_loss_2)
    return my_loss_2

model.compile(loss={'output_1':custom_loss_1(), 'output_2':custom_loss_2()},
              loss_weights={'output_1':1.0, 'output_2':2.0}, optimizer='adam')

But then I got:

tensorflow.python.framework.errors_impl.InvalidArgumentError: Matrix size-incompatible: In[0]: [20,25920], In[1]: [324,324]
     [[{{node dense_9/BiasAdd}}]]

In this case, the problem may actually come from the model itself. Here is the model.summary:

__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
feature_input (InputLayer)      (None, 135)          0                                            
__________________________________________________________________________________________________
dense_5 (Dense)                 (None, 128)          17408       feature_input[0][0]              
__________________________________________________________________________________________________
leaky_re_lu_5 (LeakyReLU)       (None, 128)          0           dense_5[0][0]                    
__________________________________________________________________________________________________
dense_6 (Dense)                 (None, 256)          33024       leaky_re_lu_5[0][0]              
__________________________________________________________________________________________________
leaky_re_lu_6 (LeakyReLU)       (None, 256)          0           dense_6[0][0]                    
__________________________________________________________________________________________________
dense_7 (Dense)                 (None, 512)          131584      leaky_re_lu_6[0][0]              
__________________________________________________________________________________________________
leaky_re_lu_7 (LeakyReLU)       (None, 512)          0           dense_7[0][0]                    
__________________________________________________________________________________________________
dense_1 (Dense)                 (None, 128)          17408       feature_input[0][0]              
__________________________________________________________________________________________________
dense_8 (Dense)                 (None, 540)          277020      leaky_re_lu_7[0][0]              
__________________________________________________________________________________________________
leaky_re_lu_1 (LeakyReLU)       (None, 128)          0           dense_1[0][0]                    
__________________________________________________________________________________________________
leaky_re_lu_8 (LeakyReLU)       (None, 540)          0           dense_8[0][0]                    
__________________________________________________________________________________________________
dense_2 (Dense)                 (None, 256)          33024       leaky_re_lu_1[0][0]              
__________________________________________________________________________________________________
reshape_2 (Reshape)             (None, 9, 4, 15)     0           leaky_re_lu_8[0][0]              
__________________________________________________________________________________________________
leaky_re_lu_2 (LeakyReLU)       (None, 256)          0           dense_2[0][0]                    
__________________________________________________________________________________________________
lambda_1 (Lambda)               (None, 4, 9, 9)      0           reshape_2[0][0]                  
__________________________________________________________________________________________________
dense_3 (Dense)                 (None, 512)          131584      leaky_re_lu_2[0][0]              
__________________________________________________________________________________________________
flatten_1 (Flatten)             (None, 324)          0           lambda_1[0][0]                   
__________________________________________________________________________________________________
leaky_re_lu_3 (LeakyReLU)       (None, 512)          0           dense_3[0][0]                    
__________________________________________________________________________________________________
dense_9 (Dense)                 (None, 324)          105300      flatten_1[0][0]                  
__________________________________________________________________________________________________
dense_4 (Dense)                 (None, 45)           23085       leaky_re_lu_3[0][0]              
__________________________________________________________________________________________________
leaky_re_lu_9 (LeakyReLU)       (None, 324)          0           dense_9[0][0]                    
__________________________________________________________________________________________________
leaky_re_lu_4 (LeakyReLU)       (None, 45)           0           dense_4[0][0]                    
__________________________________________________________________________________________________
reshape_3 (Reshape)             (None, 4, 9, 9)      0           leaky_re_lu_9[0][0]              
__________________________________________________________________________________________________
reshape_1 (Reshape)             (None, 9, 5)         0           leaky_re_lu_4[0][0]              
__________________________________________________________________________________________________
lambda_2 (Lambda)               (None, 9, 9, 4)      0           reshape_3[0][0]                  
__________________________________________________________________________________________________
output_1 (Lambda)               (None, 9, 5)         0           reshape_1[0][0]                  
__________________________________________________________________________________________________
output_2 (Lambda)               (None, 9, 9, 4)      0           lambda_2[0][0]                   
==================================================================================================
Total params: 769,437
Trainable params: 769,437
Non-trainable params: 0
__________________________________________________________________________________________________

If you think there is a problem with the model, please check the "model" above. This question is different from this question, which uses only one output in the loss. Here is also the loss function of a similar model written in Tensorflow:

# -- loss function
Y_1 = tf.placeholder(tf.float32, shape=[None, 9, 9, 4])
Y_2 = tf.placeholder(tf.float32, shape=[None, 9, 5])

loss_1 = tf.nn.softmax_cross_entropy_with_logits(labels=Y_2, logits=fcn(X)[0])
loss_2 = tf.nn.softmax_cross_entropy_with_logits(labels=Y_1, logits=fcn(X)[1])
loss_2 = tf.matrix_band_part(loss_2, 0, -1) - tf.matrix_band_part(loss_2, 0, 0)

loss = tf.reduce_mean(loss_1) + 2 * tf.reduce_mean(loss_2)
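For reference, tf.matrix_band_part(x, 0, -1) keeps the upper triangle (diagonal included) and tf.matrix_band_part(x, 0, 0) keeps only the diagonal, so their difference keeps the strictly upper-triangular entries. A NumPy sketch of the same masking (illustrative, not the TF graph):

```python
import numpy as np

# tf.matrix_band_part(x, 0, -1) keeps the upper triangle (with diagonal);
# tf.matrix_band_part(x, 0, 0) keeps only the diagonal. Their difference
# therefore keeps the strictly upper-triangular entries.
x = np.arange(1.0, 10.0).reshape(3, 3)
strictly_upper = np.triu(x) - np.diag(np.diag(x))
assert np.array_equal(strictly_upper, np.triu(x, k=1))
```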

Edit: I tried the code from the answer with the actual dataset, and the loss function shows different behavior from the Tensorflow implementation of the code. The loss function suggested in the answer converges quickly and then becomes nan. I agree with the answer that output_1 should be categorical. Based on that, I wrote the following loss function, which still does not converge as fast as the Tensorflow one, but definitely does not blow up:

def custom_loss_1(model, output_1):
    """ This loss function is called for output2
        It needs to fetch model.output[0] and the output_1 predictions in
        order to calculate fcn_loss_1
    """
    def my_loss(y_true, y_pred):
        fcn_loss_1 = tf.nn.softmax_cross_entropy_with_logits(labels=model.targets[0], logits=output_1)

        return tf.reduce_mean(fcn_loss_1)

    return my_loss

def custom_loss_2():
    """ This loss function is called for output2
        It needs to fetch model.output[0] and the output_1 predictions in
        order to calculate fcn_loss_1
    """
    def my_loss(y_true, y_pred):
        fcn_loss_2 = tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=y_pred)
        fcn_loss_2 = tf.matrix_band_part(fcn_loss_2, 0, -1) - tf.matrix_band_part(fcn_loss_2, 0, 0)
        return tf.reduce_mean(fcn_loss_2)

    return my_loss

output_layer_1 = [layer for layer in model.layers if layer.name == 'output_1'][0]
losses = {'output_1': custom_loss_1(model, output_layer_1.output), 'output_2': custom_loss_2()}
model.compile(loss=losses, optimizer='adam', loss_weights=[1.0, 2.0])

[Comments]:

    Tags: python tensorflow keras loss-function


    [Solution 1]:

    There are two problems in your code:

    The first is that the K.dot operation inside the Lambda needs to be K.batch_dot.

    I used:

    def output_mult(x):
        a = K.permute_dimensions(x, (0, 2, 1, 3))
        b = K.permute_dimensions(x, (0, 2, 3, 1))
        return K.batch_dot(a, b)
    
    
    out2 = Lambda(output_mult)(out2)
    

    Incidentally, letting Keras compute the output dimensions is very helpful; it is an easy way to check the code. To debug this, I first replaced the custom loss with an existing loss (mse), which made the problem easy to detect.
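As a shape sanity check, the permute-then-batch_dot in output_mult can be mirrored in NumPy with transposes and a batched matmul (a sketch of the shapes only, not the Keras graph):

```python
import numpy as np

# Mirror of output_mult: permute (0, 2, 1, 3) and (0, 2, 3, 1), then a
# batched matrix multiply over the two leading dimensions.
x = np.random.normal(size=(50, 9, 4, 15))   # (batch, 9, 4, 15), as after Reshape
a = np.transpose(x, (0, 2, 1, 3))           # (50, 4, 9, 15)
b = np.transpose(x, (0, 2, 3, 1))           # (50, 4, 15, 9)
out = a @ b                                 # (50, 4, 9, 9), matching the Lambda
```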

    The second problem is that a custom loss function receives a single target/output pair, not a list. The arguments to the loss function are not the list of tensors you assumed, both originally and in your edit. So I defined your loss function as:

    def custom_loss(model, output_1):
        """ This loss function is called for output2
            It needs to fetch model.output[0] and the output_1 predictions in
            order to calculate fcn_loss_1
        """
        def my_loss(y_true, y_pred):
            fcn_loss_1 = tf.nn.softmax_cross_entropy_with_logits(labels=model.targets[0], logits=output_1)
            fcn_loss_2 = tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=y_pred)
            fcn_loss_2 = tf.matrix_band_part(fcn_loss_2, 0, -1) - tf.matrix_band_part(fcn_loss_2, 0, 0)
            return tf.reduce_mean(fcn_loss_2)
    
        return my_loss
    
    

    and used it as:

    output_layer_1 = [layer for layer in model.layers if layer.name == 'output_1'][0]
    losses = {'output_1': 'categorical_crossentropy', 'output_2': custom_loss(model, output_layer_1.output)}
    model.compile(loss=losses, optimizer='adam', loss_weights=[1.0, 2.0])
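The single-pair calling convention can be sketched in plain Python; the helper and loss names below are illustrative, not Keras internals:

```python
# Sketch (assumption: a simplified view of Keras' multi-output loss handling):
# each loss function is called with the (y_true, y_pred) pair of ONE output,
# and the results are combined using loss_weights.
def combine_losses(loss_fns, loss_weights, y_trues, y_preds):
    total = 0.0
    for fn, w, y_true, y_pred in zip(loss_fns, loss_weights, y_trues, y_preds):
        total += w * fn(y_true, y_pred)  # fn never sees the other output
    return total

mse = lambda t, p: sum((a - b) ** 2 for a, b in zip(t, p)) / len(t)
mae = lambda t, p: sum(abs(a - b) for a, b in zip(t, p)) / len(t)
total = combine_losses([mse, mae], [1.0, 2.0], [[1.0, 2.0], [0.0]], [[1.0, 1.0], [1.0]])
```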
    

    Edit: I initially misread the custom loss for output2 as needing the value of fcn_loss_1, but that does not seem to be the case, and you can simply write it as:

    def custom_loss():
        def my_loss(y_true, y_pred):
            fcn_loss_2 = tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=y_pred)
            fcn_loss_2 = tf.matrix_band_part(fcn_loss_2, 0, -1) - tf.matrix_band_part(fcn_loss_2, 0, 0)
            return tf.reduce_mean(fcn_loss_2)
    
        return my_loss
    
    

    and use it as:

    losses = {'output_1': 'categorical_crossentropy', 'output_2': custom_loss()}
    model.compile(loss=losses, optimizer='adam', loss_weights=[1.0, 2.0])
    
    

    I assumed the loss for output_1 is categorical_crossentropy. But even if you need to change it, the simplest way is to have 2 independent loss functions. Of course, you could also choose to define one loss function that returns 0 and one that returns the full cost... but splitting "loss(output1) + 2 * loss(output2)" into two losses plus loss weights is cleaner, IMHO.
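That split preserves the total: a joint "loss_1 + 2 * loss_2" equals two separate losses with loss_weights [1.0, 2.0]. A toy numerical check with mean-squared errors (illustrative, not the question's cross-entropy losses):

```python
import numpy as np

rng = np.random.default_rng(0)
t1, p1 = rng.normal(size=(9, 5)), rng.normal(size=(9, 5))        # output_1 pair
t2, p2 = rng.normal(size=(9, 9, 4)), rng.normal(size=(9, 9, 4))  # output_2 pair

def mse(t, p):
    return float(np.mean((t - p) ** 2))

joint = mse(t1, p1) + 2.0 * mse(t2, p2)            # one combined loss
weighted = 1.0 * mse(t1, p1) + 2.0 * mse(t2, p2)   # two losses + loss_weights
assert abs(joint - weighted) < 1e-12
```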

    Full notebook: https://colab.research.google.com/drive/1NG3uIiesg-VIt-W9254Sea2XXUYPoVH5

    [Comments]:

    • I edited my question after testing the loss function you suggested, but it diverges and becomes nan. It also still diverges when defined as a single loss function. I modified it a bit and explicitly defined categorical_crossentropy as a separate loss function. I think that when setting loss to 'categorical_crossentropy', the code somehow does not take the correct inputs and outputs into account. When using model.fit(feature_final, {'output_1': train_features_array, 'output_2': train_adj_array}), would you mind confirming what model.targets[0], output_layer_1, y_true, and y_pred are?
    • The arguments of the loss function are tensors, not lists. The line labels=y_dummy[1], logits=pred[1] is slicing the model target and output at batch index 1, which is certainly not what you want. The inputs to the loss function are y_true (it is not a dummy) and y_pred, where y_true is the model.target corresponding to that output.
    • Also, custom_loss_1 is a categorical_crossentropy loss. See github.com/keras-team/keras/blob/master/keras/losses.py#L68 and github.com/keras-team/keras/blob/master/keras/backend/…. I would just use the existing implementation. You can also treat that code as an example of what the arguments should be.
    • Yes, I agree custom_loss_1 is categorical_crossentropy and they should behave the same, but they do not; only the former works correctly. It might be better if I raise this as a separate question. For completeness, I fit with model.fit(feature_final, {'output_1': train_features_array, 'output_2': train_adj_array}, batch_size=100, epochs=300). Thanks for the thorough reply. Also, if you think it's a good question, good questions need upvotes too ;)
    • @JeroenVermunt I have updated the notebook. model.targets is now model.outputs, and tf.matrix_band_part is tf.linalg.band_part.