[Question Title]: Error with custom loss function in CVAE model using Keras
[Posted]: 2020-09-08 10:01:45
[Question Description]:

I am trying to build a convolutional variational autoencoder (CVAE), so I need a vae_loss() function that combines the MSE reconstruction loss with the KL-divergence loss. It looks like this:

def vae_loss(y_true, y_pred):
    # mse loss
    reconstruction_loss = K.sum(K.square(y_true - y_pred), axis=-1)
    # kl loss
    kl_loss = 1 + z_log_var - K.square(z_mean) - K.exp(z_log_var)
    kl_loss = K.sum(kl_loss, axis=-1)
    kl_loss *= -0.5
    weight = 0.
    return reconstruction_loss + (weight * kl_loss)
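As a standalone sanity check of the KL term (a NumPy sketch independent of Keras, not part of the original question), the closed-form KL divergence used above is exactly zero when the posterior matches the standard-normal prior, and positive otherwise:

```python
import numpy as np

def kl_term(z_mean, z_log_var):
    # Closed-form KL divergence between N(mu, exp(log_var)) and N(0, I),
    # summed over the latent dimensions -- same algebra as the Keras loss above.
    kl = 1 + z_log_var - np.square(z_mean) - np.exp(z_log_var)
    return -0.5 * np.sum(kl, axis=-1)

# A standard-normal posterior (mu = 0, log_var = 0) carries no KL penalty,
# while a nonzero mean pushes the penalty above zero.
print(kl_term(np.zeros(64), np.zeros(64)) == 0.0)  # True
print(kl_term(np.ones(64), np.zeros(64)))          # 32.0
```

Note also that with `weight = 0.` as written, the KL term contributes nothing and the loss reduces to the summed squared error alone.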

My model looks like this:

input_img = Input(shape=(image_resolution(), image_resolution(), 1))
latent_dim = 64     #bottleneck

# ENCODER
e = Conv2D(32, (3, 3), activation='relu', padding='same')(input_img)
e = Conv2D(32, (3, 3), activation='relu', padding='same')(e)
e = MaxPooling2D((2, 2))(e)
e = Conv2D(64, (3, 3), activation='relu', padding='same')(e)
e = MaxPooling2D((2, 2))(e)
e = Conv2D(64, (3, 3), activation='relu', padding='same')(e)
e = MaxPooling2D((2, 2))(e)
e = Conv2D(128, (3, 3), activation='relu', padding='same')(e)
l = Flatten()(e)
#l = Dense(200, activation='relu')(l)                             #transition linear layer
l = Dense(latent_dim, activation='softmax')(l)                    #latent dimension: maximal compression (bottleneck)

#Stochastic latent space
z_mean = Dense(latent_dim, name = 'z_mean')(l)
z_log_var = Dense(latent_dim, name = 'z_log_var')(l)
z = Lambda(sampling, output_shape=(latent_dim,), name='z')([z_mean, z_log_var])

# DECODER
d = Reshape((8, 8, 1))(l)
d = Conv2DTranspose(128, (3, 3), strides=2, activation='relu', padding='same')(d)
d = BatchNormalization()(d)
d = Conv2DTranspose(64, (3, 3), strides=2, activation='relu', padding='same')(d)
d = BatchNormalization()(d)
d = Conv2DTranspose(64, (3, 3), strides=2, activation='relu', padding='same')(d)
d = BatchNormalization()(d)
d = Conv2DTranspose(32, (3, 3), activation='relu', padding='same')(d)
decoded = Conv2D(1, (3, 3), activation='linear', padding='same')(d)

autoencoder = Model(input_img, decoded)
autoencoder.summary()
autoencoder.compile(optimizer='Nadam', loss=vae_loss, metrics=[coeff_determination])
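The model above calls a `sampling` helper that is not shown in the question. A typical reparameterization-trick implementation (an assumption on my part, since the original helper is not included) looks like:

```python
import tensorflow as tf

def sampling(args):
    # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I),
    # so gradients can flow through mu and log-variance.
    z_mean, z_log_var = args
    epsilon = tf.random.normal(shape=tf.shape(z_mean))
    return z_mean + tf.exp(0.5 * z_log_var) * epsilon
```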

When I switch to this custom loss function, I get the following error:

Traceback (most recent call last):
  File "/Users/user/opt/anaconda3/envs/try/lib/python3.6/site-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute
    inputs, attrs, num_outputs)
TypeError: An op outside of the function building code is being passed
a "Graph" tensor. It is possible to have Graph tensors
leak out of the function building context by including a
tf.init_scope in your function building code.
For example, the following function will fail:
  @tf.function
  def has_init_scope():
    my_constant = tf.constant(1.)
    with tf.init_scope():
      added = my_constant * 2
The graph tensor has name: z_log_var/Identity:0

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/user/PycharmProjects/user1/Try/VAE.py", line 108, in <module>
    validation_data=(test_input, test_label)
  File "/Users/user/opt/anaconda3/envs/try/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 66, in _method_wrapper
    return method(self, *args, **kwargs)
  File "/Users/user/opt/anaconda3/envs/try/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 848, in fit
    tmp_logs = train_function(iterator)
  File "/Users/user/opt/anaconda3/envs/try/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 580, in __call__
    result = self._call(*args, **kwds)
  File "/Users/user/opt/anaconda3/envs/try/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 644, in _call
    return self._stateless_fn(*args, **kwds)
  File "/Users/user/opt/anaconda3/envs/try/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 2420, in __call__
    return graph_function._filtered_call(args, kwargs)  # pylint: disable=protected-access
  File "/Users/user/opt/anaconda3/envs/try/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1665, in _filtered_call
    self.captured_inputs)
  File "/Users/user/opt/anaconda3/envs/try/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1746, in _call_flat
    ctx, args, cancellation_manager=cancellation_manager))
  File "/Users/user/opt/anaconda3/envs/try/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 598, in call
    ctx=ctx)
  File "/Users/user/opt/anaconda3/envs/try/lib/python3.6/site-packages/tensorflow/python/eager/execute.py", line 74, in quick_execute
    "tensors, but found {}".format(keras_symbolic_tensors))
tensorflow.python.eager.core._SymbolicException: Inputs to eager execution function cannot be Keras symbolic tensors, but found [<tf.Tensor 'z_log_var/Identity:0' shape=(None, 64) dtype=float32>, <tf.Tensor 'z_mean/Identity:0' shape=(None, 64) dtype=float32>]

I don't know what I am doing wrong or what I am missing. I am currently using TensorFlow 2.2 with Keras 2.3.1.

[Question Discussion]:

    Tags: python tensorflow keras loss-function


    [Solution 1]:

    Could you try the custom_loss implementation mentioned in this example?

    Try this:

    def vae_loss(z_mean, z_log_var):
      def loss(y_true, y_pred):
        # mse loss
        reconstruction_loss = K.sum(K.square(y_true - y_pred), axis=-1)
        # kl loss
        kl_loss = 1 + z_log_var - K.square(z_mean) - K.exp(z_log_var)
        kl_loss = K.sum(kl_loss, axis=-1)
        kl_loss *= -0.5
        weight = 0.
        return reconstruction_loss + (weight * kl_loss)
      return loss
    

    Then update the model.compile line as follows:

    autoencoder.compile(optimizer='Nadam', loss=vae_loss(z_mean, z_log_var), metrics=[coeff_determination])
    

    [Discussion]:

    • With this I get (after a long traceback) the following error: TypeError: Could not convert object of type to tensor. Contents: .loss at 0x15ea52e18>. Consider casting elements to a supported type.
    • Same error traceback as posted above: (...) tensorflow.python.eager.core._SymbolicException: Inputs to eager execution function cannot be Keras symbolic tensors, but found [, ]
    • In the linked example (but not in this answer) there is a run_eagerly=True argument, which may be important.
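    A pattern that sidesteps the symbolic-tensor problem entirely (a sketch of an alternative approach, not taken from this thread; the tiny dense model here is illustrative, not the questioner's CVAE) is to register the KL term with add_loss inside a custom layer. The compiled loss then only ever sees y_true and y_pred, so no graph tensors leak into the eager training function:

    ```python
    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, Model

    latent_dim = 2

    class Sampling(layers.Layer):
        """Reparameterization trick; also registers the KL term via add_loss."""
        def call(self, inputs):
            z_mean, z_log_var = inputs
            kl = -0.5 * tf.reduce_sum(
                1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=-1)
            self.add_loss(tf.reduce_mean(kl))
            eps = tf.random.normal(shape=tf.shape(z_mean))
            return z_mean + tf.exp(0.5 * z_log_var) * eps

    inputs = layers.Input(shape=(8,))
    h = layers.Dense(16, activation='relu')(inputs)
    z_mean = layers.Dense(latent_dim)(h)
    z_log_var = layers.Dense(latent_dim)(h)
    z = Sampling()([z_mean, z_log_var])
    outputs = layers.Dense(8)(z)

    vae = Model(inputs, outputs)
    # The KL term is attached to the model; compile only with the
    # reconstruction loss.
    vae.compile(optimizer='adam', loss='mse')

    x = np.random.rand(32, 8).astype('float32')
    history = vae.fit(x, x, epochs=1, batch_size=16, verbose=0)
    ```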