【Question Title】: ValueError: No gradients provided for any variable in my custom loss - Why?
【Posted】: 2021-04-30 21:55:43
【Description】:

Here is my code (you can copy-paste and run it):

import tensorflow as tf
import numpy as np
from sklearn.preprocessing import MinMaxScaler

x = np.array([[1, 2], [3, 4], [5, 6], [7, 8]]).astype(np.float32)
y = np.array([[-1], [3], [7], [-2]]).astype(np.float32)

# scale x and y
x_scaler = MinMaxScaler()
x_scaler.fit(x)
x_sc = x_scaler.transform(x)

y_scaler = MinMaxScaler()
y_scaler.fit(y)
y_sc = y_scaler.transform(y)

batch_size = 2
ds = tf.data.Dataset.from_tensor_slices((x_sc, y_sc)).batch(batch_size=batch_size)

# create the model
model = tf.keras.Sequential(
    [
        tf.keras.layers.Input(shape=(2,)),
        tf.keras.layers.Dense(units=3, activation='relu'),
        tf.keras.layers.Dense(units=1)
    ]
)

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

def standard_loss(y_batch, y_pred, y_min_max):
    batches = y_pred.shape[0]
    loss = 0.0
    y_true_unsc = tf.convert_to_tensor(y_min_max.inverse_transform(y_batch), tf.float32)
    y_pred_unsc = tf.convert_to_tensor(y_min_max.inverse_transform(y_pred), tf.float32)

    for batch in range(batches):
        loss += tf.math.reduce_mean(tf.math.square(y_true_unsc[batch] - y_pred_unsc[batch]))

    return loss / batches

# training loop
epochs = 1
for epoch in range(epochs):
    print("\nStart of epoch %d" % (epoch,))
    for step, (x_batch, y_batch) in enumerate(ds):
        with tf.GradientTape() as tape:
            y_pred = model(x_batch, training=True)
            loss_value = standard_loss(y_batch, y_pred, y_scaler)

        grads = tape.gradient(loss_value, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))

The problem is in my cost function (standard_loss). When I do not unscale my data, everything works fine, like this:

def standard_loss(y_batch, y_pred, y_min_max):
    batches = y_pred.shape[0]
    loss = 0.0

    for batch in range(batches):
        loss += tf.math.reduce_mean(tf.math.square(y_batch[batch] - y_pred[batch]))

    return loss / batches

But when I leave it as in the first version above, I get this error:

ValueError: No gradients provided for any variable: ['dense/kernel:0', 'dense/bias:0', 'dense_1/kernel:0', 'dense_1/bias:0'].

I need to unscale my data in order to use it in other calculations.

Can anyone help me understand why this happens?

EDIT 1:

The problem comes from the tape (as in `with tf.GradientTape() as tape`), which records all operations and then walks back through that chain of operations when computing the gradients. My goal now is to figure out how to unscale my `y_pred` variable without the tape recording it and losing the path when computing the gradient. Ideas?
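The break in the chain can be reproduced in isolation. Below is a minimal sketch (independent of the model above) showing that a NumPy round-trip inside the tape detaches the result, while the same math written in pure tf ops keeps the gradient:

```python
import tensorflow as tf
import numpy as np

x = tf.Variable(2.0)

# Case 1: a NumPy round-trip inside the tape breaks the recorded chain.
with tf.GradientTape() as tape:
    y = tf.square(x)                            # recorded by the tape
    y_np = np.asarray(y) * 3.0                  # leaves TensorFlow: not recorded
    z = tf.convert_to_tensor(y_np, tf.float32)
g_broken = tape.gradient(z, x)                  # None: the chain was broken

# Case 2: the same math with tf ops only stays on the tape.
with tf.GradientTape() as tape:
    z = tf.square(x) * 3.0
g_ok = tape.gradient(z, x)                      # d(3x^2)/dx at x=2 -> 12.0
```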

EDIT 2:

In my custom loss, the unscale operation is a NumPy operation, and since that leaves TensorFlow, the tape does not record it. That is why the error appears. So I will look for a way to scale my data with TensorFlow operations, so that unscaling can also be done with TensorFlow operations.
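A tf-only replacement for MinMaxScaler's `transform` / `inverse_transform` could look like this (a sketch, assuming sklearn's default feature range of (0, 1)):

```python
import tensorflow as tf

y = tf.constant([[-1.0], [3.0], [7.0], [-2.0]])

# per-feature min and max, as MinMaxScaler.fit would compute them
ymin = tf.reduce_min(y, axis=0)
ymax = tf.reduce_max(y, axis=0)

def tf_minmax_scale(t):
    # same formula as MinMaxScaler.transform with feature_range=(0, 1)
    return (t - ymin) / (ymax - ymin)

def tf_minmax_unscale(t_sc):
    # inverse_transform equivalent, in tf ops so the tape can record it
    return t_sc * (ymax - ymin) + ymin
```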

Solution:

EDIT 2 is the solution. Everything works now.

【Comments】:

Tags: tensorflow loss-function error-handling


【Solution 1】:

In my custom loss, the unscale operation is a NumPy operation, and since we leave TensorFlow there, the tape does not record it. That is why the error appears. One solution is to scale and unscale the data with TensorFlow operations, so the tape can record the full path. See the code below:

import tensorflow as tf
import numpy as np

x = tf.convert_to_tensor([[1, 2], [3, 4], [5, 6], [7, 8]], dtype=tf.float32)
y = tf.convert_to_tensor([[-1], [3], [7], [-2]], dtype=tf.float32)

# retrieve x and y min max
xmin, xmax = tf.reduce_min(x, axis=0), tf.reduce_max(x, axis=0)
ymin, ymax = tf.reduce_min(y, axis=0), tf.reduce_max(y, axis=0)

batch_size = 2
ds = tf.data.Dataset.from_tensor_slices((x, y)).batch(batch_size)

# create the model
model = tf.keras.Sequential(
    [
        tf.keras.layers.Input(shape=(2,)),
        tf.keras.layers.Dense(units=3, activation='relu'),
        tf.keras.layers.Dense(units=1)
    ]
)

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

def standard_loss(y_batch, y_pred):
    # unscale y_pred (note that y_batch has never been scaled)
    y_pred_unsc = y_pred * (ymax - ymin) + ymin

    return tf.reduce_mean(tf.square(y_batch - y_pred_unsc))

# training loop
epochs = 1
for epoch in range(epochs):
    print("\nStart of epoch %d" % (epoch,))
    for step, (x_batch, y_batch) in enumerate(ds):
        with tf.GradientTape() as tape:
            # scale the data (note that we never leave TensorFlow operations)
            x_scale = (x_batch - xmin) / (xmax - xmin)
            y_pred = model(x_scale, training=True)
            loss_value = standard_loss(y_batch, y_pred)

        grads = tape.gradient(loss_value, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
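To confirm that the tape now yields a gradient for every variable, a quick self-contained sanity check (a sketch with a tiny stand-in model and a tf-only loss, mirroring the pattern above) can be run:

```python
import tensorflow as tf

# tiny stand-in model, same structure as the solution's pipeline
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2,)),
    tf.keras.layers.Dense(units=1)
])

x = tf.constant([[0.1, 0.2], [0.3, 0.4]])
y = tf.constant([[1.0], [2.0]])

with tf.GradientTape() as tape:
    y_pred = model(x, training=True)
    loss = tf.reduce_mean(tf.square(y - y_pred))  # pure tf ops: fully recorded

grads = tape.gradient(loss, model.trainable_variables)
all_defined = all(g is not None for g in grads)   # no broken chain: all gradients exist
```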

【Discussion】:
