[Posted at]: 2021-09-11 17:16:12
[Problem description]:
I am trying to compute the gradient of the loss of a simple linear model. However, when I use TensorFlow, the gradients come back as None. Why does this happen, and how do I compute the gradients with TensorFlow?
import numpy as np
import tensorflow as tf

inputs = np.array([[73, 67, 43],
                   [91, 88, 64],
                   [87, 134, 58],
                   [102, 43, 37],
                   [69, 96, 70]], dtype='float32')
targets = np.array([[56, 70],
                    [81, 101],
                    [119, 133],
                    [22, 37],
                    [103, 119]], dtype='float32')

inputs = tf.convert_to_tensor(inputs)
targets = tf.convert_to_tensor(targets)

w = tf.random.normal(shape=(2, 3))
b = tf.random.normal(shape=(2,))
print(w, b)

def model(x):
    return tf.matmul(x, w, transpose_b=True) + b

def mse(t1, t2):
    diff = t1 - t2
    return tf.reduce_sum(diff * diff) / tf.cast(tf.size(diff), 'float32')

with tf.GradientTape() as tape:
    pred = model(inputs)
    loss = mse(pred, targets)

print(tape.gradient(loss, [w, b]))  # prints [None, None]
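For comparison, here is a minimal sketch of a variant that does yield gradients. The assumption behind it: GradientTape only records operations on tf.Variable objects (or tensors explicitly registered with tape.watch), and tf.random.normal returns a plain, unwatched tensor, so wrapping the parameters in tf.Variable is one way to make them trainable:

```python
import numpy as np
import tensorflow as tf

inputs = tf.convert_to_tensor(np.array([[73, 67, 43],
                                        [91, 88, 64],
                                        [87, 134, 58]], dtype='float32'))
targets = tf.convert_to_tensor(np.array([[56, 70],
                                         [81, 101],
                                         [119, 133]], dtype='float32'))

# tf.Variable marks the parameters as trainable, so the tape watches them.
w = tf.Variable(tf.random.normal(shape=(2, 3)))
b = tf.Variable(tf.random.normal(shape=(2,)))

with tf.GradientTape() as tape:
    pred = tf.matmul(inputs, w, transpose_b=True) + b
    diff = pred - targets
    loss = tf.reduce_sum(diff * diff) / tf.cast(tf.size(diff), 'float32')

grads = tape.gradient(loss, [w, b])
print(grads)  # two gradient tensors rather than [None, None]
```

An alternative, if the parameters must stay plain tensors, is to call tape.watch(w) and tape.watch(b) inside the tape context before they are used.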
Here is the working code using PyTorch; the gradients are computed as expected.
import numpy as np  # needed for np.array below
import torch

inputs = np.array([[73, 67, 43],
                   [91, 88, 64],
                   [87, 134, 58],
                   [102, 43, 37],
                   [69, 96, 70]], dtype='float32')
targets = np.array([[56, 70],
                    [81, 101],
                    [119, 133],
                    [22, 37],
                    [103, 119]], dtype='float32')

inputs = torch.from_numpy(inputs)
targets = torch.from_numpy(targets)

w = torch.randn(2, 3, requires_grad=True)
b = torch.randn(2, requires_grad=True)

def model(x):
    return x @ w.t() + b

def mse(t1, t2):
    diff = t1 - t2
    return torch.sum(diff * diff) / diff.numel()

pred = model(inputs)
loss = mse(pred, targets)
loss.backward()
print(w.grad)
print(b.grad)
[Discussion]:
Tags: python tensorflow pytorch gradient