[Question Title]: Computing gradient in TensorFlow vs PyTorch
[Posted]: 2021-09-11 17:16:12
[Question]:

I am trying to compute the gradient of the loss of a simple linear model. However, when I use TensorFlow the gradients come back as `None`. Why does this happen, and how can I compute the gradients with TensorFlow?

import numpy as np
import tensorflow as tf

inputs = np.array([[73, 67, 43], 
                   [91, 88, 64], 
                   [87, 134, 58], 
                   [102, 43, 37], 
                   [69, 96, 70]], dtype='float32')

targets = np.array([[56, 70], 
                    [81, 101], 
                    [119, 133], 
                    [22, 37], 
                    [103, 119]], dtype='float32')

inputs = tf.convert_to_tensor(inputs)
targets = tf.convert_to_tensor(targets)

w = tf.random.normal(shape=(2, 3))
b = tf.random.normal(shape=(2,))
print(w, b)

def model(x):
  return tf.matmul(x, w, transpose_b = True) + b

def mse(t1, t2):
  diff = t1-t2
  return tf.reduce_sum(diff * diff) / tf.cast(tf.size(diff), 'float32')

with tf.GradientTape() as tape:
  pred = model(inputs)
  loss = mse(pred, targets)

print(tape.gradient(loss, [w, b]))

Here is the working code using PyTorch. The gradients are computed as expected.

import numpy as np
import torch

inputs = np.array([[73, 67, 43], 
                   [91, 88, 64], 
                   [87, 134, 58], 
                   [102, 43, 37], 
                   [69, 96, 70]], dtype='float32')

targets = np.array([[56, 70], 
                    [81, 101], 
                    [119, 133], 
                    [22, 37], 
                    [103, 119]], dtype='float32')

inputs = torch.from_numpy(inputs)
targets = torch.from_numpy(targets)

w = torch.randn(2, 3, requires_grad = True)
b = torch.randn(2, requires_grad = True)

def model(x):
  return x @ w.t() + b

def mse(t1, t2):
  diff = t1 - t2
  return torch.sum(diff * diff) / diff.numel()

pred = model(inputs)
loss = mse(pred, targets)
loss.backward()

print(w.grad)
print(b.grad)

[Comments]:

    Tags: python tensorflow pytorch gradient


    [Solution 1]:

    Your code doesn't work because in TensorFlow, gradients are only computed with respect to `tf.Variable`s. When you create a layer, TF automatically marks its weights and biases as variables (unless you specify `trainable=False`).

    So all you need to do to make your code work is wrap your `w` and `b` in `tf.Variable`:

    w = tf.Variable(tf.random.normal(shape=(2, 3)), name='w')
    b = tf.Variable(tf.random.normal(shape=(2,)), name='b')
    

    With these lines defining your weights and biases, the final print will show actual gradient values instead of `None`.
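    For completeness, a runnable sketch of the corrected script might look like this (assuming TF 2.x with eager execution, the default):

    ```python
    import numpy as np
    import tensorflow as tf

    inputs = tf.convert_to_tensor(np.array([[73, 67, 43],
                                            [91, 88, 64],
                                            [87, 134, 58],
                                            [102, 43, 37],
                                            [69, 96, 70]], dtype='float32'))
    targets = tf.convert_to_tensor(np.array([[56, 70],
                                             [81, 101],
                                             [119, 133],
                                             [22, 37],
                                             [103, 119]], dtype='float32'))

    # Wrapping the parameters in tf.Variable makes the tape track them
    w = tf.Variable(tf.random.normal(shape=(2, 3)), name='w')
    b = tf.Variable(tf.random.normal(shape=(2,)), name='b')

    def model(x):
        return tf.matmul(x, w, transpose_b=True) + b

    def mse(t1, t2):
        diff = t1 - t2
        return tf.reduce_sum(diff * diff) / tf.cast(tf.size(diff), 'float32')

    with tf.GradientTape() as tape:
        pred = model(inputs)
        loss = mse(pred, targets)

    dw, db = tape.gradient(loss, [w, b])
    print(dw.shape, db.shape)  # (2, 3) and (2,) instead of None
    ```

    Alternatively, if you want to keep `w` and `b` as plain tensors, you can call `tape.watch(w)` and `tape.watch(b)` at the top of the `with` block; `GradientTape` will then record operations on those tensors as well.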

    [Comments]: