【Question Title】: How to properly apply stochastic gradient descent to a simple neural network?
【Posted】: 2021-03-12 21:09:53
【Question】:

I'm trying to write a NN from scratch to better understand what happens under the Keras API. Right now I have a problem applying gradients to my loss. I believe there is some issue with the architecture and with my understanding of TF. Basically `grads` comes back as a list of `None`s, so the code raises:

ValueError: No gradients provided for any variable: ['Variable:0', 'Variable:0', 'Variable:0', 'Variable:0', 'Variable:0', 'Variable:0'].

Here is my code. It is very simple, but I just want to get it working before reshaping it into a model class.

input_shape = x_train.shape[1]
n_hidden_1 = 32
n_hidden_2 = 8
output_shape = 1
epochs = 1

W_1 = tf.Variable(tf.random.normal([input_shape,n_hidden_1]))
W_2 = tf.Variable(tf.random.normal([n_hidden_1,n_hidden_2]))
W_output = tf.Variable(tf.random.normal([n_hidden_2,output_shape]))
    
B_1 = tf.Variable(tf.random.normal([n_hidden_1]))
B_2 = tf.Variable(tf.random.normal([n_hidden_2]))
B_output = tf.Variable(tf.random.normal([output_shape]))

var_list= [W_1, W_2, W_output, B_1, B_2, B_output]

opt = tf.keras.optimizers.SGD(learning_rate=0.1)


for epoch in range(epochs):
    
    input_tensor = tf.convert_to_tensor(x_train, dtype=tf.float32)
    size = input_tensor.shape[0]
    
    labels = tf.convert_to_tensor(y_train, dtype=tf.float32)
    labels = tf.reshape(labels, (size,1))
    
    #1_layer
    layer_1 = tf.matmul(input_tensor, W_1)
    layer_1 = tf.add(layer_1, B_1)
    layer_1 = tf.nn.relu(layer_1)

    #2_layer
    layer_2 = tf.matmul(layer_1, W_2)
    layer_2 = tf.add(layer_2, B_2)
    layer_2 = tf.nn.relu(layer_2)

    #output layer
    output = tf.matmul(layer_2, W_output)
    output = tf.add(output, B_output)
    
    with tf.GradientTape() as tape:
        _loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels, output))
        
    grads = tape.gradient(_loss, var_list)
    grads_and_vars = zip(grads, var_list)
    opt.apply_gradients(grads_and_vars)

【Comments】:

Tags: python neural-network data-science tensorflow2.0 gradient-descent


【Solution 1】:

Well, my problem was that I did not clearly understand how gradient computation works in TF2. In TF1 you could pass a tensor to `minimize`; in TF2 the forward computation has to run inside the `GradientTape` context, so the tape records the operations it needs to differentiate.
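A minimal sketch of this point (with made-up toy values, not the network above): operations executed before the tape starts recording look like constants to it, which is exactly why the original code got `None` gradients.

```python
import tensorflow as tf

w = tf.Variable(2.0)

# Forward computation OUTSIDE the tape: w -> y_outside is never recorded.
y_outside = w * 3.0
with tf.GradientTape() as tape:
    loss_outside = y_outside ** 2  # only the square is taped; y_outside is a constant here

print(tape.gradient(loss_outside, w))  # None

# Forward computation INSIDE the tape: the full path from w is recorded.
with tf.GradientTape() as tape:
    y_inside = w * 3.0
    loss_inside = y_inside ** 2

print(tape.gradient(loss_inside, w))  # d/dw (3w)^2 = 18w = 36.0 at w = 2
```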

Here is my code. It is still very simple, but it works correctly now.

input_shape = x_train.shape[1]
n_hidden_1 = 32
n_hidden_2 = 8
output_shape = 1
epochs = 100

class model():
    def __init__(self, input_tensor, labels):
        self.input_tensor = input_tensor
        self.labels = labels
        self.W_1 = tf.Variable(tf.random.normal([input_shape, n_hidden_1]))
        self.W_2 = tf.Variable(tf.random.normal([n_hidden_1, n_hidden_2]))
        self.W_output = tf.Variable(tf.random.normal([n_hidden_2, output_shape]))
        self.B_1 = tf.Variable(tf.random.normal([n_hidden_1]))
        self.B_2 = tf.Variable(tf.random.normal([n_hidden_2]))
        self.B_output = tf.Variable(tf.random.normal([output_shape]))
        self.var_list = [self.W_1, self.W_2, self.W_output, self.B_1, self.B_2, self.B_output]

    def train(self):
        # 1st layer
        layer_1 = tf.matmul(self.input_tensor, self.W_1)
        layer_1 = tf.add(layer_1, self.B_1)
        layer_1 = tf.nn.relu(layer_1)
        # 2nd layer
        layer_2 = tf.matmul(layer_1, self.W_2)
        layer_2 = tf.add(layer_2, self.B_2)
        layer_2 = tf.nn.relu(layer_2)
        # output layer (logits, no activation)
        output = tf.matmul(layer_2, self.W_output)
        output = tf.add(output, self.B_output)
        return output

    def loss(self, output):
        return tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(self.labels, output))

opt = tf.keras.optimizers.SGD(learning_rate=0.1)


input_tensor = tf.convert_to_tensor(x_train, dtype=tf.float32)
size = input_tensor.shape[0]
labels = tf.convert_to_tensor(y_train, dtype=tf.float32)
labels = tf.reshape(labels, (size, 1))

model = model(input_tensor, labels)

for epoch in range(epochs):
    
    with tf.GradientTape() as tape:
        output = model.train()
        loss = model.loss(output)
    
    grads = tape.gradient(loss, model.var_list)
    grads_and_vars = zip(grads, model.var_list)
    opt.apply_gradients(grads_and_vars)
    print(loss)

【Comments】:

  • The code is ugly... best practice is for each layer to inherit from tf.keras.layers.Layer. Read this for more details
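A sketch of what the commenter is suggesting, using Keras building blocks instead of raw `tf.Variable` bookkeeping. The layer sizes (32, 8, 1 logit) mirror the network above; the class name and everything else here is illustrative, not part of the original answer.

```python
import tensorflow as tf

class TwoLayerNet(tf.keras.Model):  # hypothetical name
    def __init__(self):
        super().__init__()
        # Dense layers own their kernel and bias variables, so there is no
        # manual var_list to maintain.
        self.hidden_1 = tf.keras.layers.Dense(32, activation="relu")
        self.hidden_2 = tf.keras.layers.Dense(8, activation="relu")
        self.out = tf.keras.layers.Dense(1)  # logits, no activation

    def call(self, inputs):
        return self.out(self.hidden_2(self.hidden_1(inputs)))

model = TwoLayerNet()
model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.1),
    # from_logits=True matches sigmoid_cross_entropy_with_logits above
    loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
)
# model.fit(x_train, y_train, epochs=100) would then replace the manual tape loop
```

With this structure, `model.trainable_variables` plays the role of the hand-built `var_list` (two variables per Dense layer, six in total).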