【Posted】: 2017-11-03 19:57:01
【Problem description】:
I am running a very simple TensorFlow program:
import tensorflow as tf

W = tf.Variable([.3], tf.float32)
b = tf.Variable([-.3], tf.float32)
x = tf.placeholder(tf.float32)
linear_model = W * x + b
y = tf.placeholder(tf.float32)
squared_error = tf.square(linear_model - y)
loss = tf.reduce_sum(squared_error)
optimizer = tf.train.GradientDescentOptimizer(0.1)
train = optimizer.minimize(loss)
init = tf.global_variables_initializer()
with tf.Session() as s:
    file_writer = tf.summary.FileWriter('../../tfLogs/graph', s.graph)
    s.run(init)
    for i in range(1000):
        s.run(train, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]})
    print(s.run([W, b]))
This gives me:
[array([ nan], dtype=float32), array([ nan], dtype=float32)]
What am I doing wrong?
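For reference, the same least-squares fit can be reproduced in plain NumPy, outside of TensorFlow. This is a hypothetical re-implementation of the snippet's gradient-descent update, written only to show how the behavior depends on the step size passed to the optimizer (0.1 in the snippet above, versus a smaller 0.01 for comparison):

```python
import numpy as np

def fit(lr, steps=1000):
    # Same data as in the TensorFlow snippet above.
    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = np.array([0.0, -1.0, -2.0, -3.0])
    W, b = 0.3, -0.3  # same initial values as the tf.Variables
    for _ in range(steps):
        e = W * x + b - y               # residuals of the linear model
        W -= lr * 2.0 * np.sum(e * x)   # d(loss)/dW for loss = sum(e**2)
        b -= lr * 2.0 * np.sum(e)       # d(loss)/db
    return float(W), float(b)

with np.errstate(over="ignore", invalid="ignore"):
    print(fit(0.1))   # iterates blow up: (nan, nan)
print(fit(0.01))      # settles near (-1.0, 1.0), i.e. y = -x + 1
```

With a step size of 0.1 the iterates grow without bound and overflow to nan; with 0.01 they converge to the exact solution of this data, W = -1, b = 1.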
【Discussion】:
Tags: python tensorflow deep-learning python-3.6 gradient-descent