[Posted]: 2016-02-18 06:32:27
[Problem description]:
I want to build a simple neural network that implements nothing but an XOR gate, using the TensorFlow library in Python. For an XOR gate, the only data I train on is the complete truth table — that should be enough, right? Overfitting is something I expect to happen quickly. The problem with the code is that the weights and biases never update. Somehow it still reports 100% accuracy, with the biases and weights at zero.
import numpy as np
import tensorflow as tf
from itertools import product

x = tf.placeholder("float", [None, 2])
W = tf.Variable(tf.zeros([2, 2]))
b = tf.Variable(tf.zeros([2]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
y_ = tf.placeholder("float", [None, 1])
print "Done init"
cross_entropy = -tf.reduce_sum(y_*tf.log(y))
train_step = tf.train.GradientDescentOptimizer(0.75).minimize(cross_entropy)
print "Done loading vars"
init = tf.initialize_all_variables()
print "Done: Initializing variables"
sess = tf.Session()
sess.run(init)
print "Done: Session started"
xTrain = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
yTrain = np.array([[1], [0], [0], [0]])
acc=0.0
while acc < 0.85:
    for i in range(500):
        sess.run(train_step, feed_dict={x: xTrain, y_: yTrain})
    print b.eval(sess)
    print W.eval(sess)
    print "Done training"
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    print "Result:"
    acc = sess.run(accuracy, feed_dict={x: xTrain, y_: yTrain})
    print acc
B0 = b.eval(sess)[0]
B1 = b.eval(sess)[1]
W00 = W.eval(sess)[0][0]
W01 = W.eval(sess)[0][1]
W10 = W.eval(sess)[1][0]
W11 = W.eval(sess)[1][1]
for A, B in product([0, 1], [0, 1]):
    top = W00*A + W01*A + B0
    bottom = W10*B + W11*B + B1
    print "A:", A, " B:", B
    # print "Top:", top, " Bottom:", bottom
    print "Sum:", top + bottom
I am following the tutorial at http://tensorflow.org/tutorials/mnist/beginners/index.md#softmax_regressions. In the final for loop I print the results out of the matrices (as described in the link).
Can anyone point out my mistake and what I should do to fix it?
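Incidentally, here is a minimal NumPy sketch (not TensorFlow, just illustrating the same argmax semantics) of why the accuracy check above can report 100% even with zero weights: argmax along axis 1 of a single-column array like my `yTrain` is always 0, and with all-zero weights the softmax output is a uniform `[0.5, 0.5]`, whose argmax tie also resolves to index 0.

```python
import numpy as np

# One-column labels, shaped like yTrain above
y_true = np.array([[1], [0], [0], [0]])
# argmax along axis 1 of a single-column array is always 0
print(np.argmax(y_true, axis=1))  # → [0 0 0 0]

# With zero weights, softmax outputs are uniform [0.5, 0.5];
# argmax breaks the tie by taking the first index, also 0
y_pred = np.full((4, 2), 0.5)
print(np.argmax(y_pred, axis=1))  # → [0 0 0 0]

# So the "accuracy" comparison is trivially all-True
print(np.mean(np.argmax(y_pred, 1) == np.argmax(y_true, 1)))  # → 1.0
```

If this is what is happening, the reported accuracy says nothing about whether training is working.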
[Comments]:
Tags: python neural-network tensorflow