[Posted]: 2017-09-21 17:33:33
[Question]:
I'm trying to build a logistic regression model with NumPy and train it on the data from the TensorFlow "Getting Started" example: {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}, using the same learning rate and number of epochs as the TensorFlow example, but for some reason it fails to learn the correct weights and bias. Any help? I'm new to AI.
Code:
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Compute cost and gradients
def propagate(w, b, X, Y):
    m = X.shape[0]
    A = sigmoid(np.multiply(w, X) + b)      # sigmoid activation
    arr = (np.multiply(w, X) + b) - Y       # note: cost uses the linear output, not A
    cost = np.dot(arr, arr)
    cost = np.squeeze(cost)
    dw = 1 / m * X.dot((A - Y).T)           # gradients use the sigmoid output A
    db = 1 / m * np.sum(A - Y)
    return {"db": db, "dw": dw}, cost

# Gradient descent
def optimize(w, b, X, Y, epochs, learning_rate):
    costs = []
    for i in range(epochs):
        grads, cost = propagate(w, b, X, Y)
        dw = grads['dw']
        db = grads['db']
        w = w - learning_rate * dw
        b = b - learning_rate * db
        if i % 100 == 0:
            costs.append(cost)
    return {"w": w, "b": b}, {"db": db, "dw": dw}, costs
Output:
w, b, X, Y = np.array([0.3]), -0.3, np.array([1, 2, 3, 4]), np.array([0, -1, -2, -3])
grads, cost = propagate(w, b, X, Y)
print ("dw = " + str(grads["dw"])) # dw = 6.6074129907
print ("db = " + str(grads["db"])) # db = 2.10776208142
print ("cost = " + str(cost)) # cost = 23.66
params, grads, costs = optimize(w, b, X, Y, epochs= 100, learning_rate = 0.01)
print ("w = " + str(params["w"])) # w = [-4.85038348] (supposed to be about -0.9999969)
print ("b = " + str(params["b"])) # b = -1.86763966366 (supposed to be about 0.99999082)
[Discussion]:
-
You should tag this with tensorflow.
-
Note that the TensorFlow example uses linear regression, not logistic regression. The output of the logistic function (sigmoid) always lies between 0 and 1, so it can never fit the targets y: [0, -1, -2, -3].
-
Ah, thank you. I sometimes mix up linear and logistic regression, and I did forget I was using a sigmoid. So what would you suggest: use a different activation function? Any ideas?
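Following up on the comment above: dropping the sigmoid and fitting a plain linear model y = w*x + b reproduces the TensorFlow result. Below is a minimal sketch, assuming the same sum-of-squares loss, initial values (w = 0.3, b = -0.3), learning rate (0.01), and epoch count (1000) as the TensorFlow "Getting Started" example; the gradient expressions are derived by hand from that loss.

```python
import numpy as np

# Linear regression via gradient descent, no sigmoid.
# Loss: sum((w*x + b - y)^2), summed (not averaged) over the data,
# matching the TF example's tf.reduce_sum(tf.square(...)).
def propagate(w, b, X, Y):
    pred = w * X + b                  # identity activation
    err = pred - Y
    cost = np.sum(err ** 2)
    dw = 2 * np.sum(err * X)          # d(cost)/dw
    db = 2 * np.sum(err)              # d(cost)/db
    return {"dw": dw, "db": db}, cost

def optimize(w, b, X, Y, epochs, learning_rate):
    for _ in range(epochs):
        grads, cost = propagate(w, b, X, Y)
        w = w - learning_rate * grads["dw"]
        b = b - learning_rate * grads["db"]
    return w, b, cost

X = np.array([1.0, 2.0, 3.0, 4.0])
Y = np.array([0.0, -1.0, -2.0, -3.0])
w, b, cost = optimize(0.3, -0.3, X, Y, epochs=1000, learning_rate=0.01)
print(w, b)  # converges to w ≈ -1.0, b ≈ 1.0
```

With these initial values the first cost is 23.66, the same number the TensorFlow tutorial (and the question's own propagate call) reports, which suggests the only real difference is the activation.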
Tags: python numpy tensorflow artificial-intelligence logistic-regression