[Posted]: 2020-09-19 17:26:39
[Problem description]:
While implementing the gradient descent algorithm for linear regression, the predictions my algorithm makes and the regression line it produces are wrong. Could anyone look over my implementation and help me? Also, please advise how to choose values for the learning rate and the number of iterations in a given regression problem.
import numpy as np              # needed for the array arithmetic on X and Y
import matplotlib.pyplot as plt # needed for the plots at the end

# X and Y (the training data) are assumed to be defined as NumPy arrays.
theta0 = 0  # first parameter (intercept)
theta1 = 0  # second parameter (slope)
alpha = 0.001  # learning rate (denoted by alpha)
num_of_iterations = 100  # total number of iterations performed by gradient descent
m = float(len(X))  # total number of training examples

for i in range(num_of_iterations):
    y_predicted = theta0 + theta1 * X
    derivative_theta0 = (1 / m) * sum(y_predicted - Y)
    derivative_theta1 = (1 / m) * sum(X * (y_predicted - Y))
    # Update both parameters simultaneously.
    temp0 = theta0 - alpha * derivative_theta0
    temp1 = theta1 - alpha * derivative_theta1
    theta0 = temp0
    theta1 = temp1
    print(theta0, theta1)

y_predicted = theta0 + theta1 * X
plt.scatter(X, Y)
plt.plot(X, y_predicted, color='red')
plt.show()
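To diagnose the learning rate and the iteration count, it helps to record the cost (mean squared error) at every iteration and plot it afterwards. Below is a minimal sketch of that idea; the synthetic `X` and `Y` data, the value `alpha = 0.01`, and the 1000-iteration budget are all assumptions for illustration, not the asker's actual setup.

```python
import numpy as np

# Hypothetical synthetic data standing in for the asker's X and Y.
rng = np.random.default_rng(0)
X = np.linspace(0, 10, 50)
Y = 3.0 + 2.0 * X + rng.normal(0.0, 0.5, size=X.shape)

theta0, theta1 = 0.0, 0.0
alpha = 0.01            # assumed learning rate for this data
num_of_iterations = 1000
m = float(len(X))
costs = []              # mean squared error, recorded each iteration

for _ in range(num_of_iterations):
    y_predicted = theta0 + theta1 * X
    error = y_predicted - Y
    costs.append(float((error ** 2).mean()))
    theta0 -= alpha * error.sum() / m
    theta1 -= alpha * (X * error).sum() / m

# A steadily falling, then flattening cost curve means alpha and the
# iteration count are adequate; a rising or oscillating curve means
# alpha is too large, and a still-falling curve means more iterations
# (or a larger alpha) are needed.
print(costs[0], costs[-1])
```

Plotting `costs` with `plt.plot(costs)` then makes the choice visual: with the settings above the curve drops sharply and flattens, whereas the original `alpha = 0.001` with only 100 iterations would still be far from converged.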
[Discussion]:
-
Overall it looks correct. You may want to plot your error against the training steps; it may be that you need more steps or a larger alpha.
-
Questions like this are generally better suited to codereview.stackexchange.com.
Tags: python machine-learning regression linear-regression gradient-descent