【Question Title】: What is the fault in my manually built gradient descent algorithm?
【Posted】: 2020-01-09 13:01:40
【Question】:

I am a learner of data science and machine learning. I have written code that performs gradient descent optimization of the linear regression cost function without using built-in Python libraries. To confirm that my code is correct and to verify the results, I also implemented the same thing using a built-in Python library. However, the coefficient and intercept values I obtain from my code do not match those obtained with the built-in Python module. Please suggest what is wrong with my gradient descent optimization of linear regression. (For reference, the update rule my loop implements is written out below.)
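The vectorized batch gradient descent update that the loop below implements is, in standard notation (X includes a leading column of ones for the intercept, m is the number of samples, and \alpha is the learning rate):

\theta \leftarrow \theta - \frac{\alpha}{m} X^\top (X\theta - y)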

My approach:

import pandas as pd
import numpy as np
import seaborn as sb
import matplotlib.pyplot as plt
from sklearn.linear_model import SGDRegressor

Data=pd.DataFrame({'X': list(np.arange(0,10,1)), 'Y': [1,3,2,5,7,8,8,9,10,12]})
Data.head()

sb.scatterplot(x ='X', y = 'Y', data = Data)
plt.show()

#generating column of ones
X0 = np.ones(len(Data)).reshape(-1,1)
#print(X0.shape)

X = Data.drop(['Y'], axis = 1).values
X_new = np.concatenate((X0,X), axis = 1)
#print(X_new)
#print(X_new.shape)

Y = Data.loc[:,['Y']].values
#print(Y)
#print(Y.shape)

# initial theta
theta =np.random.randint(low=0, high=1, size= X_new.shape[1]).reshape(-1,1)
#print(theta.shape)

J_history = []
theta_history = [list(theta.flatten())]

#gradient descent implementation
iterations = 1000
alpha = 0.01
m = len(Y)
for iter in range(1,iterations):
    H = X_new.dot(theta)
    loss = (H-Y)
    J = loss/(2*m)
    J_history.append(J)
    G = X_new.T.dot(loss)/m
    theta_new = theta - alpha*G    
    theta_history.append(list(theta_new.flatten()))
    theta = theta_new

# collecting costs (J) and coefficients (theta_0,theta_1)

theta_history.pop()
J_history = [i[0] for i in J_history]

params = pd.DataFrame()
params['J']=J_history

for i in range(len(theta_history[0])):
    params['theta_'+str(i)]=[k[i] for k in theta_history]

idx = params[params['J']==min(params['J'])].index
values = params.iloc[idx[0]][1:params.shape[1]].tolist()
print('intercept: {}, coeff: {}'.format(values[0],values[1]))

Using the built-in library:

import pandas as pd
import numpy as np
import seaborn as sb
import matplotlib.pyplot as plt
from sklearn.linear_model import SGDRegressor

Data=pd.DataFrame({'X': list(np.arange(0,10,1)), 'Y': [1,3,2,5,7,8,8,9,10,12]})
Data.head()

sb.scatterplot(x ='X', y = 'Y', data = Data)
plt.show()
model = SGDRegressor(loss = 'squared_loss', learning_rate = 'constant', eta0 = 0.01, max_iter= 1000)
model.fit(Data['X'].values.reshape(-1,1), Data['Y'].values.reshape(-1,1))
print('coeff: {}, intercept: {}'.format(model.coef_, model.intercept_))

【Question comments】:

  • Are you sure about your code? The line idx = params[params['J']==min(params['J'])].index raises ValueError: Lengths must match to compare when I run it

Tags: python pandas linear-regression gradient-descent


【Solution 1】:

First of all, congratulations on the effort you have put into understanding and implementing the SGD algorithm yourself.

Now, back to your code. There are a few small mistakes to correct:

  • Your Js are not scalars but numpy.arrays, yet the way you use them assumes they are scalars, so an error is raised when the code is executed (see the cost definition after this list).
  • After running your loop, you have to take the theta with the smallest error, and that error is really J^2 rather than J, because J can also be negative.
  • The scikit-learn SGDRegressor you compare against is, as its name says, stochastic by definition, and given how small your dataset is, you need to run it several times and average its estimates if you want something reliable out of it.
  • Your learning rate of 0.01 seems a bit large.
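For reference, the scalar cost that gradient descent on linear regression is meant to track is

J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( x^{(i)\top}\theta - y^{(i)} \right)^2 = \frac{1}{2m} \lVert X\theta - y \rVert^2

whereas J = loss/(2*m) in the code is a length-m vector of scaled residuals rather than this sum, which is why treating it as a scalar fails.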

After making these changes, I get results from your code that are "comparable" to those of SGDRegressor.

import pandas as pd
import numpy as np
import seaborn as sb
import matplotlib.pyplot as plt
from sklearn.linear_model import SGDRegressor

Data=pd.DataFrame({'X': list(np.arange(0,10,1)), 'Y': [1,3,2,5,7,8,8,9,10,12]})
Data.head()

sb.scatterplot(x ='X', y = 'Y', data = Data)
plt.show()


#generating column of ones
X0 = np.ones(len(Data)).reshape(-1,1)
#print(X0.shape)

X = Data.drop(['Y'], axis = 1).values
X_new = np.concatenate((X0,X), axis = 1)
#print(X_new)
#print(X_new.shape)

Y = Data.loc[:,['Y']].values
#print(Y)
#print(Y.shape)

# initial theta
theta =np.random.randint(low=0, high=1, size= X_new.shape[1]).reshape(-1,1)
#print(theta.shape)

J_history = []
theta_history = [list(theta.flatten())]

#gradient descent implementation
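# (changes vs. the question: 2000 iterations instead of 1000, alpha = 0.001 instead of 0.01)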
iterations = 2000
alpha = 0.001
m = len(Y)
for iter in range(1,iterations):
    H = X_new.dot(theta)
    loss = (H-Y)
    J = loss/(2*m)
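    # square the stored value so a negative residual cannot win the minimum taken below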
    J_history.append(J[0]**2)
    G = X_new.T.dot(loss)/m
    theta_new = theta - alpha*G    
    theta_history.append(list(theta_new.flatten()))
    theta = theta_new
theta_history.pop()
J_history = [i[0] for i in J_history]


# collecting costs (J) and coefficients (theta_0,theta_1)

params = pd.DataFrame()
params['J']=J_history

for i in range(len(theta_history[0])):
    params['theta_'+str(i)]=[k[i] for k in theta_history]

idx = params[params['J']== params['J'].min()].index
values = params.iloc[idx[0]][1:params.shape[1]].tolist()
print('intercept: {}, coeff: {}'.format(values[0],values[1]))

#> intercept: 0.654041555750147, coeff: 1.2625626277290982


Now let's look at the scikit-learn model:


from sklearn.linear_model import SGDRegressor

intercepts = []
coefs = []
for _ in range(500):
    model = SGDRegressor(loss = 'squared_loss', learning_rate = 'constant',  eta0 = 0.01, max_iter= 1000)
    model.fit(Data['X'].values.reshape(-1,1), Data['Y'].values.reshape(-1))
    intercepts.append(model.intercept_)
    coefs.append(model.coef_)
intercept = np.concatenate(intercepts).mean()
coef = np.vstack(coefs).mean(0)
print('intercept: {}, coeff: {}'.format( intercept, coef))
#> intercept: 0.6912403374422401, coeff: [1.24932246]
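
As a sanity check, one could also compare both estimates against the exact closed-form least-squares fit, which fully converged batch gradient descent should approach; a minimal sketch using numpy.linalg.lstsq:

import numpy as np

X = np.arange(0, 10, 1).reshape(-1, 1)
Y = np.array([1, 3, 2, 5, 7, 8, 8, 9, 10, 12], dtype=float)

# prepend the bias column, then solve min ||X_new @ theta - Y||^2 exactly
X_new = np.hstack([np.ones((len(X), 1)), X])
theta, *_ = np.linalg.lstsq(X_new, Y, rcond=None)
print('intercept: {}, coeff: {}'.format(theta[0], theta[1]))
#> intercept: ~1.236, coeff: ~1.170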

【Comments】:
