【Posted】: 2019-03-12 19:47:16
【Problem description】:
From the UFLDL softmax regression tutorial, the gradient of the cost function is

\nabla_{\theta_j} J(\theta) = -\frac{1}{m} \sum_{i=1}^{m} \left[ x^{(i)} \left( 1\{ y^{(i)} = j \} - P(y^{(i)} = j \mid x^{(i)}; \theta) \right) \right]

I tried to implement it in Python, but my loss barely changes:
def update_theta(x, y, theta, learning_rate):
    # 4 classes, 3 features
    theta_gradients = np.zeros((4, 3)).astype(np.float)
    for j in range(4):
        for i in range(len(x)):
            # p: softmax P(y = j | x, theta)
            p = softmax(sm_input(x[i], theta))[y[i]]
            # indicator function 1{y = j}
            p -= 1 if y[i] == j else 0
            x[i] = p * x[i]
            # sum gradients
            theta_gradients[j] += x[i]
        theta_gradients[j] = theta_gradients[j] / len(x)
    theta = theta.T - learning_rate * theta_gradients
    return theta.T
My loss and training-accuracy count for the first 10 epochs:
1.3863767797767788
train acc cnt 3
1.386293406734411
train acc cnt 255
1.3862943723056675
train acc cnt 3
1.3862943609888068
train acc cnt 255
1.386294361121427
train acc cnt 3
1.3862943611198806
train acc cnt 254
1.386294361119894
train acc cnt 4
1.3862943611198937
train acc cnt 125
1.3862943611198937
train acc cnt 125
1.3862943611198937
train acc cnt 125
I don't know whether I have misunderstood the equation; any suggestions would be greatly appreciated!
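For reference, here is a minimal vectorized sketch of one gradient-descent step that follows the formula quoted above. This is an illustration only, not the code from the question: the names softmax_rows, gradient_step, X, y and theta are hypothetical, and the softmax here is my own stand-in, not the asker's softmax / sm_input helpers.

import numpy as np

def softmax_rows(z):
    # numerically stable softmax over the last axis
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def gradient_step(X, y, theta, learning_rate):
    # X: (m, n) examples, y: (m,) integer labels, theta: (k, n) weights
    m = X.shape[0]
    k = theta.shape[0]
    probs = softmax_rows(X @ theta.T)        # P(y = j | x; theta), shape (m, k)
    one_hot = np.eye(k)[y]                   # 1{y^(i) = j}, shape (m, k)
    grad = -(one_hot - probs).T @ X / m      # shape (k, n), matches the formula
    return theta - learning_rate * grad

# Example usage with random data (purely illustrative):
X = np.random.randn(10, 3)
y = np.random.randint(0, 4, size=10)
theta = np.zeros((4, 3))
theta = gradient_step(X, y, theta, learning_rate=0.1)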
【Discussion】:
Tags: regression gradient logistic-regression softmax