【Question Title】: RuntimeWarning: overflow encountered in double_scalars in gradient descent
【Posted】: 2020-05-31 11:35:27
【Description】:

I am trying to implement the gradient descent algorithm for linear regression. I believe I have worked out the math, but it does not work in Python.

from sklearn.datasets import load_boston
import pandas as pd
import numpy as np
import random

data = load_boston()
df = pd.DataFrame(data['data'], columns=data['feature_names'])
y = data['target']
X = df.TAX

def RMSE(y, y_hat):
    return np.sqrt(sum((y - y_hat) ** 2) / len(y))

def partial_k(x, y, y_hat):
    n = len(y)
    gradient = 0
    for x_i, y_i, y_hat_i in zip(list(x), list(y), list(y_hat)):
        gradient += (y_i - y_hat_i) * x_i
    return -2 / n * gradient

def partial_b(y, y_hat):
    n = len(y)
    gradient = 0
    for y_i, y_hat_i in zip(list(y), list(y_hat)):
        gradient += (y_i - y_hat_i)
    return -2 / n * gradient

def gradient(X, y, n, alpha=0.01, loss=RMSE):
    loss_min = float('inf')

    k = random.random() * 200 - 100
    b = random.random() * 200 - 100

    for i in range(n):
        y_hat = k * X + b
        loss_new = loss(y, y_hat)
        if loss_new < loss_min:
            loss_min = loss_new
        print(f"round: {i}, k: {k}, b: {b}, {loss.__name__}: {loss_min}")
        k_gradient = partial_k(X, y, y_hat)
        b_gradient = partial_b(y, y_hat)
        k += -k_gradient * alpha
        b += -b_gradient * alpha
    return (k, b)

gradient(X, y, 200)

The script works only for the first iteration, then throws warnings:

/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:5: RuntimeWarning: overflow encountered in double_scalars
  """
/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:12: RuntimeWarning: overflow encountered in double_scalars
  if sys.path[0] == '':
/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:29: RuntimeWarning: invalid value encountered in double_scalars
/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:30: RuntimeWarning: invalid value encountered in double_scalars

【Discussion】:

    Tags: python gradient-descent


    【Solution 1】:

    It looks like one of your operations is overflowing the floating-point type. See: What are the causes of overflow encountered in double_scalars besides division by zero?

    If you can run your code under a debugger, you will be able to find the line that causes the overflow and switch to a wider type.
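    If a debugger is not handy, NumPy itself can locate the failing line: `np.seterr` can promote these warnings to exceptions, so the traceback points directly at the overflowing operation. A minimal sketch (illustrative values, not the asker's code):

    ```python
    import numpy as np

    # Promote overflow/invalid warnings to exceptions so a traceback
    # shows exactly which line overflows, instead of a silent warning.
    np.seterr(over='raise', invalid='raise')

    x = np.float64(1e200)
    try:
        x = x * x  # exceeds float64's maximum (~1.8e308), so this raises
    except FloatingPointError as exc:
        print('overflow located:', exc)
    ```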

    【Discussion】:

      【Solution 2】:

      The defining behavior of gradient descent is that it shrinks the parameters toward the optimum, but when alpha (the learning rate) is too large, each update can overshoot the minimum and oscillate with growing amplitude. That is the main reason the overflow occurs here.

      Try lowering alpha (the learning rate) or applying feature normalization.
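      Both fixes can be sketched together. The data below is synthetic (roughly TAX-scale values, made up for illustration), but it uses the same update rule as the question; with the feature standardized, alpha = 0.01 converges instead of blowing up:

      ```python
      import numpy as np

      # Synthetic feature on a large scale (similar to the TAX column),
      # plus a noisy linear target. Values are illustrative only.
      rng = np.random.default_rng(0)
      X = rng.uniform(100, 700, size=200)
      y = 0.05 * X + 10 + rng.normal(0, 2, size=200)

      # Feature normalization: zero mean, unit variance.
      X_norm = (X - X.mean()) / X.std()

      k, b = 0.0, 0.0
      n = len(y)
      alpha = 0.01  # small learning rate; on normalized data this converges
      for _ in range(1000):
          y_hat = k * X_norm + b
          # Same MSE gradients as partial_k / partial_b in the question.
          k -= alpha * (-2 / n) * np.sum((y - y_hat) * X_norm)
          b -= alpha * (-2 / n) * np.sum(y - y_hat)

      rmse = np.sqrt(np.mean((y - (k * X_norm + b)) ** 2))
      print(f"k={k:.3f}, b={b:.3f}, RMSE={rmse:.3f}")
      ```

      Without the standardization step, the same loop diverges for most starting points, because `sum((y - y_hat) * x)` is multiplied by raw feature values in the hundreds, making the effective step size far too large.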

      【Discussion】:
