【Question Title】: Why does training an Xgboost model with pseudo-Huber loss return a constant test metric?
【Posted】: 2021-06-16 04:42:01
【Question】:

I am trying to fit an xgboost model using the native pseudo-Huber loss reg:pseudohubererror. However, it doesn't seem to be working, since neither the training nor the test error is improving. It works just fine with reg:squarederror. What am I missing?

Code:

library(xgboost)

n = 1000
X = cbind(runif(n, 10, 20), runif(n, 0, 10))
y = X %*% c(2, 3) + rnorm(n, 0, 1)

# first n-1 rows for training, the last row as a one-point test set
train = xgb.DMatrix(data = X[-n, ], label = y[-n])

# t(as.matrix(...)) keeps the single test row as a 1 x 2 matrix
test = xgb.DMatrix(data = t(as.matrix(X[n, ])), label = y[n])

watchlist = list(train = train, test = test)

xbg_test = xgb.train(data = train, objective = "reg:pseudohubererror",
                     eval_metric = "mae", watchlist = watchlist,
                     gamma = 1, eta = 0.01, nrounds = 10000,
                     early_stopping_rounds = 100)

Result:

[1] train-mae:44.372692 test-mae:33.085709 
Multiple eval metrics are present. Will use test_mae for early stopping.
Will train until test_mae hasn't improved in 100 rounds.

[2] train-mae:44.372692 test-mae:33.085709 
[3] train-mae:44.372688 test-mae:33.085709 
[4] train-mae:44.372688 test-mae:33.085709 
[5] train-mae:44.372688 test-mae:33.085709 
[6] train-mae:44.372688 test-mae:33.085709 
[7] train-mae:44.372688 test-mae:33.085709 
[8] train-mae:44.372688 test-mae:33.085709 
[9] train-mae:44.372688 test-mae:33.085709 
[10]    train-mae:44.372692 test-mae:33.085709 
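
For contrast, the question notes the same setup works with squared error; a minimal variant (only the objective string changes, variable name xbg_sq is illustrative) would be:

# Hypothetical comparison run: identical settings, squared-error objective.
# Per the question, train/test MAE improves round over round with this loss.
xbg_sq = xgb.train(data = train, objective = "reg:squarederror",
                   eval_metric = "mae", watchlist = watchlist,
                   gamma = 1, eta = 0.01, nrounds = 10000,
                   early_stopping_rounds = 100)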

【Discussion】:

    Tags: r xgboost


    【Solution 1】:

    This seems to be the expected behavior of the pseudo-Huber loss. Below I hardcoded the first and second derivatives of the objective's loss function, found here, and fed them in through the obj=obje argument. If you run it and compare it with the objective="reg:pseudohubererror" version, you will see that the two are identical. As for why it does so much worse than squared loss, I'm not sure.

    set.seed(20)

    # Custom objective: hand-coded first (grad) and second (hess) derivatives
    # of the pseudo-Huber loss, returned in the list format xgb.train expects.
    obje = function(pred, dData) {
      labels = getinfo(dData, "label")
      a = pred
      d = labels
      fir = a^2 / sqrt(a^2/d^2 + 1) / d - 2*d*(sqrt(a^2/d^2 + 1) - 1)
      sec = ((2*(a^2/d^2 + 1)^(3/2) - 2)*d^2 - 3*a^2) / ((a^2/d^2 + 1)^(3/2)*d^2)
      return(list(grad = fir, hess = sec))
    }

    xbg_test = xgb.train(data = train, obj = obje, eval_metric = "mae",
                         watchlist = watchlist, gamma = 1, eta = 0.01,
                         nrounds = 10000, early_stopping_rounds = 100)
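
    As a quick check (a sketch of the comparison the answer describes, with an illustrative variable name), run the built-in objective under identical settings and diff the per-round logs:

    # Hypothetical comparison run: built-in pseudo-Huber, same settings.
    # Per the answer, its per-round train/test MAE matches the custom obje run.
    xbg_builtin = xgb.train(data = train, objective = "reg:pseudohubererror",
                            eval_metric = "mae", watchlist = watchlist,
                            gamma = 1, eta = 0.01, nrounds = 10000,
                            early_stopping_rounds = 100)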
    

    【Comments】:

    • Thanks! I also tried the log-cosh function from here as an alternative approximation of the MAE loss, and the same behavior indeed shows up (see the sketch after these comments). At least I know the syntax is correct. Why it doesn't work here, or what this loss is actually good for, is probably a question for StackExchange.
    • The follow-up question on StackExchange is here.
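
    The log-cosh objective mentioned in the first comment can be written as a short custom objective. This is a reconstruction under the standard definition L(r) = log(cosh(r)) with r = pred - label (the commenter's exact code is not shown), using grad = tanh(r) and hess = 1 - tanh(r)^2:

    # Sketch of a log-cosh custom objective (a reconstruction, not the
    # commenter's original code): L(r) = log(cosh(r)), r = pred - label.
    logcosh_obj = function(pred, dData) {
      labels = getinfo(dData, "label")
      r = pred - labels
      grad = tanh(r)         # first derivative of log(cosh(r))
      hess = 1 - tanh(r)^2   # second derivative, sech(r)^2
      return(list(grad = grad, hess = hess))
    }

    Like pseudo-Huber, its gradient is bounded by 1 and its hessian vanishes for large residuals, which is consistent with the stalled metric both losses show here.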