【Posted】: 2019-05-23 01:41:30
【Problem description】:
I am trying to use sample_weight in XGBClassifier to improve the performance of one of our models.
However, the sample_weight parameter does not seem to work as expected, and sample weighting is essential for this problem. Please see my code below.
Basically, the fit does not appear to take the sample_weight parameter into account: the AUC starts at 0.5 and declines from there, and early stopping recommends 0 or 1 n_estimators. There is nothing wrong with the underlying data; using the same sample weights in other tools, we have built a very good model with a good Gini.
The sample data provided does not fully reproduce this behavior, but with a consistent random seed throughout, we can see that the fitted model object is identical whether or not weight/sample_weight is supplied.
I have tried the various components of the xgboost library that also expose a parameter for defining weights, without luck:
XGBClassifier.fit()
XGBClassifier.train()
Xgboost()
XGB.fit()
XGB.train()
DMatrix()
XGBGridSearchCV()
I also tried passing fit_params=fit_params as an argument, along with the weight=weight and sample_weight=sample_weight variants.
代码:
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
df = pd.DataFrame(
    columns=['GB_FLAG','sample_weight','f1','f2','f3','f4','f5'])
df.loc[0] = [0,1,2046,10,625,8000,2072]
df.loc[1] = [0,0.86836,8000,10,705,8800,28]
df.loc[2] = [1,1,2303.62,19,674,3000,848]
df.loc[3] = [0,0,2754.8,2,570,16300,46]
df.loc[4] = [1,0.103474,11119.81,6,0,9500,3885]
df.loc[5] = [1,0,1050.83,19,715,3000,-5]
df.loc[6] = [1,0.011098,7063.35,11,713,19700,486]
df.loc[7] = [0,0.972176,6447.16,18,681,11300,1104]
df.loc[8] = [1,0.054237,7461.27,18,0,0,4]
df.loc[9] = [0,0.917026,4600.83,8,0,10400,242]
df.loc[10] = [0,0.670026,2041.8,21,716,11000,3]
df.loc[11] = [1,0.112416,2413.77,22,750,4600,271]
df.loc[12] = [0,0,251.81,17,806,3800,0]
df.loc[13] = [1,0.026263,20919.2,17,684,8100,1335]
df.loc[14] = [0,1,1504.58,15,621,6800,461]
df.loc[15] = [0,0.654429,9227.69,4,0,22500,294]
df.loc[16] = [0,0.897051,6960.31,22,674,5400,188]
df.loc[17] = [1,0.209862,4481.42,18,745,11600,0]
df.loc[18] = [0,1,2692.96,22,651,12800,2035]
y = np.asarray(df['GB_FLAG'])
X = np.asarray(df.drop(['GB_FLAG'], axis=1))
X_traintest, X_valid, y_traintest, y_valid = train_test_split(X, y,
train_size=0.7, stratify=y, random_state=1337)
traintest_sample_weight = X_traintest[:,0]
valid_sample_weight = X_valid[:,0]
X_traintest = X_traintest[:,1:]
X_valid = X_valid[:,1:]
model = XGBClassifier()
eval_set = [(X_valid, y_valid)]
model.fit(X_traintest, y_traintest, eval_set=eval_set, eval_metric="auc",
          early_stopping_rounds=50, verbose=True,
          sample_weight=traintest_sample_weight)
How do I use sample weights when modeling with xgboost?
【Discussion】:
Tags: python pandas xgboost sample