[Posted]: 2020-07-02 01:11:16
[Question]:
I am implementing a LightGBM classifier (LGBMClassifier) whose hyperparameters are selected by cross-validated RandomizedSearchCV (from the sklearn library).
I used some arbitrary values for param_distributions and fit_params, but how should I choose them?
In my case I am working with genetic data, and my dataset has 2,504 rows and 220,001 columns. Is there any algorithm/calculation I can use to pick a sensible range for each tunable parameter?
Here is the code snippet I borrowed from this Kaggle kernel:
from scipy.stats import randint as sp_randint, uniform as sp_uniform
import lightgbm as lgbm
from sklearn.model_selection import RandomizedSearchCV

fit_params = {'early_stopping_rounds': 50,  # TODO: isn't it too low for GWAS?
              'eval_metric': 'binary',
              'eval_set': [(X_test, y_test)],
              'eval_names': ['valid'],
              'verbose': 0,
              'categorical_feature': 'auto'}

param_test = {'learning_rate': [0.01, 0.02, 0.03, 0.04, 0.05, 0.08, 0.1, 0.2, 0.3, 0.4],
              'n_estimators': [100, 200, 300, 400, 500, 600, 800, 1000, 1500, 2000, 3000, 5000],
              'num_leaves': sp_randint(6, 50),
              'min_child_samples': sp_randint(100, 500),
              'min_child_weight': [1e-5, 1e-3, 1e-2, 1e-1, 1, 1e1, 1e2, 1e3, 1e4],
              'subsample': sp_uniform(loc=0.2, scale=0.8),
              'max_depth': [-1, 1, 2, 3, 4, 5, 6, 7],
              'colsample_bytree': sp_uniform(loc=0.4, scale=0.6),
              'reg_alpha': [0, 1e-1, 1, 2, 5, 7, 10, 50, 100],
              'reg_lambda': [0, 1e-1, 1, 5, 10, 20, 50, 100]}

# number of parameter combinations to try
# (with n_iter=200 instead of 2, the search takes ~90 minutes)
n_iter = 200

# initialize LightGBM and launch the search
lgbm_clf = lgbm.LGBMClassifier(random_state=random_state, silent=True,
                               metric='None', n_jobs=4)
grid_search = RandomizedSearchCV(
    estimator=lgbm_clf, param_distributions=param_test,
    n_iter=n_iter,
    scoring='accuracy',
    cv=5,
    refit=True,
    random_state=random_state,
    verbose=True)
To make the question more focused: how should I choose, for example, the number of rounds for early_stopping_rounds and the number of iterations for n_iter?
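For context on what n_iter controls: RandomizedSearchCV draws n_iter parameter combinations from param_distributions (internally via sklearn's ParameterSampler), so each extra iteration means one more cross-validated model fit. A minimal, self-contained sketch (using an assumed, smaller parameter grid rather than the full one above) that makes the sampling visible:

```python
from scipy.stats import randint as sp_randint, uniform as sp_uniform
from sklearn.model_selection import ParameterSampler

# Assumed toy search space, mixing discrete lists and scipy distributions
# exactly as param_test above does.
param_test = {
    'learning_rate': [0.01, 0.05, 0.1, 0.3],
    'num_leaves': sp_randint(6, 50),          # integers in [6, 50)
    'subsample': sp_uniform(loc=0.2, scale=0.8),  # floats in [0.2, 1.0)
}

# Draw 10 candidate settings, as RandomizedSearchCV would with n_iter=10.
candidates = list(ParameterSampler(param_test, n_iter=10, random_state=0))

print(len(candidates))        # 10 candidate settings = 10 * cv model fits
print(sorted(candidates[0]))  # each candidate sets every key once
```

With cv=5, a given n_iter costs n_iter * 5 fits, which is one practical way to budget it: pick the largest n_iter whose total fit count you can afford.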
[Discussion]:
Tags: python machine-learning scikit-learn cross-validation lightgbm