[Question Title]: Putting together sklearn pipeline + nested cross-validation for KNN regression
[Posted]: 2017-12-22 08:52:13
[Question]:

I'm trying to figure out how to build a workflow for sklearn.neighbors.KNeighborsRegressor that includes:

  • normalize features
  • feature selection (best subset of the 20 numeric features, no specific total)
  • cross-validate the hyperparameter K in the range 1 to 20
  • cross-validate the model
  • use RMSE as the error metric

There are so many different options in scikit-learn that I'm a bit overwhelmed trying to decide which classes I need.

Besides sklearn.neighbors.KNeighborsRegressor, I think I also need:

sklearn.pipeline.Pipeline  
sklearn.preprocessing.Normalizer
sklearn.model_selection.GridSearchCV
sklearn.model_selection.cross_val_score

sklearn.feature_selection.SelectKBest
OR
sklearn.feature_selection.SelectFromModel

Can someone show me what defining this pipeline/workflow would look like? I think it should be something like this:

import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Normalizer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score, GridSearchCV

# build regression pipeline
pipeline = Pipeline([('normalize', Normalizer()),
                     ('kbest', SelectKBest(f_classif)),
                     ('regressor', KNeighborsRegressor())])

# try regressor__n_neighbors from 1 to 20, and kbest__k from 1 to the number of features
parameters = {'kbest__k':  list(range(1, X.shape[1]+1)),
              'regressor__n_neighbors': list(range(1,21))}

# outer cross-validation on model, inner cross-validation on hyperparameters
scores = cross_val_score(GridSearchCV(pipeline, parameters, scoring="neg_mean_squared_error", cv=10), 
                         X, y, cv=10, scoring="neg_mean_squared_error", verbose=2)

rmses = np.abs(scores)**(1/2)
avg_rmse = np.mean(rmses)
print(avg_rmse)

It doesn't seem to error out, but some of my concerns are:

  • Am I performing the nested cross-validation correctly so that my RMSE is unbiased?
  • If I want to select the final model based on the best RMSE, should I use scoring="neg_mean_squared_error" for both cross_val_score and GridSearchCV?
  • Is SelectKBest with f_classif the best option for selecting features for a KNeighborsRegressor model?
  • How can I see:
    • which feature subset was selected as best
    • which K was selected as best

Any help is greatly appreciated!

[Discussion]:

  • Your code looks quite good, and the approach seems correct to me. Are you getting any errors or unexpected results?
  • Hi, thanks for the comment. I've updated my post with more information about my concerns.

Tags: python scikit-learn pipeline feature-selection hyperparameters


[Solution 1]:

Your code seems fine.

Regarding using scoring="neg_mean_squared_error" for both cross_val_score and GridSearchCV: I would do the same to make sure everything runs consistently, but the only way to verify this is to remove one of the two and see whether the results change.

SelectKBest is a good approach, but you could also use SelectFromModel, or even other methods that you can find here.
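
As a side note (not from the original answer): SelectFromModel requires an estimator that exposes coef_ or feature_importances_, which KNeighborsRegressor does not, so another model has to drive the selection step. A minimal sketch, assuming synthetic data and a Lasso model standing in purely for feature selection:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Normalizer

# synthetic stand-in data, since the original dataset is not shown
X, y = make_regression(n_samples=100, n_features=20, n_informative=5,
                       random_state=0)

# KNeighborsRegressor has no coef_/feature_importances_, so a Lasso
# is used here only to score features for the selection step
pipeline = Pipeline([('normalize', Normalizer()),
                     ('select', SelectFromModel(Lasso(alpha=0.1))),
                     ('regressor', KNeighborsRegressor())])
pipeline.fit(X, y)

# boolean mask of the features the Lasso-based selector kept
mask = pipeline.named_steps['select'].get_support()
print("kept", mask.sum(), "of", X.shape[1], "features")
```

The choice of Lasso (and alpha=0.1) is illustrative; any estimator with coefficients or importances would work for the selection step.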

Finally, to get the best parameters and the feature scores, I modified your code as follows:

# imports as in the question; X and y are assumed to be defined already
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Normalizer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import GridSearchCV


pipeline = Pipeline([('normalize', Normalizer()),
                     ('kbest', SelectKBest(f_classif)),
                     ('regressor', KNeighborsRegressor())])

# try regressor__n_neighbors from 1 to 20, and kbest__k from 1 to the number of features
parameters = {'kbest__k':  list(range(1, X.shape[1]+1)),
              'regressor__n_neighbors': list(range(1,21))}

# changes here

grid = GridSearchCV(pipeline, parameters, cv=10, scoring="neg_mean_squared_error")

grid.fit(X, y)

# get the best parameters and the best estimator
print("the best estimator is \n {} ".format(grid.best_estimator_))
print("the best parameters are \n {}".format(grid.best_params_))

# get the features scores rounded in 2 decimals
pip_steps = grid.best_estimator_.named_steps['kbest']

features_scores = ['%.2f' % elem for elem in pip_steps.scores_ ]
print("the features scores are \n {}".format(features_scores))

feature_scores_pvalues = ['%.3f' % elem for elem in pip_steps.pvalues_]
print("the feature_pvalues is \n {} ".format(feature_scores_pvalues))

# create a tuple of feature names, scores and pvalues, name it "features_selected_tuple"

featurelist = ['age', 'weight']

features_selected_tuple = [(featurelist[i], features_scores[i], feature_scores_pvalues[i])
                           for i in pip_steps.get_support(indices=True)]

# Sort the tuple by score, in reverse order

features_selected_tuple = sorted(features_selected_tuple,
                                 key=lambda feature: float(feature[1]), reverse=True)

# Print
print('Selected Features, Scores, P-Values')
print(features_selected_tuple)

Results with my data:

the best estimator is
Pipeline(steps=[('normalize', Normalizer(copy=True, norm='l2')), ('kbest', SelectKBest(k=2, score_func=<function f_classif at 0x0000000004ABC898>)), ('regressor', KNeighborsRegressor(algorithm='auto', leaf_size=30, metric='minkowski',
         metric_params=None, n_jobs=1, n_neighbors=18, p=2,
         weights='uniform'))])

the best parameters are
{'kbest__k': 2, 'regressor__n_neighbors': 18}

the features scores are
['8.98', '8.80']

the feature_pvalues is
['0.000', '0.000']

Selected Features, Scores, P-Values
[('correlation', '8.98', '0.000'), ('gene', '8.80', '0.000')]
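
One caveat worth adding (not in the original answer): fitting a single GridSearchCV on all of the data, as above, answers "which k and K win overall", but in the nested setup each outer fold runs its own search and may pick different hyperparameters. A rough sketch of inspecting the per-fold winners, assuming scikit-learn ≥ 0.20 (where cross_validate accepts return_estimator=True), synthetic data, and f_regression swapped in since the target is continuous:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.model_selection import GridSearchCV, cross_validate
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Normalizer

# synthetic stand-in data, since the original dataset is not shown
X, y = make_regression(n_samples=150, n_features=5, random_state=0)

pipeline = Pipeline([('normalize', Normalizer()),
                     ('kbest', SelectKBest(f_regression)),
                     ('regressor', KNeighborsRegressor())])
parameters = {'kbest__k': list(range(1, X.shape[1] + 1)),
              'regressor__n_neighbors': [1, 5, 10]}

inner = GridSearchCV(pipeline, parameters,
                     scoring="neg_mean_squared_error", cv=3)
result = cross_validate(inner, X, y, cv=3,
                        scoring="neg_mean_squared_error",
                        return_estimator=True)

# each outer fold keeps its own fitted GridSearchCV; the winning
# hyperparameters can differ between folds
for i, est in enumerate(result['estimator']):
    print("fold", i, est.best_params_)

# unbiased RMSE estimate from the outer folds
rmse = np.sqrt(-result['test_score']).mean()
print("nested-CV RMSE:", rmse)
```

If the folds disagree strongly on the chosen parameters, that is itself useful information about how stable the model selection is.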

[Discussion]:

  • Thanks! I see it shows the number of features used for kbest__k, but is there a way to see which specific columns were used? Does SelectKBest just try the first column, then the first and second columns, and so on, or does it try every permutation of features within the selected range?
  • @Jake I edited my post. I added code for the feature p-values and scores. I think it is based on permutations, as you said in your comment.
  • @Jake Second update to my answer. Now you can get the selected features.
  • Thank you so much!
  • @Jake Glad I could help