[Posted]: 2021-10-11 13:23:52
[Question]:
I have the following pipeline, which combines preprocessing, feature selection, and an estimator:
## Imports used by this snippet
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, OrdinalEncoder
from sklearn.compose import ColumnTransformer
from sklearn.feature_selection import SelectKBest
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

## Selecting categorical and numeric features
numerical_ix = X.select_dtypes(include=np.number).columns
categorical_ix = X.select_dtypes(exclude=np.number).columns

## Create preprocessing pipelines for each datatype
numerical_transformer = Pipeline(steps=[
    ('imputer', SimpleImputer(strategy='median')),
    ('scaler', StandardScaler())])
categorical_transformer = Pipeline(steps=[
    ('encoder', OrdinalEncoder()),
    ('scaler', StandardScaler())])

## Putting the preprocessing steps together
preprocessor = ColumnTransformer([
    ('numerical', numerical_transformer, numerical_ix),
    ('categorical', categorical_transformer, categorical_ix)],
    remainder='passthrough')

## Create example pipeline with kNN
example_pipe = Pipeline(steps=[
    ('preprocessor', preprocessor),
    ('selector', SelectKBest(k=len(X.columns))),  # keep the same number of columns for now
    ('classifier', KNeighborsClassifier())
])
cross_val_score(example_pipe, X, y, cv=5, scoring='accuracy').mean()
I wrote the following code to try out different values of k for SelectKBest and plot the results.
But how can I simultaneously search for the best value of k in the kNN classifier? I don't necessarily need to plot it, just find the optimal value. My guess is GridSearchCV, but I don't know how to apply it to different steps within the pipeline.
import matplotlib.pyplot as plt

k_range = list(range(1, len(X.columns)))  # 1 until 18
k_scores = []
for k in k_range:
    example_pipe = Pipeline(steps=[
        ('preprocessor', preprocessor),
        ('selector', SelectKBest(k=k)),  # select k best features this iteration
        ('classifier', KNeighborsClassifier())])
    score = cross_val_score(example_pipe, X, y, cv=5, scoring='accuracy').mean()
    k_scores.append(score)
plt.plot(k_range, k_scores)
plt.xlabel('Value of k in SelectKBest')
plt.xticks(k_range, rotation=20)
plt.ylabel('Cross-Validated Accuracy')
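For context on the GridSearchCV idea mentioned above: GridSearchCV can tune several pipeline steps at once by addressing parameters as `<step_name>__<param_name>`. Below is a minimal, self-contained sketch of that convention; it uses a synthetic dataset from `make_classification` and a simplified pipeline (no ColumnTransformer) purely for illustration, so the step names and data stand in for the real `X`, `y`, and `preprocessor`:

```python
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for X, y (all-numeric, so no ColumnTransformer needed here)
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

pipe = Pipeline(steps=[
    ('scaler', StandardScaler()),
    ('selector', SelectKBest()),
    ('classifier', KNeighborsClassifier())])

# Keys follow "<step_name>__<param_name>": both k's are searched jointly,
# so every (SelectKBest k, kNN n_neighbors) combination is cross-validated.
param_grid = {
    'selector__k': range(1, X.shape[1] + 1),
    'classifier__n_neighbors': range(1, 10),
}

search = GridSearchCV(pipe, param_grid, cv=5, scoring='accuracy')
search.fit(X, y)
print(search.best_params_)  # best combination of both parameters
print(search.best_score_)   # mean cross-validated accuracy of that combination
```

With the real pipeline from the question, the same pattern would use keys like `selector__k` and `classifier__n_neighbors`, since those are the step names given in `example_pipe`.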
[Tags]: python scikit-learn classification pipeline hyperparameters