【Question Title】GridSearchCV and RandomizedSearchCV in scikit-learn 0.24.0 or above do not print progress logs with n_jobs=-1
【Posted】2022-06-21 07:58:54
【Description】

In scikit-learn 0.24.0 or above, when you use GridSearchCV or RandomizedSearchCV with n_jobs=-1, setting any verbose number (1, 2, 3, or 100) prints no progress messages. However, with scikit-learn 0.23.2 or below, everything works as expected and joblib prints the progress messages.

Here is sample code you can use to repeat my experiment in Google Colab or a Jupyter notebook:

from sklearn import svm, datasets
from sklearn.model_selection import GridSearchCV

iris = datasets.load_iris()
parameters = {'kernel':('linear', 'rbf'), 'C':[0.1, 1, 10]}
svc = svm.SVC()

clf = GridSearchCV(svc, parameters, scoring='accuracy', refit=True, n_jobs=-1, verbose=60)
clf.fit(iris.data, iris.target)
print('Best accuracy score: %.2f' %clf.best_score_)

Result with scikit-learn 0.23.2:

Fitting 5 folds for each of 6 candidates, totalling 30 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 40 concurrent workers.
[Parallel(n_jobs=-1)]: Done   1 tasks      | elapsed:    0.0s
[Parallel(n_jobs=-1)]: Batch computation too fast (0.0295s.) Setting batch_size=2.
[Parallel(n_jobs=-1)]: Done   2 out of  30 | elapsed:    0.0s remaining:    0.5s
[Parallel(n_jobs=-1)]: Done   3 out of  30 | elapsed:    0.0s remaining:    0.3s
[Parallel(n_jobs=-1)]: Done   4 out of  30 | elapsed:    0.0s remaining:    0.3s
[Parallel(n_jobs=-1)]: Done   5 out of  30 | elapsed:    0.0s remaining:    0.2s
[Parallel(n_jobs=-1)]: Done   6 out of  30 | elapsed:    0.0s remaining:    0.2s
[Parallel(n_jobs=-1)]: Done   7 out of  30 | elapsed:    0.0s remaining:    0.1s
[Parallel(n_jobs=-1)]: Done   8 out of  30 | elapsed:    0.0s remaining:    0.1s
[Parallel(n_jobs=-1)]: Done   9 out of  30 | elapsed:    0.0s remaining:    0.1s
[Parallel(n_jobs=-1)]: Done  10 out of  30 | elapsed:    0.0s remaining:    0.1s
[Parallel(n_jobs=-1)]: Done  11 out of  30 | elapsed:    0.0s remaining:    0.1s
[Parallel(n_jobs=-1)]: Done  12 out of  30 | elapsed:    0.0s remaining:    0.1s
[Parallel(n_jobs=-1)]: Done  13 out of  30 | elapsed:    0.0s remaining:    0.1s
[Parallel(n_jobs=-1)]: Done  14 out of  30 | elapsed:    0.0s remaining:    0.1s
[Parallel(n_jobs=-1)]: Done  15 out of  30 | elapsed:    0.0s remaining:    0.0s
[Parallel(n_jobs=-1)]: Done  16 out of  30 | elapsed:    0.0s remaining:    0.0s
[Parallel(n_jobs=-1)]: Done  17 out of  30 | elapsed:    0.0s remaining:    0.0s
[Parallel(n_jobs=-1)]: Done  18 out of  30 | elapsed:    0.0s remaining:    0.0s
[Parallel(n_jobs=-1)]: Done  19 out of  30 | elapsed:    0.0s remaining:    0.0s
[Parallel(n_jobs=-1)]: Done  20 out of  30 | elapsed:    0.0s remaining:    0.0s
[Parallel(n_jobs=-1)]: Done  21 out of  30 | elapsed:    0.0s remaining:    0.0s
[Parallel(n_jobs=-1)]: Done  22 out of  30 | elapsed:    0.0s remaining:    0.0s
[Parallel(n_jobs=-1)]: Done  23 out of  30 | elapsed:    0.0s remaining:    0.0s
[Parallel(n_jobs=-1)]: Done  24 out of  30 | elapsed:    0.0s remaining:    0.0s
[Parallel(n_jobs=-1)]: Done  25 out of  30 | elapsed:    0.0s remaining:    0.0s
[Parallel(n_jobs=-1)]: Done  26 out of  30 | elapsed:    0.0s remaining:    0.0s
[Parallel(n_jobs=-1)]: Done  27 out of  30 | elapsed:    0.1s remaining:    0.0s
[Parallel(n_jobs=-1)]: Done  28 out of  30 | elapsed:    0.1s remaining:    0.0s
[Parallel(n_jobs=-1)]: Done  30 out of  30 | elapsed:    0.1s remaining:    0.0s
[Parallel(n_jobs=-1)]: Done  30 out of  30 | elapsed:    0.1s finished
Best accuracy score: 0.98

Result with scikit-learn 0.24.0 (tested up to v1.0.2):

Fitting 5 folds for each of 6 candidates, totaling 30 fits
Best accuracy score: 0.98

It looks to me like scikit-learn 0.24.0 or above does not pass the verbose value on to joblib, so no progress is printed when GridSearchCV or RandomizedSearchCV runs multiple processes with the 'loky' backend.

Any idea how to work around this in Google Colab or a Jupyter notebook and get progress logs with sklearn 0.24.0 or above?
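One thing that does still print in recent versions is the sequential path: with n_jobs=1 every fit runs in the main process, so the per-fit verbose output is not swallowed, at the cost of parallelism. A minimal sketch of the same iris setup:

```python
from sklearn import svm, datasets
from sklearn.model_selection import GridSearchCV

iris = datasets.load_iris()
parameters = {'kernel': ('linear', 'rbf'), 'C': [0.1, 1, 10]}

# n_jobs=1 keeps every fit in the main process, so verbose messages
# appear in the notebook even on scikit-learn 0.24.0 or above.
clf = GridSearchCV(svm.SVC(), parameters, scoring='accuracy', refit=True,
                   n_jobs=1, verbose=2)
clf.fit(iris.data, iris.target)
print('Best accuracy score: %.2f' % clf.best_score_)
```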

【Question Comments】

Tags: scikit-learn jupyter-notebook google-colaboratory joblib gridsearchcv


【Solution 1】

Here is a roundabout way to get GridSearchCV-like behavior in Google Colab while printing progress along the way. It would need to be adapted for RandomizedSearchCV-like behavior.

It requires creating training, validation, and test sets. We use the validation set to compare the candidate models, and hold out the test set for the final best model.

import gc
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np

from sklearn.neighbors import KernelDensity
from scipy import stats
from sklearn.metrics import classification_report, confusion_matrix, ConfusionMatrixDisplay, accuracy_score
from sklearn.model_selection import RandomizedSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, ParameterGrid

# This is based on the target and features from my dataset
y = relationships["tmrca"]
X = relationships.drop(columns = ["sample1", "sample2", "total_span_cM", "max_span_cM", "relationship", "tmrca"])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train, X_validation, y_train, y_validation = train_test_split(X_train, y_train, test_size=0.25, random_state=42)
print(f"X_train size: {len(X_train):,} \nX_validation size: {len(X_validation):,} \nX_test size: {len(X_test):,}")

Here we define the method.

def random_forest_tvt(para_grid, seed):
    # Grid-search hyperparameters such as n_estimators, max_depth, etc.:
    # fit on the training set, tune on the validation set, keep the best parameters.
    error_min = float("inf")
    best_para = None
    count = 0
    clf = RandomForestClassifier(n_jobs=-1, random_state=seed)
    num_fits = len(ParameterGrid(para_grid))
    for g in ParameterGrid(para_grid):
        count += 1
        print(f"fit {count} of {num_fits}")
        print(clf.set_params(**g), "\n")
        clf.fit(X_train, y_train)

        y_predict_validation = clf.predict(X_validation)
        accuracy_measure = accuracy_score(y_validation, y_predict_validation)
        error_validation = 1 - accuracy_measure
        print(f"The accuracy is {accuracy_measure * 100:.2f}%.\n")

        if error_validation < error_min:
            error_min = error_validation
            best_para = g

    # Refit with the best parameters; otherwise clf would keep the last grid combination.
    clf.set_params(**best_para)
    clf.fit(X_train, y_train)

    y_predict_train = clf.predict(X_train)
    error_train = 1 - accuracy_score(y_train, y_predict_train)

    y_predict_validation = clf.predict(X_validation)
    error_validation = 1 - accuracy_score(y_validation, y_predict_validation)

    y_predict_test = clf.predict(X_test)
    error_test = 1 - accuracy_score(y_test, y_predict_test)

    # np.fromiter(best_para.values(), dtype=float) would fail on string values
    # such as 'auto', so return the parameter values of interest directly.
    return (best_para['n_estimators'], best_para['max_depth'],
            error_train, error_validation, error_test, clf)

Then we define the parameter grid and call the method.

seed = 0

# Number of trees in random forest
n_estimators = [int(x) for x in np.linspace(start = 1000, stop = 5000, num = 5)]
# Number of features to consider at every split
max_features = ['auto', 'sqrt']
# Maximum number of levels in tree
max_depth = [int(x) for x in np.linspace(10, 110, num = 11)]
max_depth.append(None)
# Minimum number of samples required to split a node
min_samples_split = [2, 5, 10]
# Minimum number of samples required at each leaf node
min_samples_leaf = [1, 2, 4]
# Method of selecting samples for training each tree
bootstrap = [True, False]
# Random Hyperparameter Grid
random_grid = {'n_estimators': n_estimators,
               'max_features': max_features,
               'max_depth': max_depth,
               'min_samples_split': min_samples_split,
               'min_samples_leaf': min_samples_leaf,
               'bootstrap': bootstrap}
print(f"{random_grid}\n")

rf_best_n_estimators, rf_best_max_depth, rf_error_train, rf_error_validation, rf_error_test, rf_clf = random_forest_tvt(random_grid, seed)
print(' === Random Forest ===\n', 'Best parameters are: n_estimators=', rf_best_n_estimators, ', max_depth=', rf_best_max_depth, '\n',
      'training error: '+str(rf_error_train)+'\n'+' validation error: '+str(rf_error_validation)+'\n'+' testing error: '+str(rf_error_test)+'\n')
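The grid above expands to 5 × 2 × 12 × 3 × 3 × 2 = 2160 combinations, which is why the log below counts up to 2160. This can be checked directly with ParameterGrid (restating the same grid so the snippet stands alone):

```python
import numpy as np
from sklearn.model_selection import ParameterGrid

# Same grid as above: 5 * 2 * 12 * 3 * 3 * 2 = 2160 combinations
random_grid = {'n_estimators': [int(x) for x in np.linspace(1000, 5000, num=5)],
               'max_features': ['auto', 'sqrt'],
               'max_depth': [int(x) for x in np.linspace(10, 110, num=11)] + [None],
               'min_samples_split': [2, 5, 10],
               'min_samples_leaf': [1, 2, 4],
               'bootstrap': [True, False]}
print(len(ParameterGrid(random_grid)))  # 2160
```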

Below are the first few fit results, printed as output in Google Colab while the method is still running.

{'n_estimators': [1000, 2000, 3000, 4000, 5000], 'max_features': ['auto', 'sqrt'], 'max_depth': [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, None], 'min_samples_split': [2, 5, 10], 'min_samples_leaf': [1, 2, 4], 'bootstrap': [True, False]}

fit 1 of 2160
{'bootstrap': True, 'max_depth': 10, 'max_features': 'auto', 'min_samples_leaf': 1, 'min_samples_split': 2, 'n_estimators': 1000}
The accuracy is 85.13%.

fit 2 of 2160
{'bootstrap': True, 'max_depth': 10, 'max_features': 'auto', 'min_samples_leaf': 1, 'min_samples_split': 2, 'n_estimators': 2000}
The accuracy is 85.13%.

fit 3 of 2160
{'bootstrap': True, 'max_depth': 10, 'max_features': 'auto', 'min_samples_leaf': 1, 'min_samples_split': 2, 'n_estimators': 3000}
The accuracy is 85.13%.

fit 4 of 2160
{'bootstrap': True, 'max_depth': 10, 'max_features': 'auto', 'min_samples_leaf': 1, 'min_samples_split': 2, 'n_estimators': 4000}
The accuracy is 85.09%.

fit 5 of 2160

You can then use the model saved in rf_clf for further fine-tuning, or call its predict method on the test set.

y_predict_test = rf_clf.predict(X_test)
cal_accuracy = accuracy_score(y_test, y_predict_test)
print(f"The model has an accuracy score of {cal_accuracy * 100:.2f}%.")
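Since classification_report, confusion_matrix, and ConfusionMatrixDisplay are already imported above, the final model can also be inspected in more detail. A self-contained sketch on the iris data (the names here are illustrative, not the answer's dataset):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
y_pred = clf.predict(X_test)

# Rows are true classes, columns are predicted classes
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
```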

For behavior similar to RandomizedSearchCV, you can adapt the code to make a random selection from each list in the grid, and do so for a specific number of combinations. Further tweaks would be needed to make it perform k-fold behavior. As it stands, each model is fit on the training set and evaluated once on the validation set, and the selected model is then evaluated a final time on the test set.
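For the random-sampling variant, scikit-learn's own ParameterSampler can draw a fixed number of combinations from the same kind of grid, so the loop above only needs its iterator swapped. A sketch with a smaller illustrative grid (n_iter=20 is an assumed budget, not from the answer):

```python
from sklearn.model_selection import ParameterSampler

# Illustrative grid; any dict of lists works, as with ParameterGrid
random_grid = {'n_estimators': [1000, 2000, 3000, 4000, 5000],
               'max_depth': [10, 20, 30, None],
               'min_samples_leaf': [1, 2, 4],
               'bootstrap': [True, False]}

# Draw 20 random combinations instead of iterating the full grid
sampled = list(ParameterSampler(random_grid, n_iter=20, random_state=0))
for count, g in enumerate(sampled, start=1):
    print(f"fit {count} of {len(sampled)}: {g}")
```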

【Discussion】
