【Question Title】: How to iterate through different sci-kit learn classifiers
【Posted】: 2020-07-17 07:47:24
【Question】:

I'm running a bunch of models with scikit-learn to solve a classification problem.

How do I iterate over the different scikit-learn models?

from sklearn.ensemble import AdaBoostClassifier
from sklearn.naive_bayes import BernoulliNB
from sklearn.dummy import DummyClassifier

classifiers_name = ['AdaBoostClassifier',
                    'BernoulliNB',
                    'DummyClassifier']

def fitting_classifier(clf, X_train, y_train):
    return clf.fit(X_train, y_train)

for clf_n in classifiers_name:
    locals()['results_' + clf_n] = fitting_classifier(locals()[clf_n + str(())], X_train, y_train)

I seem to be getting an error in this part of the code: fitting_classifier(locals()[clf_n + str(())], X_train, y_train). The error shown is:

<ipython-input-31-cccf30ff4392> in summary_scores(file_path, image_format, scores)
    140         for clf_sn in classifiers_name:
--> 141             locals()['results_' + clf_n] = fitting_classifier(locals()[clf_n + str(())], X_train, y_train)
    142 
    143         # results_AdaBoostClassifier = fitting_classifier(AdaBoostClassifier(), X_train, y_train)

KeyError: 'AdaBoostClassifier()'

Any help with this would be greatly appreciated. Thank you.

【Comments】:

Tags: python loops for-loop scikit-learn globals


【Solution 1】:

You haven't mentioned the purpose of doing this. Why do you want to iterate over different scikit-learn models?

If you want to find out which of these models fits your data better and performs best, you can use something like this:

    # -------- Cross-validate each model with stratified k-fold cross-validation --------
    import pandas as pd
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.naive_bayes import BernoulliNB
    from sklearn.dummy import DummyClassifier
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    kfold = StratifiedKFold(n_splits=10)

    # Test the different algorithms: pass estimator instances, not name strings
    classifiers = [AdaBoostClassifier(),
                   BernoulliNB(),
                   DummyClassifier()]
    results = []
    for model in classifiers:
        results.append(cross_val_score(model, X_train, y=y_train, scoring="accuracy", cv=kfold, n_jobs=4))

    cv_means = []
    cv_std = []
    for cv_result in results:
        cv_means.append(cv_result.mean())
        cv_std.append(cv_result.std())

    cv_res = pd.DataFrame({"CrossValMeans": cv_means,
                           "CrossValerrors": cv_std,
                           "Algorithm": ["AdaBoostClassifier", "BernoulliNB", "DummyClassifier"]})
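
Once cv_res is built, the models can be ranked by their mean cross-validated accuracy, for example:

    # Rank the models by mean CV accuracy, best first
    print(cv_res.sort_values("CrossValMeans", ascending=False))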

If you want to ensemble these models:

Train them separately, find the best estimator for each with a hyperparameter search (e.g. GridSearchCV), and then combine them with a VotingClassifier:

    from sklearn.tree import DecisionTreeClassifier
    from sklearn.ensemble import VotingClassifier
    from sklearn.model_selection import GridSearchCV

    DTC = DecisionTreeClassifier()
    ADB = AdaBoostClassifier(DTC)

    ada_param_grid = {}  # Params here

    gsABC = GridSearchCV(ADB, param_grid=ada_param_grid, cv=kfold, scoring="accuracy", n_jobs=4, verbose=1)
    gsABC.fit(X_train, y_train)  # fit the search before reading best_estimator_

    AdaBoost_best = gsABC.best_estimator_

    # Likewise for the other classifiers, then perform the voting
    votingC = VotingClassifier(estimators=[('ada', AdaBoost_best),
                                           ('nb', BernoulliNB_best),
                                           ('dc', DummyClassifier_best)],
                               voting='soft', n_jobs=4)

    votingC = votingC.fit(X_train, y_train)
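
Note that voting='soft' averages predicted probabilities, so every estimator in the ensemble must implement predict_proba (otherwise use voting='hard'). As a rough sketch of how the fitted ensemble could then be checked, assuming a held-out split X_test/y_test that the question does not show:

    # Hypothetical hold-out evaluation; X_test and y_test are assumed to exist
    print("Voting ensemble accuracy:", votingC.score(X_test, y_test))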

【Discussion】:

  • Thanks. I need to iterate because the output is used to generate a report. If you know a way to iterate over the classifier classes, that would be great.
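
For report-style output, one way to avoid the locals() lookup (and the KeyError in the question) is to keep the instantiated classifiers in a dict keyed by name and iterate over that. A minimal sketch, assuming X_train and y_train are already defined:

    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.naive_bayes import BernoulliNB
    from sklearn.dummy import DummyClassifier

    # Map a display name to an estimator instance instead of looking names up in locals()
    classifiers = {
        'AdaBoostClassifier': AdaBoostClassifier(),
        'BernoulliNB': BernoulliNB(),
        'DummyClassifier': DummyClassifier(),
    }

    results = {}
    for name, clf in classifiers.items():
        results[name] = clf.fit(X_train, y_train)  # fitted estimator, keyed by name for the report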