【Question Title】: Recursive feature elimination on Random Forest using scikit-learn
【Posted】: 2014-07-30 04:24:08
【Question】:

I am trying to do recursive feature elimination with scikit-learn and a random forest classifier, using the OOB ROC as the method for scoring each subset created during the recursion.

However, when I try to use the RFECV method, I get the error AttributeError: 'RandomForestClassifier' object has no attribute 'coef_'

Random forests don't have coefficients per se, but they do have rankings by Gini score. So I'm wondering how to get around this problem.

Please note that I want to use a method that explicitly tells me which features from my pandas DataFrame were selected in the optimal grouping, as I am using recursive feature selection to try to minimize the amount of data I will feed into the final classifier.

Here is some example code:

from sklearn import datasets
import pandas as pd
from pandas import Series
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV

iris = datasets.load_iris()
x=pd.DataFrame(iris.data, columns=['var1','var2','var3', 'var4'])
y=pd.Series(iris.target, name='target')
rf = RandomForestClassifier(n_estimators=500, min_samples_leaf=5, n_jobs=-1)
rfecv = RFECV(estimator=rf, step=1, cv=10, scoring='ROC', verbose=2)
selector=rfecv.fit(x, y)

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/bbalin/anaconda/lib/python2.7/site-packages/sklearn/feature_selection/rfe.py", line 336, in fit
    ranking_ = rfe.fit(X_train, y_train).ranking_
  File "/Users/bbalin/anaconda/lib/python2.7/site-packages/sklearn/feature_selection/rfe.py", line 148, in fit
    if estimator.coef_.ndim > 1:
AttributeError: 'RandomForestClassifier' object has no attribute 'coef_'

【Comments】:

  • Another approach is to use the feature_importances_ attribute after calling predict or predict_proba; it returns an array of percentages in the order the features were passed in. See the online example.
  • Seen it; what I'd like to know, though, is whether there is something that lets me do 10-fold validation and determine the optimal subset of features.
  • I had to do something similar, but I did it manually by sorting the feature importances and then trimming 1, 3 or 5 features at a time. I didn't use your approach, I have to say, so I don't know if it can be done.
  • Could you share your manual approach?
  • I'll post my code tomorrow morning; it's on my work computer, so around 8 AM BST.

Tags: python pandas scikit-learn random-forest feature-selection


【Solution 1】:

I submitted a request to add coef_ so that RandomForestClassifier could be used with RFECV. However, the change had already been made. The change will be in version 0.17.

https://github.com/scikit-learn/scikit-learn/issues/4945

If you want to use it right away, you can pull the latest development version.
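
A note for newer versions: once this change landed, RFECV accepts RandomForestClassifier directly, falling back to feature_importances_ when the estimator has no coef_. A minimal sketch (parameter values are illustrative, not from the question):

```python
# With scikit-learn 0.17+, RFECV works with RandomForestClassifier directly:
# elimination falls back to feature_importances_ when coef_ is absent.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV

X, y = load_iris(return_X_y=True)
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rfecv = RFECV(estimator=rf, step=1, cv=5)
rfecv.fit(X, y)
print(rfecv.support_)   # boolean mask over the input features
print(rfecv.ranking_)   # rank 1 = kept in the optimal subset
```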

【Discussion】:

    【Solution 2】:

    Here is what I did to adapt RandomForestClassifier to work with RFECV:

    class RandomForestClassifierWithCoef(RandomForestClassifier):
        def fit(self, *args, **kwargs):
            super(RandomForestClassifierWithCoef, self).fit(*args, **kwargs)
            self.coef_ = self.feature_importances_
            return self  # follow the scikit-learn convention of fit returning self
    

    If you use 'accuracy' or 'f1' scoring, this class works as-is. For 'roc_auc', RFECV complains that a multiclass format is not supported. Changing it to a two-class classification with the code below makes 'roc_auc' scoring work. (Using Python 3.4.1 and scikit-learn 0.15.1)

    y=(pd.Series(iris.target, name='target')==2).astype(int)
    

    Plugged into your code:

    from sklearn import datasets
    import pandas as pd
    from pandas import Series
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import RFECV
    
    class RandomForestClassifierWithCoef(RandomForestClassifier):
        def fit(self, *args, **kwargs):
            super(RandomForestClassifierWithCoef, self).fit(*args, **kwargs)
            self.coef_ = self.feature_importances_
            return self  # follow the scikit-learn convention of fit returning self
    
    iris = datasets.load_iris()
    x=pd.DataFrame(iris.data, columns=['var1','var2','var3', 'var4'])
    y=(pd.Series(iris.target, name='target')==2).astype(int)
    rf = RandomForestClassifierWithCoef(n_estimators=500, min_samples_leaf=5, n_jobs=-1)
    rfecv = RFECV(estimator=rf, step=1, cv=2, scoring='roc_auc', verbose=2)
    selector=rfecv.fit(x, y)
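
To answer the part of the question about identifying which DataFrame columns were chosen: the fitted selector exposes a boolean support_ mask aligned with the input columns. A self-contained sketch (assumes a scikit-learn version where RFECV accepts tree ensembles directly; cv and n_estimators are illustrative):

```python
# Sketch: recovering the selected column names from a pandas DataFrame
# after fitting RFECV.
import pandas as pd
from sklearn import datasets
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV

iris = datasets.load_iris()
x = pd.DataFrame(iris.data, columns=['var1', 'var2', 'var3', 'var4'])
y = pd.Series(iris.target, name='target')

rf = RandomForestClassifier(n_estimators=100, random_state=0)
selector = RFECV(estimator=rf, step=1, cv=5).fit(x, y)

# support_ is a boolean mask aligned with x.columns
selected = x.columns[selector.support_].tolist()
print(selected)
```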
    

    【Discussion】:

      【Solution 3】:

      This is what I came up with. It is a pretty simple solution, and relies on a custom accuracy metric (called weightedAccuracy) since I am classifying a highly unbalanced dataset. But it should be easily extensible if desired.

      from sklearn import datasets
      import pandas
      from sklearn.ensemble import RandomForestClassifier
      from sklearn import cross_validation
      from sklearn.metrics import confusion_matrix
      
      
      def get_enhanced_confusion_matrix(actuals, predictions, labels):
          """"enhances confusion_matrix by adding sensivity and specificity metrics"""
          cm = confusion_matrix(actuals, predictions, labels = labels)
          sensitivity = float(cm[1][1]) / float(cm[1][0]+cm[1][1])
          specificity = float(cm[0][0]) / float(cm[0][0]+cm[0][1])
          weightedAccuracy = (sensitivity * 0.9) + (specificity * 0.1)
          return cm, sensitivity, specificity, weightedAccuracy
      
      iris = datasets.load_iris()
      x=pandas.DataFrame(iris.data, columns=['var1','var2','var3', 'var4'])
      y=pandas.Series(iris.target, name='target')
      
      response, _  = pandas.factorize(y)
      
      xTrain, xTest, yTrain, yTest = cross_validation.train_test_split(x, response, test_size = .25, random_state = 36583)
      print "building the first forest"
      rf = RandomForestClassifier(n_estimators = 500, min_samples_split = 2, n_jobs = -1, verbose = 1)
      rf.fit(xTrain, yTrain)
      importances = pandas.DataFrame({'name':x.columns,'imp':rf.feature_importances_
                                      }).sort(['imp'], ascending = False).reset_index(drop = True)
      
      cm, sensitivity, specificity, weightedAccuracy = get_enhanced_confusion_matrix(yTest, rf.predict(xTest), [0,1])
      numFeatures = len(x.columns)
      
      rfeMatrix = pandas.DataFrame({'numFeatures':[numFeatures], 
                                    'weightedAccuracy':[weightedAccuracy], 
                                    'sensitivity':[sensitivity], 
                                    'specificity':[specificity]})
      
      print "running RFE on  %d features"%numFeatures
      
      for i in range(1,numFeatures,1):
          varsUsed = importances['name'][0:i]
          print "now using %d of %s features"%(len(varsUsed), numFeatures)
          xTrain, xTest, yTrain, yTest = cross_validation.train_test_split(x[varsUsed], response, test_size = .25)
          rf = RandomForestClassifier(n_estimators = 500, min_samples_split = 2,
                                      n_jobs = -1, verbose = 1)
          rf.fit(xTrain, yTrain)
          cm, sensitivity, specificity, weightedAccuracy = get_enhanced_confusion_matrix(yTest, rf.predict(xTest), [0,1])
          print("\n"+str(cm))
          print('the sensitivity is %d percent'%(sensitivity * 100))
          print('the specificity is %d percent'%(specificity * 100))
          print('the weighted accuracy is %d percent'%(weightedAccuracy * 100))
          rfeMatrix = rfeMatrix.append(
                                      pandas.DataFrame({'numFeatures':[len(varsUsed)], 
                                      'weightedAccuracy':[weightedAccuracy], 
                                      'sensitivity':[sensitivity], 
                                      'specificity':[specificity]}), ignore_index = True)    
      print("\n"+str(rfeMatrix))    
      maxAccuracy = rfeMatrix.weightedAccuracy.max()
      maxAccuracyFeatures = min(rfeMatrix.numFeatures[rfeMatrix.weightedAccuracy == maxAccuracy])
      featuresUsed = importances['name'][0:maxAccuracyFeatures].tolist()
      
      print "the final features used are %s"%featuresUsed
      

      【Discussion】:

        【Solution 4】:

        This is my code; I have tidied it up a bit to make it relevant to your task:

        features_to_use = fea_cols #  this is a list of features
        # empty dataframe
        trim_5_df = DataFrame(columns=features_to_use)
        run=1
        # this will remove the 5 worst features determined by their feature importance computed by the RF classifier
        while len(features_to_use)>6:
            print('number of features:%d' % (len(features_to_use)))
            # build the classifier
            clf = RandomForestClassifier(n_estimators=1000, random_state=0, n_jobs=-1)
            # train the classifier
            clf.fit(train[features_to_use], train['OpenStatusMod'].values)
            print('classifier score: %f\n' % clf.score(train[features_to_use], train['OpenStatusMod'].values))
            # predict the class and print the classification report, f1 micro, f1 macro score
            pred = clf.predict(test[features_to_use])
            print(classification_report(test['OpenStatusMod'].values, pred, target_names=status_labels))
            print('micro score: ')
            print(metrics.precision_recall_fscore_support(test['OpenStatusMod'].values, pred, average='micro'))
            print('macro score:\n')
            print(metrics.precision_recall_fscore_support(test['OpenStatusMod'].values, pred, average='macro'))
            # predict the class probabilities
            probs = clf.predict_proba(test[features_to_use])
            # rescale the priors
            new_probs = kf.cap_and_update_priors(priors, probs, private_priors, 0.001)
            # calculate logloss with the rescaled probabilities
            print('log loss: %f\n' % log_loss(test['OpenStatusMod'].values, new_probs))
            row={}
            if hasattr(clf, "feature_importances_"):
                # sort the features by importance
                sorted_idx = np.argsort(clf.feature_importances_)
                # reverse the order so it is descending
                sorted_idx = sorted_idx[::-1]
                # add to dataframe
                row['num_features'] = len(features_to_use)
                row['features_used'] = ','.join(features_to_use)
                # trim the worst 5
                sorted_idx = sorted_idx[: -5]
                # swap the features list with the trimmed features
                temp = features_to_use
                features_to_use=[]
                for feat in sorted_idx:
                    features_to_use.append(temp[feat])
                # add the logloss performance
                row['logloss']=[log_loss(test['OpenStatusMod'].values, new_probs)]
            print('')
            # add the row to the dataframe
            trim_5_df = trim_5_df.append(DataFrame(row))
            run += 1
        

        So what I am doing here is: I have a list of features that I want to train on and then predict against; using the feature importances, I then trim the worst 5 and repeat. During each run I add a row recording the prediction performance so that I can analyze it later.

        The original code was much bigger and I had different classifiers and datasets I was analyzing, but I hope you get the picture from the above. What I noticed was that, for random forest, the number of features I removed on each run affected the performance, so trimming by 1, 3 and 5 features at a time resulted in a different set of best features.

        I found that using a GradientBoostingClassifier was more predictable and repeatable, in the sense that the final set of best features agreed whether I trimmed 1, 3 or 5 features at a time.

        I hope I am not teaching you to suck eggs here, you probably know more than me, but my approach to ablative analysis is to use a fast classifier to get a rough idea of the best feature sets, then use a better-performing classifier, then start hyperparameter tuning, again doing coarse-grained comparisons and then fine-grained ones once I get a feel for what the best parameters are.
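
The trim-the-worst-k loop described above can be boiled down to a self-contained sketch; the dataset, the value of k, and the scoring here are placeholders, and the log-loss/prior-rescaling bookkeeping is omitted:

```python
# Sketch: repeatedly drop the k least-important features, recording
# test accuracy each round (dataset and k are illustrative).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

features = np.arange(X.shape[1])  # start with all column indices
k = 5                             # features to drop per round
history = []                      # (n_features, test accuracy) per round

while len(features) > k:
    rf = RandomForestClassifier(n_estimators=100, random_state=0, n_jobs=-1)
    rf.fit(X_train[:, features], y_train)
    history.append((len(features), rf.score(X_test[:, features], y_test)))
    # keep everything except the k least-important features
    order = np.argsort(rf.feature_importances_)  # ascending importance
    features = features[order[k:]]

print(history)
```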

        【Discussion】:
