【Posted】: 2021-04-13 12:09:45
【Problem Description】:
My dataset is imbalanced, so I applied RandomOverSampler to obtain a balanced dataset.
oversample = RandomOverSampler(sampling_strategy='minority')
X_over, y_over = oversample.fit_resample(X, y)
After that I followed the RandomForest implementation for feature selection from this Kaggle post:
https://www.kaggle.com/gunesevitan/titanic-advanced-feature-engineering-tutorial (go to the bottom of the page and you will see a similar implementation).
I have a real dataset similar to Titanic's :) and I am trying to get the feature importances from it!
The problem I am facing is that although the classifier's accuracy is very high, ~0.99, all the feature importances I get are around ~0.1. What is causing this? Or is this fine?
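One thing worth checking first: `feature_importances_` in a RandomForestClassifier is normalized to sum to 1, so the values are relative shares, not accuracy-like percentages. With roughly ten features, an average importance of ~0.1 per feature is exactly what a near-uniform importance distribution looks like. A minimal sketch on synthetic data (`make_classification` here is a hypothetical stand-in for the real dataset):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic data with 10 features (stand-in for the real dataset)
X, y = make_classification(n_samples=500, n_features=10, random_state=42)

clf = RandomForestClassifier(n_estimators=20, max_depth=5, random_state=42)
clf.fit(X, y)

# Importances are normalized: they always sum to 1 (up to float rounding),
# so the *average* importance is 1 / n_features = 0.1 here.
print(clf.feature_importances_.sum())
print(1 / X.shape[1])
```

So an individual importance of ~0.1 says "about average share", not "the feature explains 0.1% of anything".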
Here is the code I am using; it is similar to the one in the link I provided (go to the bottom of that page).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedKFold

SEED = 42  # define before it is used below
N = 15

classifiers = [RandomForestClassifier(random_state=SEED,
                                      criterion='gini',
                                      n_estimators=20,
                                      bootstrap=True,
                                      max_depth=5,
                                      n_jobs=-1)]
               # DecisionTreeClassifier(),
               # LogisticRegression(),
               # KNeighborsClassifier(),
               # GradientBoostingClassifier(),
               # SVC(probability=True), GaussianNB()

log_cols = ["Classifier", "Accuracy"]
log = pd.DataFrame(columns=log_cols)

skf = StratifiedKFold(n_splits=N, random_state=None, shuffle=True)
importances = pd.DataFrame(np.zeros((X.shape[1], N)),
                           columns=['Fold_{}'.format(i) for i in range(1, N + 1)],
                           index=data.columns)
acc_dict = {}

for fold, (train_index, test_index) in enumerate(skf.split(X_over, y_over)):
    X_train, X_test = X_over[train_index], X_over[test_index]
    y_train, y_test = y_over[train_index], y_over[test_index]
    for clf in classifiers:
        name = clf.__class__.__name__
        clf.fit(X_train, y_train)
        test_predictions = clf.predict(X_test)
        acc = accuracy_score(y_test, test_predictions)
        if 'Random' in name:
            # enumerate() starts at 0, so index with `fold` directly;
            # the original `fold - 1` wrote the first fold into the last column
            importances.iloc[:, fold] = clf.feature_importances_
        if name in acc_dict:
            acc_dict[name] += acc
        else:
            acc_dict[name] = acc
        # grid search for the best RF input parameters
        # CV_rfc = GridSearchCV(estimator=clf, param_grid=param_grid, cv=5)
        # CV_rfc.fit(X_train, y_train)

for clf in acc_dict:
    acc_dict[clf] = acc_dict[clf] / N  # average over all N folds (was hard-coded /10.0)
    log_entry = pd.DataFrame([[clf, acc_dict[clf]]], columns=log_cols)
    log = log.append(log_entry)
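Once the loop finishes, the per-fold importance columns can be averaged into a single ranking, as the linked notebook does. A minimal sketch with a hand-built importances frame (the feature names and numbers here are made up purely for illustration):

```python
import numpy as np
import pandas as pd

N = 3  # fewer folds than in the question, just for illustration
features = ['age', 'fare', 'sex']  # hypothetical feature names
importances = pd.DataFrame(
    np.array([[0.50, 0.40, 0.45],
              [0.30, 0.35, 0.40],
              [0.20, 0.25, 0.15]]),
    columns=['Fold_{}'.format(i) for i in range(1, N + 1)],
    index=features,
)

# Mean importance across folds, sorted descending for a final ranking
importances['Mean_Importance'] = importances.iloc[:, :N].mean(axis=1)
ranking = importances['Mean_Importance'].sort_values(ascending=False)
print(ranking)
```

Because each fold's importances sum to 1, the fold-averaged values do too, which again caps the typical per-feature value near 1 / n_features.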
The feature importance values I get are almost identical, the largest being only ~0.1.
Checked the confusion matrix as suggested by @AlexSerraMarrugat.
EDIT
Test: 0.9926166568222091 Train: 0.9999704661911724
EDIT 2
Tried random oversampling after the train/test split:
from collections import Counter

from imblearn.over_sampling import RandomOverSampler

oversample = RandomOverSampler(sampling_strategy='minority')
x_over, y_over = oversample.fit_resample(X_train, Y_train)
# summarize class distribution
print(Counter(y_over))
print(len(x_over))
# create the confusion matrix
import matplotlib.pyplot as plt
from sklearn.metrics import plot_confusion_matrix

clf = RandomForestClassifier(random_state=0)  # change the hyperparameters here
clf.fit(x_over, y_over)
predict_y = clf.predict(x_test)
plot_confusion_matrix(clf, x_test, y_test, cmap=plt.cm.Blues)
print("Test: ", clf.score(x_test, y_test))
print("Train: ", clf.score(x_over, y_over))
Test: 0.9926757235676315 Train: 1.0
EDIT 3: confusion matrix on the training data
from sklearn.metrics import plot_confusion_matrix
plot_confusion_matrix(clf, X_train, Y_train, cmap=plt.cm.Blues)
print("Train: ", clf.score(X_train, Y_train))
【Comments】:
- I can assure you that oversampling before splitting into train and validation sets is incorrect. You should split first, and then oversample only your training data. This is done to simulate how your algorithm will be used in the real world: you will not be oversampling the data you need to predict in production. That explains the suspiciously high accuracy.
- @GaussianPrior Thanks for the clarification. If I split first with from sklearn.model_selection import train_test_split; X_train, x_test, Y_train, y_test = train_test_split(X, y, test_size=0.2), then oversample with from imblearn.over_sampling import RandomOverSampler; oversample = RandomOverSampler(sampling_strategy='minority'); x_over, y_over = oversample.fit_resample(X_train, Y_train), and then do clf.fit(x_over, y_over), the accuracy drops from 99% to 0.1%.
- Wait, what? From 99% to 10%, or from 99% to 0.1%? How many classes do you have?
- @GaussianPrior When I split into train and test datasets (0.2), I get 16k 0s and 300 1s in the test dataset.
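Those class counts explain the accuracy numbers above: with about 16,000 zeros and 300 ones in the test set, a model that always predicts 0 already scores ~98% accuracy, so ~0.99 is barely above the majority-class baseline. A quick back-of-the-envelope check:

```python
# Majority-class baseline for a test set with ~16,000 zeros and ~300 ones
# (the counts quoted in the comment above)
n_majority, n_minority = 16_000, 300
baseline_accuracy = n_majority / (n_majority + n_minority)
print(round(baseline_accuracy, 4))  # ≈ 0.9816
```

This is why metrics such as the confusion matrix, precision/recall, or balanced accuracy are more informative than plain accuracy on data this imbalanced.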
Tags: python machine-learning scikit-learn random-forest feature-selection