Posted: 2021-05-21 23:19:44
Question:
I've been experimenting with RFECV on the Boston dataset.
My understanding so far is that, to prevent data leakage, it is important to perform activities like this only on the training data, not on the whole dataset.
I ran RFECV on the training data alone, and it indicated that 13 of the 14 features are optimal. However, I then ran the same process on the whole dataset, and this time it indicated that only 6 features are optimal, which seems far more plausible.
To illustrate:
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LinearRegression
from sklearn.datasets import load_boston
### CONSTANTS
TARGET_COLUMN = 'Price'
TEST_SIZE = 0.1
RANDOM_STATE = 0
### LOAD THE DATA AND ASSIGN TO X and y
data_dict = load_boston()
data = data_dict.data
features = list(data_dict.feature_names)
target = data_dict.target
df = pd.DataFrame(data=data, columns=features)
df[TARGET_COLUMN] = target
X = df[features]
y = df[TARGET_COLUMN]
### PERFORM TRAIN TEST SPLIT
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=TEST_SIZE,
random_state=RANDOM_STATE)
#### DETERMINE THE DATA THAT IS PASSED TO RFECV
## Just the Training data
X_input = X_train
y_input = y_train
## All the data
# X_input = X
# y_input = y
### IMPLEMENT RFECV AND GET RESULTS
rfecv = RFECV(estimator=LinearRegression(), step=1, scoring='neg_mean_squared_error')
rfecv.fit(X_input, y_input)
rfecv.transform(X_input)
print(f'Optimal number of features: {rfecv.n_features_}')
imp_feats = X.drop(X.columns[np.where(rfecv.support_ == False)[0]], axis=1)
print('Important features:', list(imp_feats.columns))
Running the above produces:
Optimal number of features: 13
Important features: ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT']
Now, if I change the code so that RFECV is fit on all of the data:
#### DETERMINE THE DATA THAT IS PASSED TO RFECV
## Just the Training data
# X_input = X_train # NOW COMMENTED OUT
# y_input = y_train # NOW COMMENTED OUT
## All the data
X_input = X # NOW UN-COMMENTED
y_input = y # NOW UN-COMMENTED
and run it again, I get the following:
Optimal number of features: 6
Important features: ['CHAS', 'NOX', 'RM', 'DIS', 'PTRATIO', 'LSTAT']
I don't understand why the result on the whole dataset differs so markedly from (and looks more accurate than) the result on the training set alone.
I've tried making the training set nearly the size of the whole dataset by setting test_size very small (via my TEST_SIZE constant), but I still get this seemingly improbable difference.
What am I missing?
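For context, one thing worth checking is how sensitive RFECV's reported optimum is to the cross-validation splits themselves, independent of how much data it sees. The sketch below refits RFECV several times with differently seeded KFold splits; the synthetic dataset from make_regression is a hypothetical stand-in with an assumed shape roughly matching Boston (506 rows, 13 features), used here because load_boston was removed from scikit-learn 1.2+:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

# Synthetic stand-in for the Boston data (load_boston is gone in
# scikit-learn >= 1.2): 13 features, only 6 of them informative.
X, y = make_regression(n_samples=506, n_features=13, n_informative=6,
                       noise=25.0, random_state=0)

# Refit RFECV with differently seeded CV splits; the selected number of
# features can shift from seed to seed even though the data is fixed.
counts = []
for seed in range(5):
    cv = KFold(n_splits=5, shuffle=True, random_state=seed)
    rfecv = RFECV(estimator=LinearRegression(), step=1,
                  scoring='neg_mean_squared_error', cv=cv)
    rfecv.fit(X, y)
    counts.append(rfecv.n_features_)

print(counts)
```

If the counts vary across seeds, the 13-vs-6 gap may owe as much to how the folds fall as to the train/full-data distinction.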
Tags: machine-learning scikit-learn feature-selection