I would suggest using a random forest for this purpose - a random forest contains many trees, each built on a subset of the predictors.
You can then simply inspect RandomForestVariableName.estimators_ to see the random_states that were used in the model.
I'll use my own code as an example here:
import csv
import numpy as np
from sklearn.ensemble import RandomForestClassifier

with open(r'C:\Users\Saskia Hill\Desktop\Exported\FinalSpreadsheet.csv', newline='') as csvfile:
    titanic_reader = csv.reader(csvfile, delimiter=',', quotechar='"')
    row = next(titanic_reader)  # header row
    feature_names = np.array(row)
    # Load dataset, and target classes
    titanic_X, titanic_y = [], []
    for row in titanic_reader:
        titanic_X.append(row)
        titanic_y.append(row[11])  # the target values are your class labels
titanic_X = np.array(titanic_X)
titanic_y = np.array(titanic_y)
print(titanic_X, titanic_y)
print(feature_names, titanic_X[0], titanic_y[0])
titanic_X = titanic_X[:, [2,3,4,5,6,7,8,9,10]]  # these are your predictors/features
feature_names = feature_names[[2,3,4,5,6,7,8,9,10]]

rfclf = RandomForestClassifier(criterion='entropy', min_samples_leaf=1, max_features='auto', max_leaf_nodes=None, verbose=0)
rfclf = rfclf.fit(titanic_X, titanic_y)
rfclf.estimators_  # the output for this is pasted below:
[DecisionTreeClassifier(compute_importances=None, criterion='entropy',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_density=None, min_samples_leaf=1, min_samples_split=2,
random_state=1490702865, splitter='best'),
DecisionTreeClassifier(compute_importances=None, criterion='entropy',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_density=None, min_samples_leaf=1, min_samples_split=2,
random_state=174216030, splitter='best') ......
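If you only want the seeds themselves rather than the full printed repr of each tree, you can pull the random_state attribute off each fitted estimator directly. A minimal sketch, using a small made-up dataset in place of my spreadsheet:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy data standing in for the spreadsheet features above
X = np.random.RandomState(0).rand(20, 4)
y = np.random.RandomState(1).randint(0, 2, 20)

clf = RandomForestClassifier(n_estimators=5, criterion='entropy').fit(X, y)

# Each fitted tree records the seed it was grown with
seeds = [est.random_state for est in clf.estimators_]
print(seeds)  # one integer seed per tree
```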
So, random forests introduce randomness into the decision-tree building, and no adjustment of the initial data used for the decision trees is needed; they also act as a form of cross-validation, giving you more confidence in the accuracy of your results (especially if, like me, you have a small dataset).