【Posted at】: 2017-03-26 14:24:38
【Problem description】:
I want to compare AdaBoost against a decision tree. As a proof of principle, I set the number of estimators in AdaBoost to 1, with a decision tree classifier as the default base estimator, expecting the same result as a plain decision tree.
I do indeed get the same accuracy when predicting my test labels. However, the fitting time is much shorter for AdaBoost, while the test time is somewhat longer. AdaBoost seems to use the same default settings as DecisionTreeClassifier, since otherwise the accuracy would not be exactly the same (see the check sketched after the output below).
Can anyone explain this?
Code
from time import time
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
print("creating classifier")
clf = AdaBoostClassifier(n_estimators = 1)
clf2 = DecisionTreeClassifier()
print("starting to fit")
time0 = time()
clf.fit(features_train,labels_train) #fit adaboost
fitting_time = time() - time0
print("time for fitting adaboost was", fitting_time)
time0 = time()
clf2.fit(features_train,labels_train) #fit dtree
fitting_time = time() - time0
print("time for fitting dtree was", fitting_time)
time1 = time()
pred = clf.predict(features_test) #test adaboost
test_time = time() - time1
print("time for testing adaboost was", test_time)
time1 = time()
pred = clf2.predict(features_test) #test dtree
test_time = time() - time1
print("time for testing dtree was", test_time)
accuracy_ada = accuracy_score(pred, labels_test) #acc ada
print("accuracy for adaboost is", accuracy_ada)
accuracy_dt = accuracy_score(pred, labels_test) #acc dtree
print("accuracy for dtree is", accuracy_dt)
Output
('time for fitting adaboost was', 3.8290421962738037)
('time for fitting dtree was', 85.19442415237427)
('time for testing adaboost was', 0.1834099292755127)
('time for testing dtree was', 0.056527137756347656)
('accuracy for adaboost is', 0.99089874857792948)
('accuracy for dtree is', 0.99089874857792948)
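To check whether the two really share the same defaults, one could look at the single weak learner that AdaBoost actually fitted; a minimal sketch, assuming the script above has already been run so that clf and clf2 are fitted (scikit-learn exposes the fitted weak learners through the estimators_ attribute):
# Compare the tree fitted inside AdaBoost with the standalone tree.
print(clf.estimators_[0].get_params())  # hyperparameters of AdaBoost's single weak learner
print(clf2.get_params())                # hyperparameters of the standalone DecisionTreeClassifier
# Depth of the trees that were actually grown (recorded on the internal tree_ object):
print(clf.estimators_[0].tree_.max_depth, clf2.tree_.max_depth)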
【Comments】:
- What are the dimensions of features_train? When I repeat your experiment with 100 three-dimensional samples, the decision tree is roughly 10 times faster than AdaBoost.
- Also, try using a profiler. IPython's %prun magic is a good option.
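For example (a minimal sketch; %prun only works inside an IPython/Jupyter session, so the standard-library cProfile equivalent is shown here for a plain script):
import cProfile

# Script equivalent of IPython's `%prun clf.fit(...)`: profile the AdaBoost fit
# and sort the report by cumulative time. Assumes clf and features_train/labels_train
# are module-level names from the question's script.
cProfile.run("clf.fit(features_train, labels_train)", sort="cumtime")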
- features_train has 16000 features for each of 3785 samples. I am interested in the conceptual difference between the two: how do the algorithms they use differ? I expected AdaBoostClassifier with one estimator to do exactly what DecisionTreeClassifier does.
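If the goal is for the single-estimator ensemble to wrap exactly the same model, one option would be to hand it the full tree explicitly; a minimal sketch, assuming the base_estimator parameter name used by scikit-learn releases of that era (newer releases renamed it to estimator), with the train/test arrays taken from the question:
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Wrap an explicit, fully grown decision tree instead of the default depth-1 stump.
# Older scikit-learn takes `base_estimator=`; recent releases renamed it to `estimator=`.
clf_full = AdaBoostClassifier(base_estimator=DecisionTreeClassifier(), n_estimators=1)
clf_full.fit(features_train, labels_train)  # data assumed from the question's script
print(accuracy_score(labels_test, clf_full.predict(features_test)))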
Tags: python machine-learning scikit-learn decision-tree adaboost