Here are some timings for the scikit-learn document classification example on my machine (Python 2.7, NumPy 1.8.2, SciPy 0.13.3, scikit-learn 0.15.2, Intel Core i7-3540M laptop running on battery). The dataset is 20 Newsgroups; I've trimmed the output quite a bit.
$ python examples/document_classification_20newsgroups.py --all_categories
data loaded
11314 documents - 22.055MB (training set)
7532 documents - 13.801MB (test set)
20 categories
Extracting features from the training dataset using a sparse vectorizer
done in 2.849053s at 7.741MB/s
n_samples: 11314, n_features: 129792
Extracting features from the test dataset using the same vectorizer
done in 1.526641s at 9.040MB/s
n_samples: 7532, n_features: 129792
________________________________________________________________________________
Training:
LinearSVC(C=1.0, class_weight=None, dual=False, fit_intercept=True,
intercept_scaling=1, loss='l2', multi_class='ovr', penalty='l2',
random_state=None, tol=0.001, verbose=0)
train time: 5.274s
test time: 0.033s
f1-score: 0.860
dimensionality: 129792
density: 1.000000
________________________________________________________________________________
Training:
SGDClassifier(alpha=0.0001, class_weight=None, epsilon=0.1, eta0=0.0,
fit_intercept=True, l1_ratio=0.15, learning_rate='optimal',
loss='hinge', n_iter=50, n_jobs=1, penalty='l2', power_t=0.5,
random_state=None, shuffle=False, verbose=0, warm_start=False)
train time: 3.521s
test time: 0.038s
f1-score: 0.857
dimensionality: 129792
density: 0.390184
________________________________________________________________________________
Training:
MultinomialNB(alpha=0.01, class_prior=None, fit_prior=True)
train time: 0.161s
test time: 0.036s
f1-score: 0.836
dimensionality: 129792
density: 1.000000
________________________________________________________________________________
Training:
BernoulliNB(alpha=0.01, binarize=0.0, class_prior=None, fit_prior=True)
train time: 0.167s
test time: 0.153s
f1-score: 0.761
dimensionality: 129792
density: 1.000000
The time to load the dataset isn't shown, but it took less than half a second; the input is a zip file containing the text. "Extracting features" includes tokenization and stop-word filtering. So all in all, I can load 18.8k documents and train a Naive Bayes classifier on 11k of them in five seconds, or an SVM in ten. That means solving a 20×130k-dimensional optimization problem.
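For context, the vectorize-then-train pipeline the example runs can be sketched roughly as follows. This is a minimal sketch assuming scikit-learn is installed; the tiny in-memory corpus and labels are made up for illustration and stand in for the 20 Newsgroups posts:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

# Made-up toy corpus standing in for the 20 Newsgroups posts.
docs = [
    "the gpu renders graphics fast",
    "my graphics card driver crashed",
    "the pitcher threw a fastball",
    "baseball season starts in spring",
]
labels = [0, 0, 1, 1]  # e.g. 0 = comp.graphics-like, 1 = rec.sport.baseball-like

# Tokenization and English stop-word filtering happen inside the vectorizer;
# the result is a sparse document-term matrix.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

clf = MultinomialNB(alpha=0.01)
clf.fit(X, labels)

# New documents must be transformed by the *same* fitted vectorizer,
# so they land in the same feature space.
X_new = vectorizer.transform(["new graphics driver released"])
print(clf.predict(X_new))
```

The real example uses the same pattern at scale: fit the sparse vectorizer on the training set, reuse it on the test set (hence the identical `n_features: 129792` in both), then fit each classifier on the resulting sparse matrix.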
I suggest you re-run this example on your own machine, since the actual time taken depends on many factors, including the speed of your disk.
[Disclaimer: I'm one of the scikit-learn developers.]