First, if your data size is reasonable you can try a grid search, and since you are clearly working with text, consider the following example:
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split, GridSearchCV

def main():
    pipeline = Pipeline([
        ('vect', TfidfVectorizer(ngram_range=(2, 2), min_df=1)),
        ('clf', SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
                    degree=3, gamma=1e-3, kernel='rbf', max_iter=-1,
                    probability=False, random_state=None, shrinking=True,
                    tol=0.001, verbose=False)),
    ])
    # Grid of hyperparameters to search; keys use the
    # <step_name>__<param_name> convention of Pipeline.
    parameters = {
        'vect__max_df': (0.25, 0.5),
        'vect__use_idf': (True, False),
        'clf__C': [1, 10, 100, 1000],
    }
    # X holds the raw text documents, Y the labels;
    # to_numpy() replaces the removed DataFrame.as_matrix().
    X, y = X, Y.to_numpy()
    X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.5)
    grid_search = GridSearchCV(pipeline, parameters, n_jobs=-1,
                               verbose=1, scoring='accuracy')
    grid_search.fit(X_train, y_train)
    print('Best score: %0.3f' % grid_search.best_score_)
    print('Best parameters set:')
    best_parameters = grid_search.best_estimator_.get_params()
    for param_name in sorted(parameters.keys()):
        print('\t%s: %r' % (param_name, best_parameters[param_name]))

if __name__ == '__main__':
    main()
Note that I vectorized my data (text) with tf-idf. The scikit-learn project also implements RandomizedSearchCV. Finally, there are other interesting tools, such as the TPOT project, which uses genetic programming. Hope this helps!
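For larger grids, RandomizedSearchCV (mentioned above) samples a fixed number of parameter combinations instead of exhaustively trying all of them. Here is a minimal, self-contained sketch using the same kind of tf-idf + SVC pipeline; the tiny toy corpus and the parameter distributions are my own illustrative assumptions, not from the original question.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import loguniform

# Toy corpus for illustration only.
texts = ["good movie", "bad movie", "great film", "terrible film",
         "awful plot", "wonderful acting", "boring scenes", "excellent cast"]
labels = [1, 0, 1, 0, 0, 1, 0, 1]

pipeline = Pipeline([
    ('vect', TfidfVectorizer()),
    ('clf', SVC(kernel='rbf')),
])

# Unlike a grid, distributions are sampled, so a continuous
# range for C is allowed here.
param_distributions = {
    'vect__use_idf': [True, False],
    'clf__C': loguniform(1, 1000),
}

search = RandomizedSearchCV(pipeline, param_distributions,
                            n_iter=5, cv=2, random_state=0)
search.fit(texts, labels)
print('Best score: %0.3f' % search.best_score_)
print('Best parameters:', search.best_params_)
```

`n_iter` caps the total number of sampled configurations, which is what makes this cheaper than a full grid search when the grid is large.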