[Posted]: 2016-03-11 11:09:34
[Question]:
My classification model's accuracy is very low. Even a K-Nearest Neighbors model with n_neighbors=1 makes many mistakes. The logistic regression model has the highest accuracy, but it simply predicts 0 for every sample. I'm new to ML and trying to figure out what I'm doing wrong. How can I improve the models?
Input:
# imports this script needs
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn import metrics

# load the CSV file as a NumPy matrix
dataset = np.loadtxt(raw_data, delimiter=",")
target = np.loadtxt(target_data, delimiter=",")
# separate the features from the target attribute
X = dataset[:,0:6]
y = target[:]
print X.shape
print y.shape
#print X
#print y
knn = KNeighborsClassifier(n_neighbors=1)
print knn
knn.fit(X,y)
result = knn.predict(X)
print metrics.accuracy_score(y, result)
knn = KNeighborsClassifier(n_neighbors=5)
print knn
knn.fit(X,y)
result = knn.predict(X)
print metrics.accuracy_score(y, result)
logreg = LogisticRegression()
print logreg
logreg.fit(X, y)
result = logreg.predict(X)
# every prediction is 0
print metrics.accuracy_score(y, result)
Output:
tshelley@tshelley-Ubuntu:~/Dev/Enterprise-Project$ python loadcsv.py
(700, 6)
(700,)
KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
metric_params=None, n_jobs=1, n_neighbors=1, p=2,
weights='uniform')
0.674285714286
KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
metric_params=None, n_jobs=1, n_neighbors=5, p=2,
weights='uniform')
0.675714285714
LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
intercept_scaling=1, max_iter=100, multi_class='ovr', n_jobs=1,
penalty='l2', random_state=None, solver='liblinear', tol=0.0001,
verbose=0, warm_start=False)
0.72
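Note that every score above is computed on the same data the model was fit on, so it measures memorization rather than generalization (and for 1-NN it mostly detects duplicate rows with conflicting labels). A sounder evaluation uses cross-validation and scales the features before KNN, since KNN's distance metric is dominated by whichever feature has the largest range. The sketch below uses synthetic data from `make_classification` as a stand-in for the asker's 700x6 CSV, which is not available here:

```python
# Minimal sketch: cross-validated scores with feature scaling.
# The synthetic X, y stand in for the asker's real dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

# hypothetical data with the same shape as the question's (700, 6) matrix
X, y = make_classification(n_samples=700, n_features=6, random_state=0)

# StandardScaler inside a pipeline is fit only on each training fold,
# so the cross-validation scores stay honest
knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
logreg = make_pipeline(StandardScaler(), LogisticRegression())

for name, model in [("knn", knn), ("logreg", logreg)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(name, scores.mean())
```

If the cross-validated logistic regression still predicts only 0, the classes are likely imbalanced (an all-zeros predictor already scores 0.72 here, matching the output above), and `class_weight='balanced'` or a different metric than accuracy would be worth trying.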
[Discussion]:
Tags: python scikit-learn classification logistic-regression nearest-neighbor