【Question Title】: Why do all my Regressors show much lower Accuracy than all my Classifiers?
【Posted】: 2020-06-24 18:12:45
【Question】:

I am testing out some of the sample code below. All of the classification results are quite reasonable (80% or better). All of the regression results are terrible, and way off (around 20%). Why is that? I must be doing something wrong, but I can't see the problem here.

import pandas as pd
import numpy as np

#reading the dataset
df=pd.read_csv("C:\\my_path\\train.csv")

#filling missing values
df['Gender'].fillna('Male', inplace=True)

df = df.fillna(0)  # fillna returns a copy unless assigned back (or inplace=True)
df.Loan_Status.replace(('Y', 'N'), (1, 0), inplace=True)

#split dataset into train and test

from sklearn.model_selection import train_test_split
train, test = train_test_split(df, test_size=0.3, random_state=0)

x_train=train.drop(['Loan_Status','Loan_ID'],axis=1)
y_train=train['Loan_Status']

x_test=test.drop(['Loan_Status','Loan_ID'],axis=1)
y_test=test['Loan_Status']

#create dummies
x_train=pd.get_dummies(x_train)
x_test=pd.get_dummies(x_test)
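# Caveat: pd.get_dummies on train and test separately can produce mismatched
# columns if a category appears in only one of the splits. One possible
# safeguard is to align the two frames on the training columns:
# x_train, x_test = x_train.align(x_test, join='left', axis=1, fill_value=0)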


# Bagging Classifier
from sklearn.ensemble import BaggingClassifier
from sklearn import tree
model = BaggingClassifier(tree.DecisionTreeClassifier(random_state=1))
model.fit(x_train, y_train)
model.score(x_test, y_test)  # a classifier's score() is mean accuracy


# Bagging Regressor
from sklearn.ensemble import BaggingRegressor
model = BaggingRegressor(tree.DecisionTreeRegressor(random_state=1))
model.fit(x_train, y_train)
model.score(x_test, y_test)  # a regressor's score() is R^2, not accuracy


# AdaBoostClassifier
from sklearn.ensemble import AdaBoostClassifier
model = AdaBoostClassifier(random_state=1)
model.fit(x_train, y_train)
model.score(x_test,y_test)


# AdaBoostRegressor
from sklearn.ensemble import AdaBoostRegressor
model = AdaBoostRegressor()
model.fit(x_train, y_train)
model.score(x_test,y_test)


# GradientBoostingClassifier
from sklearn.ensemble import GradientBoostingClassifier
model= GradientBoostingClassifier(learning_rate=0.01,random_state=1)
model.fit(x_train, y_train)
model.score(x_test,y_test)

# GradientBoostingRegressor
from sklearn.ensemble import GradientBoostingRegressor
model= GradientBoostingRegressor()
model.fit(x_train, y_train)
model.score(x_test,y_test)


# XGBClassifier
import xgboost as xgb
model=xgb.XGBClassifier(random_state=1,learning_rate=0.01)
model.fit(x_train, y_train)
model.score(x_test,y_test)


# XGBRegressor
import xgboost as xgb
model=xgb.XGBRegressor()
model.fit(x_train, y_train)
model.score(x_test,y_test)

The sample data comes from the link below.

https://www.kaggle.com/wendykan/lending-club-loan-data

Finally, here is a small sample of what I am seeing.

# Bagging Regressor
from sklearn.ensemble import BaggingRegressor
regressor = BaggingRegressor()
regressor.fit(x_train,y_train)
accuracy = regressor.score(x_test,y_test)
print(accuracy*100,'%')
# result:
13.022388059701505 %

from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(x_train,y_train)
accuracy = regressor.score(x_test,y_test)
print(accuracy*100,'%')
# result:
29.836209522493196 %

【Question Discussion】:

    Tags: python python-3.x machine-learning regression classification


    【Solution 1】:

    Regression and classification are two different tasks. From your code, it looks like you are trying to fit regressors on the same data you use for the classifiers. Basically, a regressor tries to find the function that best guesses a numeric output from the inputs, so the target values should be numbers from a continuous space, not categories. For example, you might want to predict a borrower's income based on how much they borrowed.

    Check out this medium page to learn more about the difference between regression and classification.
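
    As a minimal sketch of the difference (using a synthetic dataset from sklearn's make_classification rather than the loan data in the question): a classifier's score() reports mean accuracy, while a regressor's score() reports the R² coefficient of determination, so the two numbers are not comparable even on the same 0/1 target.

    # Sketch: one binary target, two different meanings of score()
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

    X, y = make_classification(n_samples=500, random_state=1)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

    clf = DecisionTreeClassifier(random_state=1).fit(X_tr, y_tr)
    reg = DecisionTreeRegressor(random_state=1).fit(X_tr, y_tr)

    print(clf.score(X_te, y_te))  # mean accuracy: fraction of labels predicted exactly
    print(reg.score(X_te, y_te))  # R^2: variance explained by the predictions (can even be negative)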

    【Discussion】:

    • Ah, yes! You have to check accuracy differently for a classifier algorithm than for a regressor algorithm. I just updated my original post. I believe I am now looking at the regressors' accuracy correctly. Still, the numbers are way too low. Or, maybe a regression simply can't "learn" the dependent variable in this dataset, given the independent variables fed into the model. That's what I'm trying to figure out. (See the sketch after this list for the two ways of scoring a regressor's output.)
    • The problem isn't a technical detail like how accuracy is measured; it's the underlying principle. You can only classify 0s versus 1s; you can't build a regression to explain an increase from 0 to 1. You can only regress on a continuous dependent variable. Remember: binary dependent variable -> classification; continuous dependent variable -> regression.
    • Yes, yes, yes. I hadn't thought of it that way before, but now it makes sense. Thanks for sharing your insight!
    • @ASH If the answer resolved your issue, please accept it (see "What should I do when someone answers my question?").
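
    If you do want an accuracy-style number out of a regressor on this 0/1 target, a minimal sketch (assuming the fitted regressor and the train/test split from the post, with 0.5 as an assumed cutoff) is to threshold its continuous predictions into class labels before scoring:

    from sklearn.metrics import accuracy_score, r2_score

    y_pred = regressor.predict(x_test)  # continuous values such as 0.27, 0.81, ...
    print(r2_score(y_test, y_pred))  # what regressor.score(x_test, y_test) reports
    print(accuracy_score(y_test, (y_pred >= 0.5).astype(int)))  # accuracy after thresholding at 0.5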