【Posted】: 2021-02-04 00:33:41
【Question】:
As a learning exercise, I am trying to hand-code (in R) a "stack" (ensemble) of different machine learning models, with a binary response as the classification target. I use the popular "Sonar" dataset from R: I take some training data and fit both a random forest and an AdaBoost model on it. I then take the predicted probabilities from those two models and feed them to an xgboost model for the final prediction. For some reason this produces a training error of 0 for the stacked model, which can't be right.
Can anyone tell me what I'm doing wrong and how to fix it? My code is attached below.
library(mlbench)
library(randomForest)
library(ada)
library(xgboost)
library(caret)
data(Sonar)
index = createDataPartition(y=Sonar$Class, p=0.75, list=FALSE)
train_set = Sonar[index,]
test_set = Sonar[-index,]
########Fit Random Forest
model_rf = randomForest(Class~., train_set, mtry = 12, ntree=500, prob=TRUE)
model_rf
####### Fit ada model
model_ada = ada(train_set[,-61],train_set$Class, nu=0.01, iter = 100, type="discrete")
model_ada
######### Predict on train data
pred_train_rf = predict(model_rf,train_set[,-61], type="prob")
pred_train_ada = predict(model_ada,train_set[,-61], type="prob")
######### Append predicted probabilities for class "M" to the train set
train_set$pred_rf = pred_train_rf[,1]
train_set$pred_ada = pred_train_ada[,1]
############# Fit xgboost model on the predicted probabilities of earlier two models
data_matrix <- as.matrix(train_set[,c(62:63)])
output_vector = as.vector(ifelse(train_set$Class == "M",1,0))
model_xgboost <- xgboost(data = data_matrix, label = output_vector, max.depth = 2,
eta = 1, nthread = 2, nrounds = 10,objective = "binary:logistic")
#########################################
[1] train-error:0.000000
[2] train-error:0.000000
[3] train-error:0.000000
[4] train-error:0.000000
[5] train-error:0.000000
[6] train-error:0.000000
[7] train-error:0.000000
[8] train-error:0.000000
[9] train-error:0.000000
[10] train-error:0.000000
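(For context, a zero training error is actually expected with this setup: the random forest's in-sample probabilities on the training rows are essentially a copy of the labels, so the meta-model sees a leaked target. A minimal sketch of how one might check this, comparing in-sample predictions with the out-of-bag predictions that `predict.randomForest` returns when `newdata` is omitted; it reuses `model_rf` and `train_set` from the code above and uses columns 1:60 so the appended prediction columns are excluded:)

```r
# In-sample probabilities: each tree has effectively memorized its bootstrap
# sample, so these sit near 0 and 1 and separate the classes almost perfectly.
pred_insample = predict(model_rf, train_set[, 1:60], type = "prob")

# Out-of-bag probabilities: omitting newdata makes randomForest return OOB
# predictions, which are honest held-out estimates for each training row.
pred_oob = predict(model_rf, type = "prob")

summary(pred_insample[, "M"])  # expected: concentrated near 0 and 1
summary(pred_oob[, "M"])       # expected: noticeably more spread out
```

Stacking implementations typically feed the meta-model out-of-fold (or OOB) predictions for exactly this reason.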
Thanks.
【Comments】:
- Please make your example reproducible; `data(Sonar)` throws an error.
- @Robert Wilson: does it work now?
Tags: r machine-learning random-forest xgboost ensemble-learning