[Posted at]: 2016-06-25 19:03:11
[Question]:
I have just installed SparkR 1.6.1 on CentOS, without Hadoop. My code for modeling data with a discrete "TARGET" value is as follows:
# 'tr' is a R data frame with 104 numeric columns and one TARGET column
# TARGET column is either 0 or 1
# Convert 'tr' to spark data frame
train <- createDataFrame(sqlContext, tr)
# test is an R dataframe without TARGET column
# Convert 'test' to spark Data frame
te <- createDataFrame(sqlContext, test)
# Using sparkR's glm model to model data
model <- glm(TARGET ~ ., data = train, family = "binomial")
# Make predictions
predictions <- predict(model, newData = te)
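As a point of comparison only (this is plain base R, not SparkR, and the data frame here is a made-up toy example), with `stats::glm` the class probabilities of a binomial model come from `predict(..., type = "response")`, which returns plain numeric values rather than environments:

```r
# Base-R sketch (not SparkR): fit a logistic regression on a tiny
# hypothetical data frame and extract predicted probabilities.
df <- data.frame(x = c(1, 2, 3, 4, 5, 6),
                 TARGET = c(0, 0, 1, 0, 1, 1))

# Fit a binomial GLM, analogous in spirit to the SparkR call above
m <- glm(TARGET ~ x, data = df, family = "binomial")

# type = "response" gives P(TARGET = 1), a numeric vector in (0, 1)
probs <- predict(m, newdata = df, type = "response")
```

This is only meant to show the kind of numeric output being asked for; SparkR's `predict` has its own return format.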
I am able to evaluate success or failure as follows (I hope I am doing this correctly):
modelPrediction <- select(predictions, "prediction")
head(modelPrediction)
prediction
1 0
2 0
3 0
4 0
5 0
6 0
But when I try to evaluate the probability, I get results like the following:
modelPrediction <- select(predictions, "probability")
head(modelPrediction)
probability
1 <environment: 0x6188e1c0>
2 <environment: 0x61894b88>
3 <environment: 0x6189a620>
4 <environment: 0x618a00b8>
5 <environment: 0x618a5b50>
6 <environment: 0x618ac550>
Please help me obtain the probability values for the test events. Thank you.
[Discussion]:
- Please include the output of head(prediction)
Tags: sparkr