【Posted at】: 2018-11-18 14:40:31
【Problem description】:
Consider this simple example:
library(sparklyr)
library(dplyr)

# assuming a local Spark connection
sc <- spark_connect(master = "local")

dtrain <- data_frame(text = c("Chinese Beijing Chinese",
                              "Chinese Chinese Shanghai",
                              "Chinese Macao",
                              "Tokyo Japan Chinese"),
                     doc_id = 1:4,
                     class = c(1, 1, 1, 0))
dtrain_spark <- copy_to(sc, dtrain, overwrite = TRUE)
> dtrain_spark
# Source: table<dtrain> [?? x 3]
# Database: spark_connection
text doc_id class
<chr> <int> <dbl>
1 Chinese Beijing Chinese 1 1
2 Chinese Chinese Shanghai 2 1
3 Chinese Macao 3 1
4 Tokyo Japan Chinese 4 0
I can easily train a decision_tree_classifier with the following pipeline:
pipeline <- ml_pipeline(
  ft_tokenizer(sc, input_col = "text", output_col = "tokens"),
  ft_count_vectorizer(sc, input_col = "tokens", output_col = "myvocab"),
  ml_decision_tree_classifier(sc, label_col = "class",
                              features_col = "myvocab",
                              prediction_col = "pcol",
                              probability_col = "prcol",
                              raw_prediction_col = "rpcol")
)
model <- ml_fit(pipeline, dtrain_spark)
The problem is that I cannot extract the feature_importances in a meaningful way.
Running
> ml_stage(model, 'decision_tree_classifier')$feature_importances
[1] 0 0 1 0 0 0
But what I want are the tokens! In my real-life example I have many thousands of features, and in that representation it is impossible to make sense of anything.
Is there a way to recover the tokens from the matrix representation above?
Thanks!
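For reference, pairing the importance vector with the fitted CountVectorizer's vocabulary would be plain R once both vectors are extracted. This is a sketch, not a verified answer: the stage name `"count_vectorizer"`, the `$vocabulary` accessor, and the stand-in vocabulary ordering below are assumptions.

```r
# Hypothetically, the two vectors would come from the fitted pipeline, e.g.:
#   vocab <- ml_stage(model, "count_vectorizer")$vocabulary
#   imp   <- ml_stage(model, "decision_tree_classifier")$feature_importances
# Stand-in values (the real vocabulary order depends on term frequencies):
vocab <- c("chinese", "beijing", "shanghai", "macao", "tokyo", "japan")
imp   <- c(0, 0, 1, 0, 0, 0)

# The i-th importance corresponds to the i-th vocabulary entry,
# so naming the importance vector with the vocabulary aligns them
named_imp <- sort(setNames(imp, vocab), decreasing = TRUE)
named_imp
```

Sorting by importance puts the most informative tokens first, which stays readable even with thousands of vocabulary entries.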
【Discussion】:
Tags: r apache-spark apache-spark-mllib apache-spark-ml sparklyr