[Posted]: 2017-09-25 06:32:09
[Question]:
I am trying to use LDA from Spark 1.3.1 in Java and I get this error:
Error: application failed with exception
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.NumberFormatException: For input string: "��"
My .txt file is space-separated words, something like this: put weight find difficult pull ups push ups now blindness disease everything eyes work perfectly except being able to take in light use light to form images role model children dear memories childhood saddest memories
Here is the code:
import scala.Tuple2;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.*;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.mllib.clustering.LDA;
import org.apache.spark.mllib.clustering.LDAModel;
import org.apache.spark.mllib.linalg.Matrix;
import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.mllib.linalg.Vectors;

public class JavaLDA {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("LDA Example");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Load and parse the data
        String path = "/tutorial/input/askreddit20150801.txt";
        JavaRDD<String> data = sc.textFile(path);
        JavaRDD<Vector> parsedData = data.map(
            new Function<String, Vector>() {
                public Vector call(String s) {
                    String[] sarray = s.trim().split(" ");
                    double[] values = new double[sarray.length];
                    for (int i = 0; i < sarray.length; i++)
                        values[i] = Double.parseDouble(sarray[i]);
                    return Vectors.dense(values);
                }
            }
        );

        // Index documents with unique IDs
        JavaPairRDD<Long, Vector> corpus = JavaPairRDD.fromJavaRDD(parsedData.zipWithIndex().map(
            new Function<Tuple2<Vector, Long>, Tuple2<Long, Vector>>() {
                public Tuple2<Long, Vector> call(Tuple2<Vector, Long> doc_id) {
                    return doc_id.swap();
                }
            }
        ));
        corpus.cache();

        // Cluster the documents into topics using LDA
        LDAModel ldaModel = new LDA().setK(100).run(corpus);

        // Output topics. Each is a distribution over words (matching word count vectors)
        System.out.println("Learned topics (as distributions over vocab of " + ldaModel.vocabSize()
            + " words):");
        Matrix topics = ldaModel.topicsMatrix();
        for (int topic = 0; topic < 100; topic++) {
            System.out.print("Topic " + topic + ":");
            for (int word = 0; word < ldaModel.vocabSize(); word++) {
                System.out.print(" " + topics.apply(word, topic));
            }
            System.out.println();
        }

        ldaModel.save(sc.sc(), "myLDAModel");
    }
}
Does anyone know why this happens? This is my first attempt at LDA in Spark. Thanks.
[Discussion]:
- This has nothing to do with LDA! You are trying to convert strings into numbers. Check that!
- I took the code from here; the only thing I changed was DistributedLDAModel to LDAModel: spark.apache.org/docs/latest/mllib-clustering.html
- What do you mean? Have you actually read your error message? "java.lang.NumberFormatException: For input string: "��""
- I am telling you, that is not the problem. The problem is the input you are trying to parse.
- Oh, now I see what you mean. I thought you were saying I shouldn't convert the words into frequencies.
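The commenters' point, spelled out: `Double.parseDouble` fails because the file contains words, not numbers, and MLlib's LDA expects each document as a vector of word counts over a fixed vocabulary. A minimal sketch of that words-to-count-vector step, in plain Java without Spark and with hypothetical helper names (`buildVocab`, `toCounts`), might look like this:

```java
import java.util.*;

public class WordCountVectors {
    // Build a fixed vocabulary: each distinct word gets a column index,
    // in first-seen order across all documents.
    static Map<String, Integer> buildVocab(List<String> docs) {
        Map<String, Integer> vocab = new LinkedHashMap<>();
        for (String doc : docs)
            for (String w : doc.trim().split("\\s+"))
                vocab.putIfAbsent(w, vocab.size());
        return vocab;
    }

    // Turn one document line into a dense count vector over that vocabulary.
    static double[] toCounts(String doc, Map<String, Integer> vocab) {
        double[] counts = new double[vocab.size()];
        for (String w : doc.trim().split("\\s+")) {
            Integer idx = vocab.get(w);
            if (idx != null) counts[idx] += 1.0;
        }
        return counts;
    }

    public static void main(String[] args) {
        List<String> docs = Arrays.asList("cat sat mat", "cat cat dog");
        Map<String, Integer> vocab = buildVocab(docs);
        System.out.println(vocab);
        System.out.println(Arrays.toString(toCounts("cat cat dog", vocab)));
    }
}
```

In the question's Spark job, the idea would be to replace the `Double.parseDouble` loop inside `call(String s)` with a lookup like `toCounts` and return `Vectors.dense(counts)`; the vocabulary has to be built once over the whole corpus first (and, in a real cluster job, shared with the workers, e.g. via a broadcast variable).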
Tags: java apache-spark lda