[Posted on]: 2019-03-06 22:09:08
[Question]:
I'd like to know whether it is possible to write SimpleFeatures to Cassandra from within a Spark context. I am trying to map my data into SimpleFeatures in a Spark RDD, but I am running into problems. The createFeature() function called below works fine in a standalone unit test, and another unit test that calls it successfully writes the SimpleFeature it produces to Cassandra through the GeoMesa API:
import org.locationtech.geomesa.spark.GeoMesaSparkKryoRegistrator
. . .
private val sparkConf = new SparkConf(true)
  .set("spark.cassandra.connection.host", "localhost")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.kryo.registrator", classOf[GeoMesaSparkKryoRegistrator].getName)
  .setAppName(appName)
  .setMaster(master)
. . .
val rowsRDD = processedRDD.map(r => {
  ...
  println("** NAME VALUE MAP **")
  for ((k, v) <- featureNamesValues) printf("key: %s, value: %s\n", k, v)
  val feature = MyGeoMesaManager.createFeature(featureTypeConfig.asJava, featureNamesValues.asJava)
  feature
})
rowsRDD.print()
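For reference, createFeature() builds the feature from a type config map and a name/value map. Its real implementation isn't shown here, but a minimal sketch along these lines, using the standard GeoTools builders, captures what it does (the body below is my illustration, not the actual code):

import java.util.{Map => JMap}
import org.geotools.feature.simple.{SimpleFeatureBuilder, SimpleFeatureTypeBuilder}
import org.opengis.feature.simple.SimpleFeature
import scala.collection.JavaConverters._

// Illustrative reconstruction only: build a SimpleFeatureType from the
// config map, then populate a SimpleFeature from the name/value map.
def createFeature(typeConfig: JMap[String, String],
                  namesValues: JMap[String, AnyRef]): SimpleFeature = {
  val typeBuilder = new SimpleFeatureTypeBuilder()
  typeBuilder.setName("myfeature")
  typeConfig.asScala.foreach { case (name, binding) =>
    typeBuilder.add(name, Class.forName(binding)) // attribute name -> Java class binding
  }
  val sft = typeBuilder.buildFeatureType()

  val featureBuilder = new SimpleFeatureBuilder(sft)
  namesValues.asScala.foreach { case (name, value) => featureBuilder.set(name, value) }
  featureBuilder.buildFeature(null) // null asks GeoTools to generate a feature id
}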
However, now that I am calling the function inside the RDD's map() in a Spark context, Spark's partitioning forces the results to be serialized, which fails with a serialization error on SimpleFeatureImpl:
18/02/12 08:00:46 ERROR Executor: Exception in task 0.0 in stage 19.0 (TID 9)
java.io.NotSerializableException: org.geotools.feature.simple.SimpleFeatureImpl
Serialization stack:
- object not serializable (class: org.geotools.feature.simple.SimpleFeatureImpl, value: SimpleFeatureImpl:myfeature=[SimpleFeatureImpl.Attribute: . . ., SimpleFeatureImpl.Attribute: . . .])
- element of array (index: 0)
- array (class [Lorg.opengis.feature.simple.SimpleFeature;, size 4)
OK, so then I added the Kryo dependency mentioned on the GeoMesa Spark Core page to mitigate this, but now I get a NoClassDefFoundError on the GeoMesaSparkKryoRegistrator class when the map function executes, even though, as you can see above, the geomesa-spark-core dependency is on the classpath and I can import the class:
18/02/12 08:08:37 ERROR Executor: Exception in task 0.0 in stage 26.0 (TID 11)
java.lang.NoClassDefFoundError: Could not initialize class org.locationtech.geomesa.spark.GeoMesaSparkKryoRegistrator$
at org.locationtech.geomesa.spark.GeoMesaSparkKryoRegistrator$$anon$1.write(GeoMesaSparkKryoRegistrator.scala:36)
at org.locationtech.geomesa.spark.GeoMesaSparkKryoRegistrator$$anon$1.write(GeoMesaSparkKryoRegistrator.scala:32)
at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:568)
at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:318)
at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:293)
at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:568)
at org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:315)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:383)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Finally, I tried adding the com.esotericsoftware.kryo dependency to the classpath as well, but I got the same error.
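For completeness, the relevant part of my build looks roughly like this (a build.sbt sketch; the GeoMesa artifact names are the published ones, but the version numbers are placeholders, not a tested combination):

// build.sbt sketch -- versions are placeholders for whatever matches
// your Spark / Scala / GeoMesa install.
libraryDependencies ++= Seq(
  "org.locationtech.geomesa" %% "geomesa-spark-core" % "2.0.0",
  "org.locationtech.geomesa" %% "geomesa-cassandra-datastore" % "2.0.0",
  "com.esotericsoftware" % "kryo" % "4.0.2"
)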
Is what I am trying to do even possible with GeoMesa, Spark, and Cassandra? It feels like I am on the one-yard line but can't quite punch it in.
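One direction I am considering, if anyone can confirm it: build and write the features entirely inside foreachPartition, so that no SimpleFeature ever has to cross a Spark serialization boundary. A minimal sketch, assuming the GeoMesa Cassandra data store is on the classpath (the connection parameter keys and attribute names below are guesses to illustrate the shape; check the GeoMesa Cassandra docs for the real ones):

import org.geotools.data.{DataStoreFinder, Transaction}
import scala.collection.JavaConverters._

processedRDD.foreachPartition { rows =>
  // Hypothetical connection parameters; the actual keys come from the
  // GeoMesa Cassandra data store documentation.
  val params = Map(
    "cassandra.contact.point" -> "localhost:9042",
    "cassandra.keyspace"      -> "mykeyspace",
    "cassandra.catalog"       -> "mycatalog"
  ).asJava

  val ds = DataStoreFinder.getDataStore(params)
  val writer = ds.getFeatureWriterAppend("myfeature", Transaction.AUTO_COMMIT)
  try {
    rows.foreach { r =>
      val feature = writer.next() // fresh SimpleFeature managed by the writer
      // feature.setAttribute("name", r.name) // copy row values; attribute names are hypothetical
      writer.write()
    }
  } finally {
    writer.close()
    ds.dispose()
  }
}

Since the SimpleFeatures are created and written on the same executor, nothing in this version needs Kryo registration at all.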
Tags: apache-spark geomesa