[Posted]: 2016-03-15 03:54:44
[Question]:
I want to replace my updateStateByKey function with the mapWithState function (Spark 1.6) to improve the performance of my program.
I am following these two documents:
https://databricks.com/blog/2016/02/01/faster-stateful-stream-processing-in-spark-streaming.html
https://docs.cloud.databricks.com/docs/spark/1.6/index.html#examples/Streaming%20mapWithState.html
But I get the error scala.MatchError: [Ljava.lang.Object;
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 71.0 failed 4 times, most recent failure: Lost task 0.3 in stage 71.0 (TID 88, ttsv-lab-vmdb-01.englab.juniper.net): scala.MatchError: [Ljava.lang.Object;@eaf8bc8 (of class [Ljava.lang.Object;)
at HbaseCovrageStream$$anonfun$HbaseCovrageStream$$tracketStateFunc$1$3.apply(HbaseCoverageStream_mapwithstate.scala:84)
at HbaseCovrageStream$$anonfun$HbaseCovrageStream$$tracketStateFunc$1$3.apply(HbaseCoverageStream_mapwithstate.scala:84)
at scala.Option.flatMap(Option.scala:170)
at HbaseCovrageStream$.HbaseCovrageStream$$tracketStateFunc$1(HbaseCoverageStream_mapwithstate.scala:84)
Reference code:
def trackStateFunc(key: String, value: Option[Array[Long]], current: State[Seq[Array[Long]]]): Option[Array[Long]] = {
  /* adding the current state to the previous state */
  val res = value.map(x => x +: current.getOption().get).orElse(current.getOption())
  current.update(res.get)
  res.flatMap {
    case as: Seq[Array[Long]] => Try(as.map(BDV(_)).reduce(_ + _).toArray).toOption // throws the match error
  }
}
val statespec: StateSpec[String, Array[Long], Array[Long], Option[Array[Long]]] =
  StateSpec.function(trackStateFunc _)
val state: MapWithStateDStream[String, Array[Long], Array[Long], Option[Array[Long]]] =
  parsedStream.mapWithState(statespec)
My previous, working code using the updateStateByKey function:
val state: DStream[(String, Array[Long])] = parsedStream.updateStateByKey(
  (current: Seq[Array[Long]], prev: Option[Array[Long]]) => {
    prev.map(_ +: current).orElse(Some(current))
      .flatMap(as => Try(as.map(BDV(_)).reduce(_ + _).toArray).toOption)
  })
[Comments]:
- It seems that at runtime your object is being matched as an AnyRef. Try adding log statements to see the actual runtime type.
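Following the comment's suggestion, here is a minimal plain-Scala sketch (no Spark required; the object and method names are hypothetical) that logs the runtime class of a value and shows why an Array does not satisfy a Seq pattern — which is consistent with the [Ljava.lang.Object; MatchError above:

```scala
// Sketch: a Scala Array is NOT a Seq, so a pattern such as
// `case as: Seq[Array[Long]] => ...` throws scala.MatchError when the
// value at runtime is actually an Array. Printing getClass.getName is a
// quick way to confirm what type you really have.
object RuntimeTypeCheck {
  def describe(v: Any): String = v match {
    case s: Seq[_]      => "Seq"           // List, Vector, WrappedArray, ...
    case a: Array[Long] => "Array[Long]"   // arrays need their own case
    case other          => other.getClass.getName
  }
}
```

For example, `RuntimeTypeCheck.describe(Array(1L, 2L))` yields "Array[Long]", while `RuntimeTypeCheck.describe(Seq(Array(1L)))` yields "Seq" — so a match block covering only the Seq case fails on an Array input.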
Tags: scala apache-spark spark-streaming