【Posted at】: 2019-02-21 15:24:08
【Problem description】:
I am trying to compute summary statistics with Spark, as described here: https://spark.apache.org/docs/1.1.0/mllib-statistics.html
From the Spark documentation:
import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.mllib.stat.{MultivariateStatisticalSummary, Statistics}
import org.apache.spark.mllib.linalg.DenseVector
val observations: RDD[Vector] = ... // an RDD of Vectors
// Compute column summary statistics.
val summary: MultivariateStatisticalSummary = Statistics.colStats(observations)
I am having trouble constructing the observations: RDD[Vector] object. I tried:
scala> val data:Array[Double] = Array(1, 2, 3, 4, 5)
data: Array[Double] = Array(1.0, 2.0, 3.0, 4.0, 5.0)
scala> val v = new DenseVector(data)
v: org.apache.spark.mllib.linalg.DenseVector = [1.0,2.0,3.0,4.0,5.0]
scala> val observations = sc.parallelize(Array(v))
observations: org.apache.spark.rdd.RDD[org.apache.spark.mllib.linalg.DenseVector] = ParallelCollectionRDD[3] at parallelize at <console>:19
scala> val summary: MultivariateStatisticalSummary = Statistics.colStats(observations)
<console>:21: error: type mismatch;
found : org.apache.spark.rdd.RDD[org.apache.spark.mllib.linalg.DenseVector]
required: org.apache.spark.rdd.RDD[org.apache.spark.mllib.linalg.Vector]
Note: org.apache.spark.mllib.linalg.DenseVector <: org.apache.spark.mllib.linalg.Vector, but class RDD is invariant in type T.
You may wish to define T as +T instead. (SLS 4.5)
val summary: MultivariateStatisticalSummary = Statistics.colStats(observations)
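As the compiler note says, RDD[T] is invariant in T, so an RDD[DenseVector] is not an RDD[Vector]. A minimal sketch of the fix I would expect (Spark shell, same mllib imports as above; `sc` is the shell's SparkContext) is to pin the element type to Vector before parallelizing:

```scala
import org.apache.spark.mllib.linalg.{DenseVector, Vector}

val data: Array[Double] = Array(1, 2, 3, 4, 5)
val v: Vector = new DenseVector(data)      // static type Vector, not DenseVector
val observations = sc.parallelize(Seq(v)) // inferred as RDD[Vector]
```

Equivalently, `sc.parallelize(Array[Vector](v))` annotates the collection's element type instead of the value's.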
Questions:
1) How should I convert a DenseVector to a Vector?
2) In the real program, instead of an array of doubles, I need statistics on a collection obtained from an RDD via:
def countByKey(): Map[K, Long]
//Count the number of elements for each key, and return the result to the master as a Map.
So I would have to do:
myRdd.countByKey().values.map(_.toDouble)
This does not make much sense: instead of working with RDDs, I am now forced to use regular Scala collections, which at some point may no longer fit in memory. All the advantages of Spark's distributed computation are lost.
How can I solve this in a scalable way?
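One hedged sketch of what "scalable" could look like here: replace countByKey (which collects to the driver) with reduceByKey, so the counts stay in an RDD, and wrap each count in a one-element vector for colStats. The pair RDD `myRdd` is an assumption (e.g. RDD[(String, Int)]); the rest is standard mllib API:

```scala
import org.apache.spark.mllib.linalg.{Vector, Vectors}
import org.apache.spark.mllib.stat.Statistics
import org.apache.spark.rdd.RDD

// Count per key without returning a Map to the master.
val perKeyCounts: RDD[(String, Long)] =
  myRdd.map { case (k, _) => (k, 1L) }.reduceByKey(_ + _)

// One observation per key: a 1-dimensional vector holding the count.
val observations: RDD[Vector] =
  perKeyCounts.map { case (_, cnt) => Vectors.dense(cnt.toDouble) }

val summary = Statistics.colStats(observations) // computed distributedly
```

The counts are never materialized on the driver; only the final summary is.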
Update
In my case I have:
val cnts: org.apache.spark.rdd.RDD[Int] = prodCntByCity.map(_._2) // get product counts only
val doubleCnts: org.apache.spark.rdd.RDD[Double] = cnts.map(_.toDouble)
How do I convert doubleCnts into observations: RDD[Vector]?
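A sketch of one way, assuming each count should be a single-column observation: map each Double to a 1-element dense vector with the mllib Vectors factory.

```scala
import org.apache.spark.mllib.linalg.{Vector, Vectors}
import org.apache.spark.mllib.stat.Statistics
import org.apache.spark.rdd.RDD

val observations: RDD[Vector] = doubleCnts.map(d => Vectors.dense(d))
val summary = Statistics.colStats(observations) // mean/variance of the one column
```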
【Question comments】:
Tags: scala apache-spark