[Question Title]: Do we need Lzocodec for groupBy function in Scala Spark?
[Posted]: 2018-02-02 02:38:05
[Question]:

I wrote a function in Scala Spark that looks like this.

def prepareSequences(data: RDD[String], splitChar: Char = '\t') = {
  val x = data.map { line =>
    val Array(id, se, offset, hour) = line.split(splitChar)
    (id + "-" + se,
     Step(offset = if (offset == "NULL") -5 else offset.toInt,
          hour = hour.toInt))
  }

  val y = x.groupBy(_._1)
}
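
(Step is not shown above; judging from the named arguments it is presumably something along the lines of the following.)

    // Assumed shape, inferred from the call site above; the real definition
    // lives elsewhere in the project.
    case class Step(offset: Int, hour: Int)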

I need the groupBy, but as soon as I add it I get an error. The error asks for LzoCodec:

        Exception in thread "main" java.lang.RuntimeException: Error in configuring object
    at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:112)
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:78)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
    at org.apache.spark.rdd.HadoopRDD.getInputFormat(HadoopRDD.scala:188)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:201)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
    at org.apache.spark.Partitioner$$anonfun$defaultPartitioner$2.apply(Partitioner.scala:66)
    at org.apache.spark.Partitioner$$anonfun$defaultPartitioner$2.apply(Partitioner.scala:66)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.immutable.List.foreach(List.scala:381)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
    at scala.collection.immutable.List.map(List.scala:285)
    at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:66)
    at org.apache.spark.rdd.RDD$$anonfun$groupBy$1.apply(RDD.scala:687)
    at org.apache.spark.rdd.RDD$$anonfun$groupBy$1.apply(RDD.scala:687)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
    at org.apache.spark.rdd.RDD.groupBy(RDD.scala:686)
    at com.savagebeast.mypackage.DataPreprocessing$.prepareSequences(DataPreprocessing.scala:42)
    at com.savagebeast.mypackage.activity_mapper$.main(activity_mapper.scala:31)
    at com.savagebeast.mypackage.activity_mapper.main(activity_mapper.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
    Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
    ... 44 more
    Caused by: java.lang.IllegalArgumentException: Compression codec com.hadoop.compression.lzo.LzoCodec not found.
    at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:139)
    at org.apache.hadoop.io.compress.CompressionCodecFactory.<init>(CompressionCodecFactory.java:180)
    at org.apache.hadoop.mapred.TextInputFormat.configure(TextInputFormat.java:45)
    ... 49 more
    Caused by: java.lang.ClassNotFoundException: Class com.hadoop.compression.lzo.LzoCodec not found
    at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2101)
    at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:132)
    ... 51 more

I have installed lzo and the other required pieces following Class com.hadoop.compression.lzo.LzoCodec not found for Spark on CDH 5?

Am I missing something?

UPDATE: Found a solution.

Partitioning the RDD like this solved the problem.

val y = x.groupByKey(50)

50 is the number of partitions I want for the resulting RDD. It can be any number.

However, I am not sure why this works. If anyone can explain it, that would be much appreciated.
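
For reference, passing a partition count to groupByKey is shorthand for supplying a HashPartitioner explicitly (standard PairRDDFunctions API); a sketch of the same workaround:

    import org.apache.spark.HashPartitioner

    // Equivalent to x.groupByKey(50). My guess (unverified): with the partitioner
    // given up front, Spark does not call Partitioner.defaultPartitioner, the frame
    // in the stack trace above that walks the upstream RDD's partitions.
    val y = x.groupByKey(new HashPartitioner(50))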

UPDATE-2: The following works, is more reasonable, and has been stable so far.

I copied hadoop-lzo-0.4.21-SNAPSHOT.jar from /Users/<username>/hadoop-lzo/target to /usr/local/Cellar/apache-spark/2.1.0/libexec/jars. Essentially this puts the jar on Spark's classpath.
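
One way to sanity-check that the copied jar is actually visible to the driver (a quick sketch you could run from spark-shell):

    // Throws ClassNotFoundException if the LZO codec jar is not on the classpath.
    Class.forName("com.hadoop.compression.lzo.LzoCodec")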

[Question Comments]:

    Tags: scala hadoop apache-spark


    [Solution 1]:

    No, groupBy does not need it. If you look at the stack trace (kudos for posting it), you can see it fails somewhere in the input format:

    at org.apache.hadoop.mapred.TextInputFormat.configure(TextInputFormat.java:45)
    

    This indicates that your input is compressed. It fails when you call groupBy because at that point Spark has to decide the number of partitions and therefore touch the input.
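
    To see that groupBy is only the trigger, here is a small sketch (the input path is a placeholder, and it assumes data came from sc.textFile): anything that forces Spark to compute the RDD's partitions goes through TextInputFormat.configure and hence the codec lookup.

        // Sketch only; "/path/to/input" stands in for the question's actual input.
        val data = sc.textFile("/path/to/input")

        // Both of these force HadoopRDD.getPartitions, which configures the
        // TextInputFormat and looks up every codec listed in io.compression.codecs,
        // com.hadoop.compression.lzo.LzoCodec included:
        data.partitions.length   // fails with the same "LzoCodec not found" error
        data.groupBy(identity)   // groupBy -> defaultPartitioner -> partitions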

    In practice - yes, it does look like you need the lzo codec to run your job.

    [Discussion]:

    • Thanks! Does data.map cause the compression? I'm new to Scala. I did a similar operation on a different problem earlier and it worked fine. I'm not sure which part here causes the compression, and whether it can be fixed so that groupBy works.
    • I was able to work around it (added to the question). Do you know why that works? Thanks