【Title】: Mapping cassandra row to parametrized type in Spark RDD
【Posted】: 2016-07-14 07:12:33
【Description】:

I am trying to map a Cassandra row to a parametrized type using the spark-cassandra-connector. I have been trying to define the mapping with an implicitly defined columnMapper, like so:

import scala.reflect.ClassTag

import com.datastax.spark.connector.mapper.JavaBeanColumnMapper
import com.datastax.spark.connector.rdd.CassandraTableScanRDD
import com.datastax.spark.connector.rdd.reader.RowReaderFactory

class Foo[T <: Bar : ClassTag : RowReaderFactory] {
  implicit object Mapper extends JavaBeanColumnMapper[T](
    Map("id" -> "id",
        "timestamp" -> "ts"))

  def doSomeStuff(operations: CassandraTableScanRDD[T]): Unit = {
    println("do some stuff here")
  }
}

However, I am running into the following error, which I believe is because I am passing in a RowReaderFactory and not correctly specifying the mapping for it. Any idea how to specify the mapping information for the RowReaderFactory?

Exception in thread "main" java.lang.IllegalArgumentException: Failed to map constructor parameter timestamp in Bar to a column of MyNamespace
    at com.datastax.spark.connector.mapper.DefaultColumnMapper$$anonfun$4$$anonfun$apply$1.apply(DefaultColumnMapper.scala:78)
    at com.datastax.spark.connector.mapper.DefaultColumnMapper$$anonfun$4$$anonfun$apply$1.apply(DefaultColumnMapper.scala:78)
    at scala.Option.getOrElse(Option.scala:120)
    at com.datastax.spark.connector.mapper.DefaultColumnMapper$$anonfun$4.apply(DefaultColumnMapper.scala:78)
    at com.datastax.spark.connector.mapper.DefaultColumnMapper$$anonfun$4.apply(DefaultColumnMapper.scala:76)
    at scala.collection.TraversableLike$WithFilter$$anonfun$map$2.apply(TraversableLike.scala:722)
    at scala.collection.immutable.List.foreach(List.scala:318)
    at scala.collection.TraversableLike$WithFilter.map(TraversableLike.scala:721)
    at com.datastax.spark.connector.mapper.DefaultColumnMapper.columnMapForReading(DefaultColumnMapper.scala:76)
    at com.datastax.spark.connector.rdd.reader.GettableDataToMappedTypeConverter.<init>(GettableDataToMappedTypeConverter.scala:56)
    at com.datastax.spark.connector.rdd.reader.ClassBasedRowReader.<init>(ClassBasedRowReader.scala:23)
    at com.datastax.spark.connector.rdd.reader.ClassBasedRowReaderFactory.rowReader(ClassBasedRowReader.scala:48)
    at com.datastax.spark.connector.rdd.reader.ClassBasedRowReaderFactory.rowReader(ClassBasedRowReader.scala:43)
    at com.datastax.spark.connector.rdd.CassandraTableRowReaderProvider$class.rowReader(CassandraTableRowReaderProvider.scala:48)
    at com.datastax.spark.connector.rdd.CassandraTableScanRDD.rowReader$lzycompute(CassandraTableScanRDD.scala:59)
    at com.datastax.spark.connector.rdd.CassandraTableScanRDD.rowReader(CassandraTableScanRDD.scala:59)
    at com.datastax.spark.connector.rdd.CassandraTableRowReaderProvider$class.verify(CassandraTableRowReaderProvider.scala:147)
    at com.datastax.spark.connector.rdd.CassandraTableScanRDD.verify(CassandraTableScanRDD.scala:59)
    at com.datastax.spark.connector.rdd.CassandraTableScanRDD.getPartitions(CassandraTableScanRDD.scala:143)

【Discussion】:

    Tags: scala apache-spark spark-cassandra-connector


    【Solution 1】:

    You can define the implicit in the companion object of Foo, as follows:

    object Foo {
      // The companion object has no access to the class's type parameter T,
      // so the mapper must re-introduce it as its own type parameter.
      implicit def mapper[T <: Bar]: JavaBeanColumnMapper[T] =
        new JavaBeanColumnMapper[T](
          Map("id" -> "id",
              "timestamp" -> "ts"))
    }
    

    Scala looks in the companion object of a class when it tries to find an implicit instance for that class. You can define the implicit in whatever scope needs it, if you have to, but you probably want to put it in the companion object so you do not have to repeat it everywhere it is required.
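    This companion-object lookup rule can be illustrated without Spark at all. A minimal, self-contained sketch (`TaggedMapper`, `Widget`, and `describe` are hypothetical stand-ins for `JavaBeanColumnMapper`, the mapped bean class, and the code that demands the implicit):

```scala
object Demo extends App {
  // Hypothetical stand-in for JavaBeanColumnMapper[T].
  trait TaggedMapper[T] { def columns: Map[String, String] }

  class Widget
  object Widget {
    // Found automatically whenever a TaggedMapper[Widget] is required,
    // because Widget's companion object is part of the implicit scope.
    implicit object Mapper extends TaggedMapper[Widget] {
      val columns = Map("id" -> "id", "timestamp" -> "ts")
    }
  }

  // Stand-in for any code with an implicit TaggedMapper requirement.
  def describe[T](implicit m: TaggedMapper[T]): Map[String, String] = m.columns

  // No import is needed at the call site: the compiler checks
  // Widget's companion object during implicit resolution.
  println(describe[Widget])
}
```

    The same mechanism is what lets the connector find a `JavaBeanColumnMapper[T]` placed in the companion object without any explicit import.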

    【Comments】:

      【Solution 2】:

      It turns out the columnMapper has to be created in the scope where the instance of Foo is created, not inside Foo itself.
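      A sketch of what that can look like at the call site, assuming Bar is the JavaBean class from the question and that the connector derives the RowReaderFactory evidence from the ColumnMapper implicit visible in this scope:

```scala
import com.datastax.spark.connector.mapper.JavaBeanColumnMapper

// Hypothetical call site: the mapper lives in the caller's scope, so it is
// in view when the compiler resolves Foo's context bounds for Bar.
implicit val barMapper: JavaBeanColumnMapper[Bar] =
  new JavaBeanColumnMapper[Bar](
    Map("id" -> "id",
        "timestamp" -> "ts"))

val foo = new Foo[Bar]  // barMapper is picked up from the enclosing scope
```

      In other words, the implicit only needs to be resolvable at the point where the generic type parameter is fixed to a concrete class.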

      【Comments】:
