【Question Title】: RDD split gives missing parameter type
【Posted】: 2024-05-03 00:20:04
【Question】:

I am trying to split an RDD that was originally created from a DataFrame, and I don't understand why it fails.

I haven't written out every column name below, but the actual SQL includes all of them, so the SQL itself is not the problem.

val df = sql("SELECT col1, col2, col3,... from tableName")
val rddF = df.toJavaRDD

rddF.take(1)
res46: Array[org.apache.spark.sql.Row] = Array([2017-02-26,100102-AF,100134402,119855,1004445,0.0000,0.0000,-3.3,0.0000,0.0000,0.0000,0.0000,0.0000,0.0000,0.0000,0.0000,0.0000,0.0000,0.0000,0.0000,0.0000,0.0000,0.0000,0.0000,0.0000])

scala> rddF.map(x => x.split(","))
<console>:31: error: missing parameter type
       rddF.map(x => x.split(","))

Any ideas about this error? I am using Spark 2.2.0.
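One likely cause worth noting (a sketch, assuming the `df` from the query above): `df.toJavaRDD` returns a `JavaRDD[Row]`, whose `map` expects a Java `Function` interface rather than a Scala function, so the compiler cannot infer the lambda's parameter type. Using `df.rdd` yields a Scala `RDD[Row]`, where inference works; `split` still belongs to `String`, not `Row`, so the `Row` must be converted first:

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.Row

// df.rdd gives a Scala RDD[Row], so the lambda's parameter type is inferred
val rddRows: RDD[Row] = df.rdd

// Row has no split method; render it as a String first, then split compiles
val parts: RDD[Array[String]] =
  rddRows.map(row => row.mkString(","))  // Row -> "v1,v2,v3,..."
         .map(s => s.split(","))         // String -> Array[String]
```

Note that splitting on commas is fragile if any column value itself contains a comma; accessing fields through the `Row` API (as the answer below does) avoids that.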

【Comments】:

    Tags: hadoop apache-spark rdd


    【Solution 1】:

    rddF is an RDD of Row, as you can see from res46: Array[org.apache.spark.sql.Row]. split is a method on String, so you cannot call split on a Row.

    You can do the following instead:

    val df = sql("SELECT col1, col2, col3,... from tableName")
    val rddF = df.rdd

    rddF.map(x => (x.getAs[String]("col1"), x.getAs[String]("col2"), x.get(2)))
    
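    Expanding on the answer's approach, here is a sketch of typed field access on `Row` (the column names and `String` types are assumptions based on the question's query; `getAs` needs an explicit type parameter, otherwise Scala infers `Nothing` and the cast fails at runtime):

    ```scala
    import org.apache.spark.sql.Row

    val fields = rddF.map { (row: Row) =>
      val c1 = row.getAs[String]("col1")  // access by column name, with an explicit type
      val c2 = row.getAs[String]("col2")
      val c3 = row.get(2)                 // access by position, returned as Any
      (c1, c2, c3)
    }
    ```

    This keeps each column's value and type intact, which is safer than flattening the `Row` to a comma-joined `String` and re-splitting it.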

    【Discussion】:
