[Posted at]: 2020-07-24 15:57:30
[Problem description]:
This question follows on from (Spark - creating schema programmatically with different data types).
I am trying to infer a schema from an RDD and build a DataFrame from it; my code is below:
def inferType(field: String) = field.split(":")(1) match {
  case "Integer"   => IntegerType
  case "Double"    => DoubleType
  case "String"    => StringType
  case "Timestamp" => TimestampType
  case "Date"      => DateType
  case "Long"      => LongType
  case _           => StringType
}
val header = "c1:String|c2:String|c3:Double|c4:Integer|c5:String|c6:Timestamp|c7:Long|c8:Date"
val df1 = Seq(("a|b|44.44|5|c|2018-01-01 01:00:00|456|2018-01-01")).toDF("data")
val rdd1 = df1.rdd.map(x => Row(x.getString(0).split("\\|"): _*))
val schema = StructType(header.split("\\|").map(column => StructField(column.split(":")(0), inferType(column), true)))
val df = spark.createDataFrame(rdd1, schema)
df.show()
When I run this, it throws the error below. I need to do this on much larger data and have not been able to find a proper solution. Could you please help me find a fix, or any other way to achieve this?
java.lang.RuntimeException: Error while encoding: java.lang.RuntimeException: java.lang.String is not a valid external type for schema of int
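The error suggests that the schema declares IntegerType, DoubleType, etc., while split() only ever produces Strings, so the Row fields do not match the declared external types. A minimal sketch of one way to address this is to convert each token to the JVM type the schema expects before building the Row; the names `castTo` and `typeNames` here are illustrative, not from the original code:

```scala
// Convert a raw String token to the JVM type implied by the header's type name,
// mirroring the inferType mapping above.
def castTo(value: String, typeName: String): Any = typeName match {
  case "Integer"   => value.toInt
  case "Double"    => value.toDouble
  case "Long"      => value.toLong
  case "Timestamp" => java.sql.Timestamp.valueOf(value)
  case "Date"      => java.sql.Date.valueOf(value)
  case _           => value // String and anything unrecognized stay as-is
}

val header = "c1:String|c2:String|c3:Double|c4:Integer|c5:String|c6:Timestamp|c7:Long|c8:Date"
val typeNames = header.split("\\|").map(_.split(":")(1))

// In the question's pipeline, this would replace the plain split when building rows:
// val rdd1 = df1.rdd.map(x =>
//   Row(x.getString(0).split("\\|").zip(typeNames).map { case (v, t) => castTo(v, t) }: _*))
```

With each field cast this way, the Row contents should line up with the StructType passed to createDataFrame.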
Thanks in advance.
[Discussion]:
Tags: scala dataframe apache-spark apache-spark-sql