【Posted】: 2020-10-19 08:52:12
【Problem description】:
I have the following code:
Dataset <Row> dataframe = dfjoin.select(when(df1.col("dateTracking_hour_minute")
.between(df.col("heureDebut"),df.col("heureFin")),
dfjoin.filter(col("acc_status").equalTo(0).and(col("acc_previous").equalTo(1)))));
When I run it, it throws this exception:
java.lang.RuntimeException: Unsupported literal type class org.apache.spark.sql.Dataset [ID_tracking: bigint, tracking_time: timestamp ... 109 more fields]
at org.apache.spark.sql.catalyst.expressions.Literal$.apply(literals.scala:78)
at org.apache.spark.sql.catalyst.expressions.Literal$.$anonfun$create$2(literals.scala:164)
at scala.util.Failure.getOrElse(Try.scala:222)
at org.apache.spark.sql.catalyst.expressions.Literal$.create(literals.scala:164)
at org.apache.spark.sql.functions$.typedLit(functions.scala:127)
at org.apache.spark.sql.functions$.lit(functions.scala:110)
at org.apache.spark.sql.functions$.when(functions.scala:1341)
at org.apache.spark.sql.functions.when(functions.scala)
at factory.Arret_Alert.check(Arret_Alert.java:44)
Any ideas?
Thanks
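Edit: the stack trace shows `functions.when` calling `lit(...)` on its second argument, which here is a whole `Dataset` (the result of `dfjoin.filter(...)`); `when` only accepts a `Column` or a literal value, hence the "Unsupported literal type class org.apache.spark.sql.Dataset" error. A minimal sketch of what I think the intent may be, combining all predicates in a single `filter` (untested; column names taken from the code above, and I am assuming the goal is to keep rows whose time falls in the window and whose status columns match):

```java
import static org.apache.spark.sql.functions.col;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

// Sketch: instead of passing a Dataset into when(), express everything
// as one boolean Column and filter on it.
Dataset<Row> result = dfjoin.filter(
        df1.col("dateTracking_hour_minute")
           .between(df.col("heureDebut"), df.col("heureFin"))
           .and(col("acc_status").equalTo(0))
           .and(col("acc_previous").equalTo(1)));
```

If a conditional column (rather than row filtering) is actually wanted, `when(condition, value).otherwise(other)` would need a `Column` or literal as `value`, not a `Dataset`.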
【Discussion】:
Tags: java mysql dataframe apache-spark