【Posted】: 2017-08-22 12:18:13
【Problem description】:
I get the data by joining two tables.
joinDataRdd.take(5).foreach(println)
(41234,((102921,249,2,109.94,54.97),(2014-04-04 00:00:00.0,3182,PENDING_PAYMENT)))
(65722,((164249,365,2,119.98,59.99),(2014-05-23 00:00:00.0,4077,COMPLETE)))
(65722,((164250,730,5,400.0,80.0),(2014-05-23 00:00:00.0,4077,COMPLETE)))
(65722,((164251,1004,1,399.98,399.98),(2014-05-23 00:00:00.0,4077,COMPLETE)))
(65722,((164252,627,5,199.95,39.99),(2014-05-23 00:00:00.0,4077,COMPLETE)))
When I try the following: val data = joinDataRdd.map(x=>(x._1,x._2._1.split(",")(3)))
it throws an error: value split is not a member of (String, String, String, String, String)
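The error message shows why: after the join, the value of each pair is a tuple of tuples, not a comma-separated String, so `split` does not exist on it. To take the 4th field, access the tuple element directly with `._4`. A minimal sketch using a plain Scala `Seq` in place of the RDD (the element type `(Int, ((String, String, String, String, String), (String, String, String)))` is an assumption inferred from the printed output and the error message):

```scala
// Stand-in for joinDataRdd: same shape as the printed records above.
// In Spark, the same map call works unchanged on the real RDD.
val joinData = Seq(
  (41234, (("102921", "249", "2", "109.94", "54.97"),
           ("2014-04-04 00:00:00.0", "3182", "PENDING_PAYMENT")))
)

// The value x._2._1 is already a 5-tuple, so use ._4 (1-based)
// instead of split(",")(3) to get the 4th field.
val data = joinData.map(x => (x._1, x._2._1._4))

data.foreach(println)  // (41234,109.94)
```

Note that tuple accessors in Scala are 1-based (`._1` through `._5`), while `split(",")(3)` would have been 0-based, so the 4th field is `._4`.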
【Discussion】:
Tags: scala apache-spark apache-spark-sql spark-dataframe