【Posted】: 2018-03-06 09:43:56
【Problem description】:
I am trying to create a Hive table using Spark, and I get the following error -
org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree:
TungstenExchange hashpartitioning(rpt_prd#236,200), None
+- Sort [rpt_prd#236 ASC], true, 0
   +- ConvertToUnsafe
      +- Exchange rangepartitioning(rpt_prd#236 ASC,200), None
         +- ConvertToSafe
            +- TungstenAggregate(key=[rpt_prd#244,country_code#240,product_code#242], functions=[(count(1),mode=Final,isDistinct=false)], output=[rpt_prd#236,Dim_country_code#237,Dim_product_code#238,Dim_recordCount#239L])
               +- TungstenExchange hashpartitioning(rpt_prd#244,country_code#240,product_code#242,200), None
                  +- TungstenAggregate(key=[rpt_prd#244,country_code#240,product_code#242], functions=[(count(1),mode=Partial,isDistinct=false)], output=[rpt_prd#244,country_code#240,product_code#242,count#832L])
                     +- HiveTableScan [rpt_prd#244,country_code#240,product_code#242], MetastoreRelation gfrrtnsd_standardization, pln_arrg_dim, None, [(country_code#240 = HK)]
Please advise on how to create a Hive table from Spark.
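The code that triggers this is not shown in the question. Below is a minimal sketch of the kind of job implied by the plan, assuming Spark 1.x with a `HiveContext` (consistent with the `TungstenAggregate`/`TungstenExchange` operators in the trace). The source database, table, and column names are taken from the plan itself; the output table name and app name are hypothetical:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object CreateHiveTable {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("CreateHiveTable"))
    // HiveContext gives SQL access to tables registered in the Hive metastore.
    val hiveCtx = new HiveContext(sc)

    // Aggregate the source table (names taken from the plan in the error above).
    val counts = hiveCtx.sql(
      """SELECT rpt_prd, country_code, product_code, COUNT(1) AS recordCount
        |FROM gfrrtnsd_standardization.pln_arrg_dim
        |WHERE country_code = 'HK'
        |GROUP BY rpt_prd, country_code, product_code""".stripMargin)

    // saveAsTable writes the result out and registers it in the Hive metastore.
    // "pln_arrg_counts" is a hypothetical target table name.
    counts.write.saveAsTable("gfrrtnsd_standardization.pln_arrg_counts")

    sc.stop()
  }
}
```

Note that a `TreeNodeException: execute, tree:` usually wraps an underlying failure (for example a lost executor or a shuffle/memory problem); the actual root cause is normally further down in the full stack trace, below the plan shown here.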
【Discussion】:
Tags: scala apache-spark hiveql