【Posted at】: 2026-01-20 00:55:01
【Question】:
I have a query that fails in the reduce phase, throwing this error:
Error: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask (state=08S01,code=2)
However, digging into the YARN logs, I found this:
Error: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row (tag=0) {"key":{"reducesinkkey0":"2020-05-05","reducesinkkey1":10039,"reducesinkkey2":103,"reducesinkkey3":"2020-05-05","reducesinkkey4":10039,"reducesinkkey5":103},"value":{"_col0":103,"_col1":["1","2"]}}
	at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:265)
	at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:444)
	at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row (tag=0) {"key":{"reducesinkkey0":"2020-05-05","reducesinkkey1":10039,"reducesinkkey2":103,"reducesinkkey3":"2020-05-05","reducesinkkey4":10039,"reducesinkkey5":103},"value":{"_col0":103,"_col1":["1","2"]}}
	at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:253)
	... 7 more
Caused by: java.lang.ClassCastException: java.util.ArrayList cannot be cast to org.apache.hadoop.io.Text
The most relevant part is:
java.util.ArrayList cannot be cast to org.apache.hadoop.io.Text
Here is the query I am running (FYI: this is a subquery within a larger query):
SELECT
yyyy_mm_dd,
h_id,
MAX(CASE WHEN rn=1 THEN prov_id ELSE NULL END) OVER (partition by yyyy_mm_dd, h_id) as primary_prov,
collect_set(api) OVER (partition by yyyy_mm_dd, h_id, prov_id) prov_id_api, --re-assemble array to include all elements from multiple initial arrays if there are different arrays per prov_id
prov_id
FROM(
SELECT --get "primary prov" (first element in ascending array)
yyyy_mm_dd,
h_id,
prov_id,
api,
ROW_NUMBER() OVER(PARTITION BY yyyy_mm_dd, h_id ORDER BY api) rn
FROM(
SELECT --explode array to get data at row level
t.yyyy_mm_dd,
t.h_id,
prov_id,
collect_set(--array of integers, use set to remove duplicates
CASE
WHEN e.apis_xml_element = 'res' THEN 1
WHEN e.apis_xml_element = 'av' THEN 2
...
...
ELSE e.apis_xml_element
END) as api
FROM
mytable t
LATERAL VIEW EXPLODE(apis_xml) e AS apis_xml_element
WHERE
yyyy_mm_dd = "2020-05-05"
AND t.apis_xml IS NOT NULL
GROUP BY
1,2,3
)s
)s
I have narrowed the problem down further to the outer select, since the inner select works fine on its own, which leads me to believe the problem occurs specifically here:
collect_set(api) OVER (partition by yyyy_mm_dd, h_id, prov_id) prov_id_api
However, I am not sure how to fix it. In the innermost select, apis_xml is an array&lt;string&gt; which held strings such as 'res' and 'av' up to a certain point in time; after that, integers were used instead. Hence the CASE statement to align these.
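In case the mixed branch types are the trigger, a type-aligned variant of the CASE is one thing I considered (just a sketch; the full mapping is elided as above, and the branch values are assumed):

```sql
-- Sketch: return STRING from every branch so collect_set builds a
-- homogeneous array<string> instead of mixing INT and STRING.
collect_set(
    CASE
        WHEN e.apis_xml_element = 'res' THEN '1'
        WHEN e.apis_xml_element = 'av'  THEN '2'
        -- ... remaining mappings elided, as above ...
        ELSE e.apis_xml_element  -- already a string in apis_xml
    END
) AS api
```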
Strangely, if I run this via Spark, i.e. spark.sql(above_query), it works. However, running it through beeline as HQL, the job gets killed.
【Discussion】:
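For context, the shape of workaround I was considering is to avoid collect_set(...) OVER (...) on an array column altogether: flatten the array, aggregate with a plain GROUP BY, and join the result back. A sketch only (untested on MR; `s` stands for the result of the inner query above):

```sql
-- Sketch: compute the per-(day, h_id, prov_id) set once via GROUP BY,
-- then join it back, instead of a windowed collect_set over an array.
SELECT
    s.yyyy_mm_dd,
    s.h_id,
    s.prov_id,
    g.prov_id_api
FROM s
JOIN (
    SELECT
        yyyy_mm_dd,
        h_id,
        prov_id,
        collect_set(api_element) AS prov_id_api  -- set of scalars, not arrays
    FROM s
    LATERAL VIEW EXPLODE(api) a AS api_element   -- flatten array -> rows
    GROUP BY yyyy_mm_dd, h_id, prov_id
) g
ON  g.yyyy_mm_dd = s.yyyy_mm_dd
AND g.h_id       = s.h_id
AND g.prov_id    = s.prov_id;
```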