【Question Title】: How to avoid a cross join in Hive?
【Posted】: 2019-04-10 14:42:22
【Question Description】:

I have two tables. One contains 1 million records, and the other contains 20 million records.

Table 1 values:
(1, 1) (2, 2) (3, 3) (4, 4) (5, 4) ……

Table 2 values:
(55, 11) (33, 22) (44, 66) (22, 11) (11, 33) ……

I need to multiply the values in table 1 by the values in table 2, rank the results, and take the top 5. The expected result is:

Value from table 1, top 5 for each value in table 1:
(1, 1), 1*44 + 1*66 = 110
(1, 1), 1*55 + 1*11 = 66
(1, 1), 1*33 + 1*22 = 55
(1, 1), 1*11 + 1*33 = 44
(1, 1), 1*22 + 1*11 = 33
……

I tried using a cross join in Hive, but it always fails because the tables are too large.

【Question Discussion】:

    Tags: sql hive query-optimization hiveql cross-join


    【Solution 1】:

    First select the top 5 rows from table 2, then cross join the result with the first table. This gives the same result as cross joining the two full tables and taking the top 5 afterwards, but in the first case the join processes far fewer rows. A cross join against a 5-row dataset will be converted to a map join and will run about as fast as a full scan of table1.

    See the demo below. The cross join is converted to a map join. Note the "Map Join Operator" in the plan and this warning: "Warning: Map Join MAPJOIN[19][bigTable=?] in task 'Map 1' is a cross product"

    hive> set hive.cbo.enable=true;
    hive> set hive.compute.query.using.stats=true;
    hive> set hive.execution.engine=tez;
    hive> set hive.auto.convert.join.noconditionaltask=false;
    hive> set hive.auto.convert.join=true;
    hive> set hive.vectorized.execution.enabled=true;
    hive> set hive.vectorized.execution.reduce.enabled=true;
    hive> set hive.vectorized.execution.mapjoin.native.enabled=true;
    hive> set hive.vectorized.execution.mapjoin.native.fast.hashtable.enabled=true;
    hive>
        > explain
        > with table1 as (
        > select stack(5,1,2,3,4,5) as id
        > ),
        > table2 as
        > (select t2.id
        >    from (select t2.id, dense_rank() over(order by id desc) rnk
        >            from (select stack(11,55,33,44,22,11,1,2,3,4,5,6) as id) t2
        >         )t2
        >   where t2.rnk<6
        > )
        > select t1.id, t1.id*t2.id
        >   from table1 t1
        >        cross join table2 t2;
    Warning: Map Join MAPJOIN[19][bigTable=?] in task 'Map 1' is a cross product
    OK
    Plan not optimized by CBO.
    
    Vertex dependency in root stage
    Map 1 <- Reducer 3 (BROADCAST_EDGE)
    Reducer 3 <- Map 2 (SIMPLE_EDGE)
    
    Stage-0
       Fetch Operator
          limit:-1
          Stage-1
             Map 1
             File Output Operator [FS_17]
                compressed:false
                Statistics:Num rows: 1 Data size: 26 Basic stats: COMPLETE Column stats: NONE
                table:{"serde:":"org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe","input format:":"org.apache.hadoop.mapred.TextInputFormat","output format:":"org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat"}
                Select Operator [SEL_16]
                   outputColumnNames:["_col0","_col1"]
                   Statistics:Num rows: 1 Data size: 26 Basic stats: COMPLETE Column stats: NONE
                   Map Join Operator [MAPJOIN_19]
                   |  condition map:[{"":"Inner Join 0 to 1"}]
                   |  HybridGraceHashJoin:true
                   |  keys:{}
                   |  outputColumnNames:["_col0","_col1"]
                   |  Statistics:Num rows: 1 Data size: 26 Basic stats: COMPLETE Column stats: NONE
                   |<-Reducer 3 [BROADCAST_EDGE]
                   |  Reduce Output Operator [RS_14]
                   |     sort order:
                   |     Statistics:Num rows: 1 Data size: 0 Basic stats: PARTIAL Column stats: COMPLETE
                   |     value expressions:_col0 (type: int)
                   |     Select Operator [SEL_9]
                   |        outputColumnNames:["_col0"]
                   |        Statistics:Num rows: 1 Data size: 0 Basic stats: PARTIAL Column stats: COMPLETE
                   |        Filter Operator [FIL_18]
                   |           predicate:(dense_rank_window_0 < 6) (type: boolean)
                   |           Statistics:Num rows: 1 Data size: 0 Basic stats: PARTIAL Column stats: COMPLETE
                   |           PTF Operator [PTF_8]
                   |              Function definitions:[{"Input definition":{"type:":"WINDOWING"}},{"partition by:":"0","name:":"windowingtablefunction","order by:":"_col0(DESC)"}]
                   |              Statistics:Num rows: 1 Data size: 0 Basic stats: PARTIAL Column stats: COMPLETE
                   |              Select Operator [SEL_7]
                   |              |  outputColumnNames:["_col0"]
                   |              |  Statistics:Num rows: 1 Data size: 0 Basic stats: PARTIAL Column stats: COMPLETE
                   |              |<-Map 2 [SIMPLE_EDGE]
                   |                 Reduce Output Operator [RS_6]
                   |                    key expressions:0 (type: int), col0 (type: int)
                   |                    Map-reduce partition columns:0 (type: int)
                   |                    sort order:+-
                   |                    Statistics:Num rows: 1 Data size: 48 Basic stats: COMPLETE Column stats: COMPLETE
                   |                    UDTF Operator [UDTF_5]
                   |                       function name:stack
                   |                       Statistics:Num rows: 1 Data size: 48 Basic stats: COMPLETE Column stats: COMPLETE
                   |                       Select Operator [SEL_4]
                   |                          outputColumnNames:["_col0","_col1","_col2","_col3","_col4","_col5","_col6","_col7","_col8","_col9","_col10","_col11"]
                   |                          Statistics:Num rows: 1 Data size: 48 Basic stats: COMPLETE Column stats: COMPLETE
                   |                          TableScan [TS_3]
                   |                             alias:_dummy_table
                   |                             Statistics:Num rows: 1 Data size: 1 Basic stats: COMPLETE Column stats: COMPLETE
                   |<-UDTF Operator [UDTF_2]
                         function name:stack
                         Statistics:Num rows: 1 Data size: 24 Basic stats: COMPLETE Column stats: COMPLETE
                         Select Operator [SEL_1]
                            outputColumnNames:["_col0","_col1","_col2","_col3","_col4","_col5"]
                            Statistics:Num rows: 1 Data size: 24 Basic stats: COMPLETE Column stats: COMPLETE
                            TableScan [TS_0]
                               alias:_dummy_table
                               Statistics:Num rows: 1 Data size: 1 Basic stats: COMPLETE Column stats: COMPLETE
    
    Time taken: 0.199 seconds, Fetched: 66 row(s)
    

    Just replace the stack() calls in my demo with your tables.
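
    Applied to real tables, the demo's pattern might look like the sketch below. The table names `table1`/`table2` and the column name `id` are assumptions taken from the demo; adjust them to your actual schema:

    ```sql
    -- Rank table2 on its own first, keep only the top 5 rows,
    -- then let Hive convert the cross join into a map join.
    with top5 as (
      select id
        from (select id,
                     dense_rank() over (order by id desc) rnk
                from table2) t
       where rnk < 6
    )
    select t1.id, t1.id * t5.id as product
      from table1 t1
           cross join top5 t5;
    ```

    The key point is that the ranking runs over table2 alone (20M rows) instead of over the 20-trillion-row cross product.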

    【Discussion】:

    • Thank you very much. I am sorry I did not describe my problem well: the values in table 2 are not sorted. I will update my question, but your answer does solve my original problem.
    • @vitoyan So table2 has two columns, right? Then rank by the sum of those columns: 1*44 + 1*66 = 1*(44+66) = 110. Use the same dense_rank() over (order by t2.col1+t2.col2 desc) rnk
    • @vitoyan If you are satisfied with my answer, please upvote or accept it.
    • Based on my problem description, you provided a perfect solution. I will upvote and accept your answer and post a new question; I hope you can provide a perfect solution there too. Thank you very much.
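
    Following the last comments, the two-column variant might be sketched as below. The column names `col1`/`col2`/`val` are assumptions; note the ranking-first trick stays valid here because, for a positive table-1 value v, v*(col1+col2) is ordered the same way as col1+col2:

    ```sql
    -- Rank table2 by the sum of its two columns, keep the top 5,
    -- then cross join the small result with table1.
    with top5 as (
      select col1, col2
        from (select col1, col2,
                     dense_rank() over (order by col1 + col2 desc) rnk
                from table2) t
       where rnk < 6
    )
    select t1.val, t1.val * (t5.col1 + t5.col2) as score
      from table1 t1
           cross join top5 t5;
    ```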