【Question Title】: Performance is not increased even after increasing the work_mem size
【Posted】: 2019-11-06 17:10:37
【Description】:

I have a query whose average execution time is 170 seconds. I went through the PostgreSQL documentation, which says that increasing work_mem will improve performance. I increased work_mem to 1000 MB, but performance did not improve.

Note: I have indexed all the columns that appear in the query.

Below I have pasted the number of records in the table, the query, its result, and the query execution plans.

  • Number of records in the table:
event_logs=> select count(*) from events;
  count   
----------
 18706734
(1 row)
  • Query:
select raw->'request_payload'->'source'->0 as file, 
       count(raw->'request_payload'->>'status') as count, 
       raw->'request_payload'->>'status' as status 
from events 
where client = 'NTT' 
  and to_char(datetime, 'YYYY-MM-DD') = '2019-10-31' 
  and event_name = 'wbs_indexing' 
group by raw->'request_payload'->'source'->0, 
         raw->'request_payload'->>'status';
  • Result:
 file                   | count  | status  
------------------------+--------+---------
 "xyz.csv"              |  91878 | failure
 "abc.csv"              |  91816 | failure
 "efg.csv"              | 398196 | failure
(3 rows)

  • Query execution plan with the default work_mem (4 MB):
event_logs=> SHOW work_mem;
 work_mem 
----------
 4MB
(1 row)

event_logs=> explain analyze select raw->'request_payload'->'source'->0 as file, count(raw->'request_payload'->>'status') as count,  raw->'request_payload'->>'status' as status from events where to_char(datetime, 'YYYY-MM-DD') = '2019-10-31' and client = 'NTT'  and event_name = 'wbs_indexing' group by raw->'request_payload'->'source'->0, raw->'request_payload'->>'status';
                                                                             QUERY PLAN                                                                              
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Finalize GroupAggregate  (cost=3256017.54..3267087.56 rows=78474 width=72) (actual time=172547.598..172965.581 rows=3 loops=1)
   Group Key: ((((raw -> 'request_payload'::text) -> 'source'::text) -> 0)), (((raw -> 'request_payload'::text) ->> 'status'::text))
   ->  Gather Merge  (cost=3256017.54..3264829.34 rows=65674 width=72) (actual time=172295.204..172965.630 rows=9 loops=1)
         Workers Planned: 2
         Workers Launched: 2
         ->  Partial GroupAggregate  (cost=3255017.52..3256248.91 rows=32837 width=72) (actual time=172258.342..172737.534 rows=3 loops=3)
               Group Key: ((((raw -> 'request_payload'::text) -> 'source'::text) -> 0)), (((raw -> 'request_payload'::text) ->> 'status'::text))
               ->  Sort  (cost=3255017.52..3255099.61 rows=32837 width=533) (actual time=171794.584..172639.670 rows=193963 loops=3)
                     Sort Key: ((((raw -> 'request_payload'::text) -> 'source'::text) -> 0)), (((raw -> 'request_payload'::text) ->> 'status'::text))
                     Sort Method: external merge  Disk: 131856kB
                     ->  Parallel Seq Scan on events  (cost=0.00..3244696.75 rows=32837 width=533) (actual time=98846.155..169311.063 rows=193963 loops=3)
                           Filter: ((client = 'NTT'::text) AND (event_name = 'wbs_indexing'::text) AND (to_char(datetime, 'YYYY-MM-DD'::text) = '2019-10-31'::text))
                           Rows Removed by Filter: 6041677
 Planning time: 0.953 ms
 Execution time: 172983.273 ms
(15 rows)

  • Query execution plan with the increased work_mem (1000 MB):
event_logs=> SHOW work_mem;
 work_mem 
----------
 1000MB
(1 row)

event_logs=> explain analyze select raw->'request_payload'->'source'->0 as file, count(raw->'request_payload'->>'status') as count,  raw->'request_payload'->>'status' as status from events where to_char(datetime, 'YYYY-MM-DD') = '2019-10-31' and client = 'NTT'  and event_name = 'wbs_indexing' group by raw->'request_payload'->'source'->0, raw->'request_payload'->>'status';
                                                                            QUERY PLAN                                                                              
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Finalize GroupAggregate  (cost=3248160.04..3259230.06 rows=78474 width=72) (actual time=167979.419..168189.228 rows=3 loops=1)
   Group Key: ((((raw -> 'request_payload'::text) -> 'source'::text) -> 0)), (((raw -> 'request_payload'::text) ->> 'status'::text))
   ->  Gather Merge  (cost=3248160.04..3256971.84 rows=65674 width=72) (actual time=167949.951..168189.282 rows=9 loops=1)
         Workers Planned: 2
         Workers Launched: 2
         ->  Partial GroupAggregate  (cost=3247160.02..3248391.41 rows=32837 width=72) (actual time=167945.607..168083.707 rows=3 loops=3)
               Group Key: ((((raw -> 'request_payload'::text) -> 'source'::text) -> 0)), (((raw -> 'request_payload'::text) ->> 'status'::text))
               ->  Sort  (cost=3247160.02..3247242.11 rows=32837 width=533) (actual time=167917.891..167975.549 rows=193963 loops=3)
                     Sort Key: ((((raw -> 'request_payload'::text) -> 'source'::text) -> 0)), (((raw -> 'request_payload'::text) ->> 'status'::text))
                     Sort Method: quicksort  Memory: 191822kB
                     ->  Parallel Seq Scan on events  (cost=0.00..3244696.75 rows=32837 width=533) (actual time=98849.936..167570.669 rows=193963 loops=3)
                           Filter: ((client = 'NTT'::text) AND (event_name = 'wbs_indexing'::text) AND (to_char(datetime, 'YYYY-MM-DD'::text) = '2019-10-31'::text))
                           Rows Removed by Filter: 6041677
 Planning time: 0.238 ms
 Execution time: 168199.046 ms
(15 rows)

  • Can anyone help me improve the performance of this query?

【Discussion】:

  • The increase in work_mem got you out of the disk sort, but the seq scan still takes most of the time. Have you created an index on the events table over the columns (client, event_name, to_char(datetime, 'YYYY-MM-DD'::text))?
  • Yes, I have indexed all the columns used in the query.

Tags: postgresql query-performance


【Solution 1】:

Increasing work_mem did make the sort about 8 times faster: (172639.670 - 169311.063) / (167975.549 - 167570.669). But since the sort only accounted for a small fraction of the overall execution time, making it even 1000 times faster would barely improve things. It is the seq scan that is taking the time.

Much of the time in that seq scan is probably spent on IO. You can check by running EXPLAIN (ANALYZE, BUFFERS) with track_io_timing turned on.
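
A minimal sketch of how to do that (setting track_io_timing per session requires superuser rights; it can also be enabled globally in postgresql.conf):

-- Per-session change requires superuser; alternatively set
-- track_io_timing = on in postgresql.conf and reload.
SET track_io_timing = on;

-- BUFFERS adds shared-buffer hit/read counts per plan node; with
-- track_io_timing on, each node also reports time spent on file IO.
EXPLAIN (ANALYZE, BUFFERS)
select raw->'request_payload'->'source'->0 as file,
       count(raw->'request_payload'->>'status') as count,
       raw->'request_payload'->>'status' as status
from events
where client = 'NTT'
  and to_char(datetime, 'YYYY-MM-DD') = '2019-10-31'
  and event_name = 'wbs_indexing'
group by raw->'request_payload'->'source'->0,
         raw->'request_payload'->>'status';

If the Parallel Seq Scan node reports most of its time as read time, the bottleneck is the disk rather than the CPU.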

Also, parallelizing a seq scan is often not very helpful, because the IO system is usually able to deliver its full capacity to a single reader, thanks to the magic of read-ahead. Sometimes parallel readers can even step on each other's toes, making the whole thing worse. You can disable parallelization with set max_parallel_workers_per_gather TO 0;. That might make things faster; if it doesn't, it will at least make the EXPLAIN plans easier to understand.

You are fetching over 3% of the table: 193963 / (193963 + 6041677). Indexes may not be very helpful when you fetch that much of it. If one is to help, you would want a combined index, not individual single-column ones. So you would want an index on (client, event_name, date(datetime)). You would then also need to change the query to use date(datetime) rather than to_char(datetime, 'YYYY-MM-DD'). You need this change because to_char is not immutable, and therefore cannot be used in an index.
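
A sketch of that combined index and the matching query change (the index name is made up here, and this assumes datetime is a timestamp without time zone column; date() on timestamp with time zone is not immutable either and would also be rejected):

-- Combined expression index covering all three filter conditions.
CREATE INDEX events_client_event_date_idx
    ON events (client, event_name, date(datetime));

-- The WHERE clause must use the same expression for the index to match:
select raw->'request_payload'->'source'->0 as file,
       count(raw->'request_payload'->>'status') as count,
       raw->'request_payload'->>'status' as status
from events
where client = 'NTT'
  and date(datetime) = '2019-10-31'
  and event_name = 'wbs_indexing'
group by raw->'request_payload'->'source'->0,
         raw->'request_payload'->>'status';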

【Discussion】:

  • Will try this and post back the results. Thanks!
【Solution 2】:

The problem was solved by modifying the query. The issue was with the to_char approach: it converted the date value of every record in the table into a string in order to compare it against the given string date. So I updated the query to fetch records between the given date and the next day's date instead. Now I get the response within 500 ms.
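
The answer does not show the exact rewrite, but a half-open range filter along these lines leaves the datetime column bare, so an ordinary B-tree index that includes datetime can be used (the exact index layout is an assumption):

-- The range [2019-10-31, 2019-11-01) covers the whole day without
-- applying any function to the datetime column.
select raw->'request_payload'->'source'->0 as file,
       count(raw->'request_payload'->>'status') as count,
       raw->'request_payload'->>'status' as status
from events
where client = 'NTT'
  and datetime >= '2019-10-31'
  and datetime <  '2019-11-01'
  and event_name = 'wbs_indexing'
group by raw->'request_payload'->'source'->0,
         raw->'request_payload'->>'status';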

【Discussion】:
