【Question Title】: SQL query Postgres count nested loop bad performance
【Posted】: 2017-11-06 03:29:09
【Question】:

Hi, I couldn't find a proper answer anywhere, so I figured I'd write a post.

Could any database expert help me improve the following query (see the explain plan)? It is slowing down our application in production considerably.

  • Bids are related to realties
  • Realties belong to agencies
  • I am using Postgres
  • One table stores each user's views: HIT(user_id, bid_id, date)

The goal is to retrieve the number of hits on each bid for a given agency.

Here is the query:

select hit.bid_id , count(hit.id)
from hit
  cross join bid
  cross join realty
where hit.bid_id=bid.id
  and realty.id=bid.realty_id
  and realty.agency_id = 91
group by hit.bid_id
order by count(hit.id) desc

Here is the explain plan:

"Sort  (cost=167474.69..167493.30 rows=7445 width=16)"
"  Sort Key: (count(hit.id)) DESC"
"  ->  HashAggregate  (cost=166921.45..166995.90 rows=7445 width=16)"
"        Group Key: hit.bid_id"
"        ->  Nested Loop  (cost=694.81..162541.34 rows=876021 width=16)"
"              ->  Hash Join  (cost=694.38..7217.46 rows=1986 width=8)"
"                    Hash Cond: (bid.realty_id = realty.id)"
"                    ->  Seq Scan on bid  (cost=0.00..6398.98 rows=27798 width=16)"
"                    ->  Hash  (cost=669.92..669.92 rows=1957 width=8)"
"                          ->  Bitmap Heap Scan on realty  (cost=63.45..669.92 rows=1957 width=8)"
"                                Recheck Cond: (agency_id = 91)"
"                                ->  Bitmap Index Scan on agency_idx  (cost=0.00..62.97 rows=1957 width=0)"
"                                      Index Cond: (agency_id = 91)"
"              ->  Index Scan using hit_bid_id_idx on hit  (cost=0.43..61.74 rows=1647 width=16)"
"                    Index Cond: (bid_id = bid.id)"

I tried using EXISTS and SELECT ... IN, but they were even worse.
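For reference, the EXISTS variant looked roughly like this (a sketch; the IN variant was analogous):

```sql
-- Restrict hits to bids belonging to agency 91 via EXISTS,
-- then aggregate; this produced an even worse plan.
select hit.bid_id, count(hit.id)
from hit
where exists (
    select 1
    from bid
      join realty on realty.id = bid.realty_id
    where bid.id = hit.bid_id
      and realty.agency_id = 91)
group by hit.bid_id
order by count(hit.id) desc;
```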

[EDIT] I am using QueryDSL (a Java API), which generates the cross joins, but even with inner joins the execution takes too long. Here is the detailed explain plan:

"Sort  (cost=169479.60..169498.99 rows=7756 width=16) (actual time=15350.858..15351.819 rows=821 loops=1)"
"  Output: hit.bid_id, (count(hit.id))"
"  Sort Key: (count(hit.id)) DESC"
"  Sort Method: quicksort  Memory: 63kB"
"  ->  HashAggregate  (cost=168900.96..168978.52 rows=7756 width=16) (actual time=15348.418..15349.550 rows=821 loops=1)"
"        Output: hit.bid_id, count(hit.id)"
"        Group Key: hit.bid_id"
"        ->  Nested Loop  (cost=699.70..164385.85 rows=903022 width=16) (actual time=17.777..14364.165 rows=582723 loops=1)"
"              Output: hit.bid_id, hit.id"
"              ->  Hash Join  (cost=699.26..7225.23 rows=2013 width=8) (actual time=8.427..146.966 rows=1977 loops=1)"
"                    Output: bid.id"
"                    Hash Cond: (bid.realty_id = realty.id)"
"                    ->  Seq Scan on public.bid  (cost=0.00..6400.88 rows=27988 width=16) (actual time=0.018..84.389 rows=27994 loops=1)"
"                          Output: bid.id, bid.created_by, bid.created_date, bid.last_modified_by, bid.last_modified_date, bid.agency_costs, bid.availability_begin_date, bid.availability_end_date, bid.bail, bid.description, bid.imported_bid, bid.is_availabl (...)"
"                    ->  Hash  (cost=674.46..674.46 rows=1984 width=8) (actual time=8.186..8.186 rows=1977 loops=1)"
"                          Output: realty.id"
"                          Buckets: 2048  Batches: 1  Memory Usage: 94kB"
"                          ->  Bitmap Heap Scan on public.realty  (cost=67.66..674.46 rows=1984 width=8) (actual time=0.533..4.967 rows=1977 loops=1)"
"                                Output: realty.id"
"                                Recheck Cond: (realty.agency_id = 91)"
"                                Heap Blocks: exact=208"
"                                ->  Bitmap Index Scan on agency_idx  (cost=0.00..67.17 rows=1984 width=0) (actual time=0.491..0.491 rows=1978 loops=1)"
"                                      Index Cond: (realty.agency_id = 91)"
"              ->  Index Scan using hit_bid_id_idx on public.hit  (cost=0.43..61.88 rows=1619 width=16) (actual time=2.198..6.376 rows=295 loops=1977)"
"                    Output: hit.id, hit.created_by, hit.created_date, hit.last_modified_by, hit.last_modified_date, hit.date, hit.ip, hit.user_id, hit.bid_id, hit.display_phone"
"                    Index Cond: (hit.bid_id = bid.id)"
"Planning time: 3.037 ms"
"Execution time: 15353.187 ms"

Table DDL:

CREATE TABLE public.bid
(
  id bigint NOT NULL,
  realty_id bigint,
  CONSTRAINT bid_pkey PRIMARY KEY (id),
  CONSTRAINT bid_fkey_realty FOREIGN KEY (realty_id)
      REFERENCES public.realty (id) MATCH SIMPLE
      ON UPDATE NO ACTION ON DELETE NO ACTION
)

CREATE TABLE public.hit
(
  id bigint NOT NULL,
  bid_id bigint,
  CONSTRAINT hit_pkey PRIMARY KEY (id),
  CONSTRAINT hit_fkey_bid FOREIGN KEY (bid_id)
      REFERENCES public.bid (id) MATCH SIMPLE
      ON UPDATE NO ACTION ON DELETE NO ACTION
)

CREATE TABLE public.realty
(
  id bigint NOT NULL,
  CONSTRAINT realty_pkey PRIMARY KEY (id)
)
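For completeness, the plans above also reference two indexes that are not part of the DDL; their definitions are sketched here from the index names in the plan (exact definitions assumed):

```sql
-- Index used by the "Index Scan using hit_bid_id_idx" node
CREATE INDEX hit_bid_id_idx ON public.hit (bid_id);
-- Index used by the "Bitmap Index Scan on agency_idx" node
CREATE INDEX agency_idx ON public.realty (agency_id);
```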

【Comments】:

  • Please add the DDL of each table.
  • Edit your question and add the execution plan generated with explain (analyze, verbose, buffers). Those cross joins make no sense, though. Why don't you simply use a plain join, since that is obviously what you are trying to do?
  • Thanks, I have edited the post with the details. The cross joins are generated by QueryDSL (a Java API). I tried inner joins, but the execution plan is very similar.

Tags: sql postgresql query-performance


【Solution 1】:

You are using cross joins unnecessarily. Apart from that, your explain plan shows a seq scan on the bid table; the plan for the following query may be different:

select hit.bid_id , count(hit.id)
from hit
  inner join bid ON hit.bid_id=bid.id
  inner join realty ON realty.id=bid.realty_id
where realty.agency_id = 91
group by hit.bid_id
order by count(hit.id) desc

Although it should not matter, changing the table order might make a difference:

select hit.bid_id , count(hit.id)
from realty
  inner join bid ON realty.id=bid.realty_id
  inner join hit ON hit.bid_id=bid.id
where realty.agency_id = 91
group by hit.bid_id
order by count(hit.id) desc

May I assume the database statistics are up to date ("fresh")?
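If in doubt, the statistics can be refreshed by hand with the standard PostgreSQL command (table names taken from your post):

```sql
-- Recompute planner statistics for the three tables involved
ANALYZE public.bid;
ANALYZE public.hit;
ANALYZE public.realty;
```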

【Discussion】:

  • Thanks, I have edited the post. Inner joins make no difference, and yes, the statistics are fresh.
【Solution 2】:

If you provided more information (index state, table definitions, an explain plan with more detail options), I could suggest a more interesting solution. The good solutions will come from others; mine is a blunt workaround, but sometimes it helps.

This example is Java-based, but the idea applies to other languages as well.

This is a temporary workaround, but it may give you an immediate improvement.

Try forcing an execution plan without nested loops:

// Disable nested-loop plans on this connection, run the query, then re-enable.
try (Statement s = connect.createStatement()) {
    s.execute("SET enable_nestloop TO false");
}
PreparedStatement stmt = connect.prepareStatement(
    "select hit.bid_id, count(hit.id) " +
    "from hit " +
    "cross join bid " +
    "cross join realty " +
    "where hit.bid_id = bid.id " +
    "  and realty.id = bid.realty_id " +
    "  and realty.agency_id = 91 " +
    "group by hit.bid_id " +
    "order by count(hit.id) desc");
// ... execute stmt and read the results here ...
try (Statement s = connect.createStatement()) {
    s.execute("SET enable_nestloop TO true");
}
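If you would rather keep everything on the SQL side, the same effect can be scoped to a single transaction with SET LOCAL, which reverts automatically at commit or rollback (a sketch using the inner-join form of the query):

```sql
BEGIN;
-- Only affects this transaction; no need to reset afterwards
SET LOCAL enable_nestloop = off;
select hit.bid_id, count(hit.id)
from hit
  join bid on hit.bid_id = bid.id
  join realty on realty.id = bid.realty_id
where realty.agency_id = 91
group by hit.bid_id
order by count(hit.id) desc;
COMMIT;
```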

【Discussion】:
