【Question Title】: Limit slows down my postgres query
【Posted】: 2014-12-11 11:54:11
【Question】:

Hi, I have a simple query on a single table that runs very fast, but I want to paginate my results, and the LIMIT slows the SELECT down incredibly. The table contains roughly 80 million rows. I am on Postgres 9.2.

Without the LIMIT it takes 330 ms and returns 2100 rows:

EXPLAIN SELECT * from interval where username='1228321f131084766f3b0c6e40bc5edc41d4677e' order by time desc

Sort  (cost=156599.71..156622.43 rows=45438 width=108)
  Sort Key: "time"
  ->  Bitmap Heap Scan on "interval"  (cost=1608.05..155896.71 rows=45438 width=108)
        Recheck Cond: ((username)::text = '1228321f131084766f3b0c6e40bc5edc41d4677e'::text)
        ->  Bitmap Index Scan on interval_username  (cost=0.00..1605.77 rows=45438 width=0)
              Index Cond: ((username)::text = '1228321f131084766f3b0c6e40bc5edc41d4677e'::text)

EXPLAIN ANALYZE SELECT * from interval where 
username='1228321f131084766f3b0c6e40bc5edc41d4677e' order by time desc

Sort  (cost=156599.71..156622.43 rows=45438 width=108) (actual time=1.734..1.887 rows=2131 loops=1)
  Sort Key: id
  Sort Method: quicksort  Memory: 396kB
  ->  Bitmap Heap Scan on "interval"  (cost=1608.05..155896.71 rows=45438 width=108) (actual time=0.425..0.934 rows=2131 loops=1)
        Recheck Cond: ((username)::text = '1228321f131084766f3b0c6e40bc5edc41d4677e'::text)
        ->  Bitmap Index Scan on interval_username  (cost=0.00..1605.77 rows=45438 width=0) (actual time=0.402..0.402 rows=2131 loops=1)
              Index Cond: ((username)::text = '1228321f131084766f3b0c6e40bc5edc41d4677e'::text)
Total runtime: 2.065 ms

With the LIMIT it takes several minutes (I never waited for it to finish):

EXPLAIN SELECT * from interval where username='1228321f131084766f3b0c6e40bc5edc41d4677e' order by time desc LIMIT 10

Limit  (cost=0.00..6693.99 rows=10 width=108)
  ->  Index Scan Backward using interval_time on "interval"  (cost=0.00..30416156.03 rows=45438 width=108)
        Filter: ((username)::text = '1228321f131084766f3b0c6e40bc5edc41d4677e'::text)

Table definition:

-- Table: "interval"

-- DROP TABLE "interval";

CREATE TABLE "interval"
(
  uuid character varying(255) NOT NULL,
  deleted boolean NOT NULL,
  id bigint NOT NULL,
  "interval" bigint NOT NULL,
  "time" timestamp without time zone,
  trackerversion character varying(255),
  username character varying(255),
  CONSTRAINT interval_pkey PRIMARY KEY (uuid),
  CONSTRAINT fk_272h71b2gfyov9fwnksyditdd FOREIGN KEY (username)
      REFERENCES appuser (panelistcode) MATCH SIMPLE
      ON UPDATE NO ACTION ON DELETE CASCADE,
  CONSTRAINT uk_hyi5iws50qif6jwky9xcch3of UNIQUE (id)
)
WITH (
  OIDS=FALSE
);
ALTER TABLE "interval"
  OWNER TO postgres;

-- Index: interval_time

-- DROP INDEX interval_time;

CREATE INDEX interval_time
  ON "interval"
  USING btree
  ("time");

-- Index: interval_username

-- DROP INDEX interval_username;

CREATE INDEX interval_username
  ON "interval"
  USING btree
  (username COLLATE pg_catalog."default");

-- Index: interval_uuid

-- DROP INDEX interval_uuid;

CREATE INDEX interval_uuid
  ON "interval"
  USING btree
  (uuid COLLATE pg_catalog."default");

More results:

SELECT n_distinct FROM pg_stats WHERE tablename='interval' AND attname='username';
n_distinct=1460

SELECT AVG(length) FROM (SELECT username, COUNT(*) AS length FROM interval GROUP BY username) as freq;
45786.022605591910

SELECT COUNT(*) FROM interval WHERE username='1228321f131084766f3b0c6e40bc5edc41d4677e';
2131

【Comments】:

  • Can you show us the EXPLAIN ANALYZE output?
  • What is the n_distinct value of that column? SELECT n_distinct FROM pg_stats WHERE tablename='interval' AND attname='username';
  • @kouber n_distinct=1460
  • @frank I have added the ANALYZE output for the query without the limit; the other one is still running
  • How does it compare to this: SELECT AVG(length) FROM (SELECT username, COUNT(*) AS length FROM interval GROUP BY username) as freq;

Tags: postgresql limit


【Solution 1】:

The planner expects 45438 rows for username '1228321f131084766f3b0c6e40bc5edc41d4677e', while in reality there are only 2131 of them, so it assumes it will find the 10 rows you want faster by scanning the interval_time index backwards.

Try increasing the statistics target on the username column and see whether the query plan changes:

ALTER TABLE interval ALTER COLUMN username SET STATISTICS 100;

ANALYZE interval;

You can experiment with different statistics values, up to 10000.
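After raising the target and re-running ANALYZE, you could check whether the sampled statistics now capture this username. This is a sketch using the same pg_stats view as the query in the comments; with a higher target, most_common_vals should list more distinct usernames, giving the planner a better row estimate:

-- Inspect the collected statistics for the username column
SELECT n_distinct, most_common_vals, most_common_freqs
FROM pg_stats
WHERE tablename = 'interval' AND attname = 'username';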

If you are still not happy with the plan, and you are sure you can do better than the planner and know what you are doing, then you can easily bypass any index by applying an operation to the indexed column that does not change its value.

For example, instead of ORDER BY time, you can use ORDER BY time + '0 seconds'::interval. That way any index on the time values stored in the table will be bypassed. For integer values you could multiply by 1, etc.
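Applied to the query from the question, that trick would look something like this (a sketch; the added expression hides the interval_time index from the planner, so it should fall back to the interval_username bitmap scan and sort only the 2131 matching rows):

-- The "+ '0 seconds'" expression prevents use of the index on "time"
SELECT *
FROM "interval"
WHERE username = '1228321f131084766f3b0c6e40bc5edc41d4677e'
ORDER BY "time" + '0 seconds'::interval DESC
LIMIT 10;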

【Discussion】:

  • Hi, I tried 100 and 1000 and nothing happened… but then 10000 did the trick! It changed the query plan. Do you have a link to good documentation for this setting?
  • Hm, but for a user with 10030 entries it falls back to the old query plan. Any idea why?
  • Is query performance better in that case, or is the planner still wrong?
  • It is still wrong… with the right plan I get results instantly; the wrong plan never finishes…
【Solution 2】:

The page http://thebuild.com/blog/2014/11/18/when-limit-attacks/ shows that I can force Postgres to do better by using a CTE:

WITH inner_query AS (SELECT * from interval where username='7823721a3eb9243be63c6c3a13dffee44753cda6')
SELECT * FROM inner_query order by time desc LIMIT 10;
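Since the question is about pagination, the same CTE fence can serve later pages too. This is a sketch assuming 10 rows per page (the OFFSET value here is hypothetical); note that OFFSET-based paging still sorts all of the user's rows inside the CTE, so very deep pages get progressively slower:

WITH inner_query AS (
  SELECT * FROM "interval"
  WHERE username = '7823721a3eb9243be63c6c3a13dffee44753cda6'
)
SELECT * FROM inner_query
ORDER BY "time" DESC
LIMIT 10 OFFSET 20;  -- hypothetical page 3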

【Discussion】:

  • Either way works, or by simply multiplying the value involved. I edited the answer accordingly and also included a link to the statistics documentation.