Is there any way to speed up this Postgres bitmap heap scan?
Database newbie here. This is my query, I'm using Postgres 9.3.5:
=# explain analyse SELECT SUM(actual_cost) as cost, SUM(total_items)
as num_items, processing_date FROM frontend_items
WHERE chemical_id='0501013B0' GROUP BY processing_date;
And this is the query plan:
HashAggregate  (cost=1648624.91..1648624.92 rows=1 width=16) (actual time=12591.844..12591.848 rows=17 loops=1)
  ->  Bitmap Heap Scan on frontend_items  (cost=14520.24..1643821.35 rows=640474 width=16) (actual time=254.841..12317.746 rows=724242 loops=1)
        Recheck Cond: ((chemical_id)::text = '0501013B0'::text)
        ->  Bitmap Index Scan on frontend_items_chemical_id_varchar_pattern_ops_idx  (cost=0.00..14360.12 rows=640474 width=0) (actual time=209.538..209.538 rows=724242 loops=1)
              Index Cond: ((chemical_id)::text = '0501013B0'::text)
Total runtime: 12592.499 ms
As you can see, it's the Bitmap Heap Scan that takes up most of the time. Is there any way to speed this up?
I can create more indexes if needed: my data is almost read-only (it updates once a month).
I'm guessing there isn't much I can do, given that I want multiple attributes, except pay for enough RAM to hold the entire database in memory, but suggestions would be very much appreciated.
It's possible I could just look up one of these attributes at a time, if that would speed things up.
NB: I'm running this on a MacBook with 16GB of RAM and an SSD. I have set shared_buffers to 4GB and work_mem to 40MB. I will eventually be using a server with 32GB of RAM and an SSD.
UPDATE: The table schema is as follows:
Column | Type | Modifiers
-------------------+-------------------------+--------------------------------------------------------------------
id | integer | not null default nextval('frontend_items_id_seq'::regclass)
presentation_code | character varying(15) | not null
presentation_name | character varying(1000) | not null
total_items | integer | not null
net_cost | double precision | not null
actual_cost | double precision | not null
quantity | double precision | not null
processing_date | date | not null
price_per_unit | double precision | not null
chemical_id | character varying(9) | not null
pct_id | character varying(3) | not null
practice_id | character varying(6) | not null
sha_id | character varying(3) | not null
Indexes:
"frontend_items_pkey" PRIMARY KEY, btree (id)
"frontend_items_45fff4c7" btree (sha_id)
"frontend_items_4e2e609b" btree (pct_id)
"frontend_items_528f368c" btree (processing_date)
"frontend_items_6ea07fe3" btree (practice_id)
"frontend_items_a69d813a" btree (chemical_id)
"frontend_items_b9b2c7ab" btree (presentation_code)
"frontend_items_chemical_id_varchar_pattern_ops_idx" btree (chemical_id varchar_pattern_ops)
"frontend_items_pct_code_id_488a8bbfb2bddc6d_like" btree (pct_id varchar_pattern_ops)
"frontend_items_practice_id_bbbafffdb2c2bf1_like" btree (practice_id varchar_pattern_ops)
"frontend_items_presentation_code_69403ee04fda6522_like" btree (presentation_code varchar_pattern_ops)
"frontend_items_presentation_code_varchar_pattern_ops_idx" btree (presentation_code varchar_pattern_ops)
Foreign-key constraints:
"front_chemical_id_4619f68f65c49a8_fk_frontend_chemical_bnf_code" FOREIGN KEY (chemical_id) REFERENCES frontend_chemical(bnf_code) DEFERRABLE INITIALLY DEFERRED
"frontend__practice_id_bbbafffdb2c2bf1_fk_frontend_practice_code" FOREIGN KEY (practice_id) REFERENCES frontend_practice(code) DEFERRABLE INITIALLY DEFERRED
"frontend_items_pct_id_30c06df242c3d1ba_fk_frontend_pct_code" FOREIGN KEY (pct_id) REFERENCES frontend_pct(code) DEFERRABLE INITIALLY DEFERRED
"frontend_items_sha_id_4fa0ca3c3b9b67f_fk_frontend_sha_code" FOREIGN KEY (sha_id) REFERENCES frontend_sha(code) DEFERRABLE INITIALLY DEFERRED
And here's the output of a verbose explain:
# explain (verbose, buffers, analyse) SELECT SUM(actual_cost) as cost, SUM(total_items) as num_items, processing_date FROM frontend_items WHERE chemical_id='0501012G0' GROUP BY processing_date;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
HashAggregate  (cost=1415349.73..1415349.74 rows=1 width=16) (actual time=3048.551..3048.556 rows=17 loops=1)
  Output: sum(actual_cost), sum(total_items), processing_date
  Buffers: shared hit=141958 read=12725
  ->  Bitmap Heap Scan on public.frontend_items  (cost=11797.55..1411446.84 rows=520385 width=16) (actual time=213.889..2834.911 rows=524644 loops=1)
        Output: id, presentation_code, presentation_name, total_items, net_cost, actual_cost, quantity, processing_date, price_per_unit, chemical_id, pct_id, practice_id, sha_id
        Recheck Cond: ((frontend_items.chemical_id)::text = '0501012G0'::text)
        Buffers: shared hit=141958 read=12725
        ->  Bitmap Index Scan on frontend_items_chemical_id_varchar_pattern_ops_idx  (cost=0.00..11667.46 rows=520385 width=0) (actual time=172.574..172.574 rows=524644 loops=1)
              Index Cond: ((frontend_items.chemical_id)::text = '0501012G0'::text)
              Buffers: shared hit=2 read=2012
Total runtime: 3049.177 ms
Solution 1:[1]
You have 724242 rows and the query takes 12592.499 ms. That is 0.017387 ms per row, i.e. 57514 rows per second. What are you complaining about? I think your query is plenty fast. Ordinary HDDs only manage rates of 65-200 rows per second with an index scan, although a bitmap index / heap scan is faster. I think you'll find that PostgreSQL is using the best possible query plan for your situation.
If you execute the query again, does it get faster? The caches would be hot then, so a repeated execution might be faster. If it doesn't get faster, then it's unlikely that more memory would help. The data page size of PostgreSQL is 8 KB, so you're accessing at most 724242*8 KB = 5.5 GB of data, i.e. the data should fit into your RAM.
Edit: the second query in the edited version of the question shows a rate of about 172000 rows per second, so it is possible that such queries become faster once the data is cached in RAM. I would choose the approach of fitting the entire dataset in RAM. RAM is cheap, but developer time is expensive.
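To judge whether the whole dataset really fits in RAM, it may help to check the on-disk size of the table and its indexes first; a minimal sketch:
-- Reports the size of the heap alone and of the heap plus all its indexes.
SELECT pg_size_pretty(pg_relation_size('frontend_items'))       AS table_size,
       pg_size_pretty(pg_total_relation_size('frontend_items')) AS table_plus_indexes;
If the total comfortably fits under the planned 32GB, a repeated run of the query should mostly hit shared_buffers or the OS page cache.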
Solution 2:[2]
Since the query selects a specific set of rows and only a few columns, adding a multi-column index might be beneficial in this case. It might help nudge the planner into using an index scan (or an index-only scan) instead of a bitmap scan.
CREATE INDEX ON frontend_items(chemical_id, processing_date, total_items, actual_cost);
Do note that this is only a possibility, but it is worth trying when optimizing the query.
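If you do create that index, one way to verify whether the planner actually uses it is to re-run the query from the question under EXPLAIN and look for an Index Scan (or Index Only Scan) node; a sketch:
EXPLAIN (ANALYSE, BUFFERS)
SELECT SUM(actual_cost) AS cost, SUM(total_items) AS num_items, processing_date
FROM frontend_items
WHERE chemical_id = '0501013B0'
GROUP BY processing_date;

-- The table only changes once a month, so vacuuming after each load keeps the
-- visibility map current, which is what lets an index-only scan skip the heap.
VACUUM ANALYSE frontend_items;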
Solution 3:[3]
When you want values for multiple items, the process can be made much quicker (it helped me cut the computation of denormalised values from 12 hours to 12 minutes).
You can query the values for all items (the chemical_id field) in a single query. It surely will not be faster than querying one value, but in many cases it might not be much (asymptotically) slower either, since the table is scanned only once for all values.
The query would look like this:
SELECT chemical_id, SUM(actual_cost) as cost, SUM(total_items)
as num_items, processing_date FROM frontend_items
GROUP BY chemical_id, processing_date;
When I tested this on a table with ~8 million records, the single-value query ran in ~0.5 seconds and the aggregate query for all items in ~2 seconds.
You can also save these results to denormalised fields and show users those (if they don't mind slightly outdated values).
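One way to store such precomputed values on PostgreSQL 9.3 is a materialized view that is refreshed after each monthly load; a minimal sketch, where the view name chemical_date_totals is made up for illustration:
-- Precompute the per-chemical, per-date totals once.
CREATE MATERIALIZED VIEW chemical_date_totals AS
SELECT chemical_id,
       processing_date,
       SUM(actual_cost) AS cost,
       SUM(total_items) AS num_items
FROM frontend_items
GROUP BY chemical_id, processing_date;

-- Index the view so lookups for a single chemical_id stay cheap.
CREATE INDEX ON chemical_date_totals (chemical_id, processing_date);

-- Re-run after each monthly data load.
REFRESH MATERIALIZED VIEW chemical_date_totals;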
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
Solution | Source
---|---
Solution 1 | juhist
Solution 2 | Syakur Rahman
Solution 3 | Petr Dlouhý