Strange query plan jumps on a partitioned table on old PostgreSQL 9.3

Our setup: we have a trigger-based partitioned table. However, the BEFORE INSERT trigger returns NEW, because Hibernate needs a row count of 1, so every insert also lands in the parent table. An AFTER INSERT trigger then deletes that row from the parent again. So the parent table has 0 live records, but it is a very dirty table (lots of dead tuples).
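
A sketch of this kind of setup, with made-up names and a single child partition (the real routing logic is of course more involved):

CREATE TABLE parent_table (
    id       numeric PRIMARY KEY,
    idugylet numeric NOT NULL,
    payload  text
);

CREATE TABLE parent_table_1 (
    CHECK (idugylet >= 0 AND idugylet < 10000000)
) INHERITS (parent_table);

-- BEFORE INSERT: route the row into a child, but RETURN NEW so Hibernate sees
-- "1 row affected". Because of RETURN NEW the row is also inserted into the
-- parent itself.
CREATE OR REPLACE FUNCTION parent_table_insert() RETURNS trigger AS $$
BEGIN
    INSERT INTO parent_table_1 VALUES (NEW.*);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_parent_insert
    BEFORE INSERT ON parent_table
    FOR EACH ROW EXECUTE PROCEDURE parent_table_insert();

-- AFTER INSERT: delete the copy that landed in the parent, so the parent keeps
-- 0 live rows but accumulates dead tuples.
CREATE OR REPLACE FUNCTION parent_table_cleanup() RETURNS trigger AS $$
BEGIN
    DELETE FROM ONLY parent_table WHERE id = NEW.id;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_parent_cleanup
    AFTER INSERT ON parent_table
    FOR EACH ROW EXECUTE PROCEDURE parent_table_cleanup();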

Config: constraint_exclusion = on

Recently a strange behavior has appeared.

The select:

SELECT * 
FROM partitioned_table 
WHERE partition_column=x;

Sometimes, though not immediately after an autovacuum, the good plan changes to a bad one:

bad plan:

Append  (cost=247.80..29934.72 rows=13299 width=2073) (actual time=0.064..1.848 rows=755 loops=1)
  Buffers: shared hit=517
  ->  Bitmap Heap Scan on public.ftr_ugyletadat ugyletadat0_  (cost=247.80..26405.17 rows=11790 width=2246) (actual time=0.022..0.022 rows=0 loops=1)
        Output: ugyletadat0_.id, ugyletadat0_.modositva, ugyletadat0_.idugylet, ugyletadat0_.iddokumentum, ugyletadat0_.kulcs, ugyletadat0_.osztaly, ugyletadat0_.doublevalue, ugyletadat0_.longvalue, ugyletadat0_.datevalue, ugyletadat0_.stringvalue, ugyletadat0_.stringvalue2, ugyletadat0_.megjegyzesdomain
        Recheck Cond: (ugyletadat0_.idugylet = 16179457::numeric)
        Buffers: shared hit=3
        ->  Bitmap Index Scan on ix_ftr_ugyletadat_idugylet  (cost=0.00..244.86 rows=11790 width=0) (actual time=0.019..0.019 rows=0 loops=1)
              Index Cond: (ugyletadat0_.idugylet = 16179457::numeric)
              Buffers: shared hit=3
  ->  Index Scan using ix_ugyletadat_kulcs_ugylet_32 on public.ftr_ugyletadat_32 ugyletadat0__1  (cost=0.70..3529.55 rows=1509 width=723) (actual time=0.039..0.990 rows=755 loops=1)
        Output: ugyletadat0__1.id, ugyletadat0__1.modositva, ugyletadat0__1.idugylet, ugyletadat0__1.iddokumentum, ugyletadat0__1.kulcs, ugyletadat0__1.osztaly, ugyletadat0__1.doublevalue, ugyletadat0__1.longvalue, ugyletadat0__1.datevalue, ugyletadat0__1.stringvalue, ugyletadat0__1.stringvalue2, ugyletadat0__1.megjegyzesdomain
        Index Cond: (ugyletadat0__1.idugylet = 16179457::numeric)
        Buffers: shared hit=514
Total runtime: 2.353 ms

good plan:

Append  (cost=0.38..3537.94 rows=1510 width=724) (actual time=0.043..1.819 rows=755 loops=1)
  Buffers: shared hit=517
  ->  Index Scan using ix_ftr_ugyletadat_idugylet on public.ftr_ugyletadat ugyletadat0_  (cost=0.38..8.39 rows=1 width=2246) (actual time=0.015..0.015 rows=0 loops=1)
        Output: ugyletadat0_.id, ugyletadat0_.modositva, ugyletadat0_.idugylet, ugyletadat0_.iddokumentum, ugyletadat0_.kulcs, ugyletadat0_.osztaly, ugyletadat0_.doublevalue, ugyletadat0_.longvalue, ugyletadat0_.datevalue, ugyletadat0_.stringvalue, ugyletadat0_.stringvalue2, ugyletadat0_.megjegyzesdomain
        Index Cond: (ugyletadat0_.idugylet = 16179457::numeric)
        Buffers: shared hit=3
  ->  Index Scan using ix_ugyletadat_kulcs_ugylet_32 on public.ftr_ugyletadat_32 ugyletadat0__1  (cost=0.70..3529.55 rows=1509 width=723) (actual time=0.025..0.957 rows=755 loops=1)
        Output: ugyletadat0__1.id, ugyletadat0__1.modositva, ugyletadat0__1.idugylet, ugyletadat0__1.iddokumentum, ugyletadat0__1.kulcs, ugyletadat0__1.osztaly, ugyletadat0__1.doublevalue, ugyletadat0__1.longvalue, ugyletadat0__1.datevalue, ugyletadat0__1.stringvalue, ugyletadat0__1.stringvalue2, ugyletadat0__1.megjegyzesdomain
        Index Cond: (ugyletadat0__1.idugylet = 16179457::numeric)
        Buffers: shared hit=514
Total runtime: 2.300 ms

What I have noticed:

  1. When I run VACUUM ANALYZE partitioned_table, the plan switches back to the good one almost immediately after the command starts, even though the VACUUM ANALYZE itself runs for about 10 minutes.

  2. The PostgreSQL statistics change:

    SELECT relname, reltuples FROM pg_class WHERE relname like 'ftr_ugyletadat';

The reltuples value always jumps to either 0 (good plan) or about 2.5 million ±10,000 (bad plan).
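
For reference, the planner-side numbers (reltuples, relpages in pg_class) and the statistics collector's counters (n_live_tup, n_dead_tup) can be watched side by side with something like:

SELECT c.relname,
       c.reltuples,   -- row count the planner bases its estimates on
       c.relpages,    -- table size in pages
       s.n_live_tup,  -- statistics collector: live tuples
       s.n_dead_tup   -- statistics collector: dead tuples
FROM pg_class c
JOIN pg_stat_user_tables s ON s.relid = c.oid
WHERE c.relname = 'ftr_ugyletadat';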

I created an index on the base (empty) table, so there is no seq scan on it, only an index scan.

Some posts suggest adding a CHECK constraint to the base table. It is indeed empty, but because of the Hibernate issue we have to insert rows into it too.
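
As I understand it, the idea in those posts is to give the (supposedly empty) parent a CHECK constraint that contradicts the WHERE clause, so constraint exclusion can skip it; something along these lines (hypothetical constraint, and I have not verified that the 9.3 planner really excludes the parent because of it):

-- NO INHERIT (available since 9.2) keeps the constraint off the children.
ALTER TABLE ftr_ugyletadat
    ADD CONSTRAINT chk_parent_is_empty CHECK (idugylet IS NULL) NO INHERIT;

With our Hibernate workaround this is not usable anyway, because every insert must first land in the parent and would violate the constraint.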

  • Why does the reltuples value jump, and why only between 0 and roughly 2.5 million ±10,000, never any other value?
  • What can I do to make this faster?
  • Would it be a good idea to replace the main table with a view plus triggers (roughly the sketch right after this list)?
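
By a view with triggers I mean roughly the following sketch (made-up names; in practice the current parent would presumably be renamed and the view would take its place). An INSTEAD OF INSERT trigger that returns NEW should give Hibernate its row count of 1 without anything ever being written to a real parent table:

CREATE VIEW ugyletadat_v AS
    SELECT * FROM ftr_ugyletadat;  -- still reads the whole inheritance tree

CREATE OR REPLACE FUNCTION ugyletadat_v_insert() RETURNS trigger AS $$
BEGIN
    -- Route straight to the proper child (the real router would pick it from
    -- NEW.idugylet); nothing is written to the parent, so no dead tuples there.
    INSERT INTO ftr_ugyletadat_32 VALUES (NEW.*);
    RETURN NEW;  -- a non-NULL return makes the row count towards "rows affected"
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_ugyletadat_v_insert
    INSTEAD OF INSERT ON ugyletadat_v
    FOR EACH ROW EXECUTE PROCEDURE ugyletadat_v_insert();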

Thanks a lot.

The real plan at a good time:

Hash Left Join  (cost=28.95..24265.05 rows=1357 width=2368) (actual time=1.570..5.750 rows=427 loops=1)
  Output: ugyletadat0_.id, ugyletadat0_.modositva, ugyletadat0_.idugylet, ugyletadat0_.iddokumentum, ugyletadat0_.kulcs, ugyletadat0_.osztaly, ugyletadat0_.doublevalue, ugyletadat0_.longvalue, ugyletadat0_.datevalue, ugyletadat0_.stringvalue, ugyletadat0_.stringvalue2, ugyletadat0_.megjegyzesdomain, dokumentum1_.id, dokumentum1_.modositva, dokumentum1_.idadattartalom, dokumentum1_.iddokumentumsablon, dokumentum1_.nev, dokumentum1_.letrehozasideje, dokumentum1_.leiras, dokumentum1_.meret, dokumentum1_.idugylet, dokumentum1_1_.id, dokumentum1_1_.irat, dokumentum1_1_.idugylet, dokumentum1_1_.iddokumentumtipus, dokumentum1_1_.tomoritett, dokumentum1_1_.idszulodokumentum, dokumentum1_1_.idiktatas, dokumentum1_1_.idszotarstatusz, dokumentum1_1_.dokumentumnev, dokumentum1_1_.formatum, dokumentum1_1_.altipus_megnevezes, dokumentum1_1_.email_kuldes_idopont, dokumentum1_1_.id_feladat, dokumentum1_1_.rejtett, dokumentum1_1_.kezzel_iktatva, dokumentum1_1_.erkeztetoszam, dokumentum1_1_.idgeneralttartalom, dokumentum1_1_.generaltalegyezik, dokumentum2_.id, dokumentum2_.modositva, dokumentum2_.idadattartalom, dokumentum2_.nev, dokumentum2_.dokumentumnev, dokumentum2_.letrehozasideje, dokumentum2_.leiras, dokumentum2_.meret, dokumentum2_.iktatando, dokumentum2_.alairassablon, dokumentum2_.groovyclosure, dokumentum2_.bejovo, dokumentum2_.datasource
  Hash Cond: (dokumentum1_.iddokumentumsablon = dokumentum2_.id)
  Buffers: shared hit=389 read=6
  ->  Nested Loop Left Join  (cost=1.23..24228.66 rows=1357 width=691) (actual time=0.383..3.993 rows=427 loops=1)
        Output: ugyletadat0_.id, ugyletadat0_.modositva, ugyletadat0_.idugylet, ugyletadat0_.iddokumentum, ugyletadat0_.kulcs, ugyletadat0_.osztaly, ugyletadat0_.doublevalue, ugyletadat0_.longvalue, ugyletadat0_.datevalue, ugyletadat0_.stringvalue, ugyletadat0_.stringvalue2, ugyletadat0_.megjegyzesdomain, dokumentum1_.id, dokumentum1_.modositva, dokumentum1_.idadattartalom, dokumentum1_.iddokumentumsablon, dokumentum1_.nev, dokumentum1_.letrehozasideje, dokumentum1_.leiras, dokumentum1_.meret, dokumentum1_.idugylet, dokumentum1_1_.id, dokumentum1_1_.irat, dokumentum1_1_.idugylet, dokumentum1_1_.iddokumentumtipus, dokumentum1_1_.tomoritett, dokumentum1_1_.idszulodokumentum, dokumentum1_1_.idiktatas, dokumentum1_1_.idszotarstatusz, dokumentum1_1_.dokumentumnev, dokumentum1_1_.formatum, dokumentum1_1_.altipus_megnevezes, dokumentum1_1_.email_kuldes_idopont, dokumentum1_1_.id_feladat, dokumentum1_1_.rejtett, dokumentum1_1_.kezzel_iktatva, dokumentum1_1_.erkeztetoszam, dokumentum1_1_.idgeneralttartalom, dokumentum1_1_.generaltalegyezik
        Buffers: shared hit=373 read=6
        ->  Nested Loop Left Join  (cost=0.81..15331.59 rows=1357 width=493) (actual time=0.379..2.802 rows=427 loops=1)
              Output: ugyletadat0_.id, ugyletadat0_.modositva, ugyletadat0_.idugylet, ugyletadat0_.iddokumentum, ugyletadat0_.kulcs, ugyletadat0_.osztaly, ugyletadat0_.doublevalue, ugyletadat0_.longvalue, ugyletadat0_.datevalue, ugyletadat0_.stringvalue, ugyletadat0_.stringvalue2, ugyletadat0_.megjegyzesdomain, dokumentum1_.id, dokumentum1_.modositva, dokumentum1_.idadattartalom, dokumentum1_.iddokumentumsablon, dokumentum1_.nev, dokumentum1_.letrehozasideje, dokumentum1_.leiras, dokumentum1_.meret, dokumentum1_.idugylet
              Buffers: shared hit=364 read=6
              ->  Append  (cost=0.38..3987.37 rows=1357 width=404) (actual time=0.373..1.495 rows=427 loops=1)
                    Buffers: shared hit=352 read=6
                    ->  Index Scan using ix_ftr_ugyletadat_idugylet on public.ftr_ugyletadat ugyletadat0_  (cost=0.38..8.39 rows=1 width=2246) (actual time=0.228..0.228 rows=0 loops=1)
                          Output: ugyletadat0_.id, ugyletadat0_.modositva, ugyletadat0_.idugylet, ugyletadat0_.iddokumentum, ugyletadat0_.kulcs, ugyletadat0_.osztaly, ugyletadat0_.doublevalue, ugyletadat0_.longvalue, ugyletadat0_.datevalue, ugyletadat0_.stringvalue, ugyletadat0_.stringvalue2, ugyletadat0_.megjegyzesdomain
                          Index Cond: (ugyletadat0_.idugylet = 24633477::numeric)
                          Buffers: shared hit=16
                    ->  Index Scan using ix_ugyletadat_kulcs_ugylet_35 on public.ftr_ugyletadat_35 ugyletadat0__1  (cost=0.70..3978.98 rows=1356 width=403) (actual time=0.142..0.769 rows=427 loops=1)
                          Output: ugyletadat0__1.id, ugyletadat0__1.modositva, ugyletadat0__1.idugylet, ugyletadat0__1.iddokumentum, ugyletadat0__1.kulcs, ugyletadat0__1.osztaly, ugyletadat0__1.doublevalue, ugyletadat0__1.longvalue, ugyletadat0__1.datevalue, ugyletadat0__1.stringvalue, ugyletadat0__1.stringvalue2, ugyletadat0__1.megjegyzesdomain
                          Index Cond: (ugyletadat0__1.idugylet = 24633477::numeric)
                          Buffers: shared hit=336 read=6
              ->  Index Scan using pk_dokumentum on public.ftr_dokumentum dokumentum1_  (cost=0.43..8.35 rows=1 width=90) (actual time=0.001..0.001 rows=0 loops=427)
                    Output: dokumentum1_.id, dokumentum1_.modositva, dokumentum1_.idadattartalom, dokumentum1_.iddokumentumsablon, dokumentum1_.nev, dokumentum1_.letrehozasideje, dokumentum1_.leiras, dokumentum1_.meret, dokumentum1_.idugylet
                    Index Cond: (ugyletadat0_.iddokumentum = dokumentum1_.id)
                    Buffers: shared hit=12
        ->  Index Scan using nir_bgadokumentum_pkey on public.nir_bgadokumentum dokumentum1_1_  (cost=0.43..6.55 rows=1 width=198) (actual time=0.001..0.001 rows=0 loops=427)
              Output: dokumentum1_1_.id, dokumentum1_1_.irat, dokumentum1_1_.idugylet, dokumentum1_1_.iddokumentumtipus, dokumentum1_1_.tomoritett, dokumentum1_1_.idszulodokumentum, dokumentum1_1_.idiktatas, dokumentum1_1_.idszotarstatusz, dokumentum1_1_.dokumentumnev, dokumentum1_1_.formatum, dokumentum1_1_.altipus_megnevezes, dokumentum1_1_.email_kuldes_idopont, dokumentum1_1_.id_feladat, dokumentum1_1_.rejtett, dokumentum1_1_.kezzel_iktatva, dokumentum1_1_.erkeztetoszam, dokumentum1_1_.idgeneralttartalom, dokumentum1_1_.generaltalegyezik
              Index Cond: (dokumentum1_.id = dokumentum1_1_.id)
              Buffers: shared hit=9
  ->  Hash  (cost=19.54..19.54 rows=654 width=1677) (actual time=1.143..1.143 rows=692 loops=1)
        Output: dokumentum2_.id, dokumentum2_.modositva, dokumentum2_.idadattartalom, dokumentum2_.nev, dokumentum2_.dokumentumnev, dokumentum2_.letrehozasideje, dokumentum2_.leiras, dokumentum2_.meret, dokumentum2_.iktatando, dokumentum2_.alairassablon, dokumentum2_.groovyclosure, dokumentum2_.bejovo, dokumentum2_.datasource
        Buckets: 1024  Batches: 1  Memory Usage: 95kB
        Buffers: shared hit=13
        ->  Seq Scan on public.ftr_dokumentumsablon dokumentum2_  (cost=0.00..19.54 rows=654 width=1677) (actual time=0.003..0.479 rows=692 loops=1)
              Output: dokumentum2_.id, dokumentum2_.modositva, dokumentum2_.idadattartalom, dokumentum2_.nev, dokumentum2_.dokumentumnev, dokumentum2_.letrehozasideje, dokumentum2_.leiras, dokumentum2_.meret, dokumentum2_.iktatando, dokumentum2_.alairassablon, dokumentum2_.groovyclosure, dokumentum2_.bejovo, dokumentum2_.datasource
              Buffers: shared hit=13
Total runtime: 6.160 ms

And the bad plan:

Hash Left Join  (cost=133873.91..182462.47 rows=13552 width=4027) (actual time=2091.267..2092.158 rows=427 loops=1)
  Output: ugyletadat0_.id, ugyletadat0_.modositva, ugyletadat0_.idugylet, ugyletadat0_.iddokumentum, ugyletadat0_.kulcs, ugyletadat0_.osztaly, ugyletadat0_.doublevalue, ugyletadat0_.longvalue, ugyletadat0_.datevalue, ugyletadat0_.stringvalue, ugyletadat0_.stringvalue2, ugyletadat0_.megjegyzesdomain, dokumentum1_.id, dokumentum1_.modositva, dokumentum1_.idadattartalom, dokumentum1_.iddokumentumsablon, dokumentum1_.nev, dokumentum1_.letrehozasideje, dokumentum1_.leiras, dokumentum1_.meret, dokumentum1_.idugylet, dokumentum1_1_.id, dokumentum1_1_.irat, dokumentum1_1_.idugylet, dokumentum1_1_.iddokumentumtipus, dokumentum1_1_.tomoritett, dokumentum1_1_.idszulodokumentum, dokumentum1_1_.idiktatas, dokumentum1_1_.idszotarstatusz, dokumentum1_1_.dokumentumnev, dokumentum1_1_.formatum, dokumentum1_1_.altipus_megnevezes, dokumentum1_1_.email_kuldes_idopont, dokumentum1_1_.id_feladat, dokumentum1_1_.rejtett, dokumentum1_1_.kezzel_iktatva, dokumentum1_1_.erkeztetoszam, dokumentum1_1_.idgeneralttartalom, dokumentum1_1_.generaltalegyezik, dokumentum2_.id, dokumentum2_.modositva, dokumentum2_.idadattartalom, dokumentum2_.nev, dokumentum2_.dokumentumnev, dokumentum2_.letrehozasideje, dokumentum2_.leiras, dokumentum2_.meret, dokumentum2_.iktatando, dokumentum2_.alairassablon, dokumentum2_.groovyclosure, dokumentum2_.bejovo, dokumentum2_.datasource
  Hash Cond: (dokumentum1_.iddokumentumsablon = dokumentum2_.id)
  Buffers: shared hit=26918
  ->  Hash Right Join  (cost=133846.19..182348.06 rows=13552 width=2350) (actual time=2090.104..2090.457 rows=427 loops=1)
        Output: ugyletadat0_.id, ugyletadat0_.modositva, ugyletadat0_.idugylet, ugyletadat0_.iddokumentum, ugyletadat0_.kulcs, ugyletadat0_.osztaly, ugyletadat0_.doublevalue, ugyletadat0_.longvalue, ugyletadat0_.datevalue, ugyletadat0_.stringvalue, ugyletadat0_.stringvalue2, ugyletadat0_.megjegyzesdomain, dokumentum1_.id, dokumentum1_.modositva, dokumentum1_.idadattartalom, dokumentum1_.iddokumentumsablon, dokumentum1_.nev, dokumentum1_.letrehozasideje, dokumentum1_.leiras, dokumentum1_.meret, dokumentum1_.idugylet, dokumentum1_1_.id, dokumentum1_1_.irat, dokumentum1_1_.idugylet, dokumentum1_1_.iddokumentumtipus, dokumentum1_1_.tomoritett, dokumentum1_1_.idszulodokumentum, dokumentum1_1_.idiktatas, dokumentum1_1_.idszotarstatusz, dokumentum1_1_.dokumentumnev, dokumentum1_1_.formatum, dokumentum1_1_.altipus_megnevezes, dokumentum1_1_.email_kuldes_idopont, dokumentum1_1_.id_feladat, dokumentum1_1_.rejtett, dokumentum1_1_.kezzel_iktatva, dokumentum1_1_.erkeztetoszam, dokumentum1_1_.idgeneralttartalom, dokumentum1_1_.generaltalegyezik
        Hash Cond: (dokumentum1_1_.id = dokumentum1_.id)
        Buffers: shared hit=26905
        ->  Seq Scan on public.nir_bgadokumentum dokumentum1_1_  (cost=0.00..42477.22 rows=1594522 width=198) (actual time=0.003..1028.006 rows=1472449 loops=1)
              Output: dokumentum1_1_.id, dokumentum1_1_.irat, dokumentum1_1_.idugylet, dokumentum1_1_.iddokumentumtipus, dokumentum1_1_.tomoritett, dokumentum1_1_.idszulodokumentum, dokumentum1_1_.idiktatas, dokumentum1_1_.idszotarstatusz, dokumentum1_1_.dokumentumnev, dokumentum1_1_.formatum, dokumentum1_1_.altipus_megnevezes, dokumentum1_1_.email_kuldes_idopont, dokumentum1_1_.id_feladat, dokumentum1_1_.rejtett, dokumentum1_1_.kezzel_iktatva, dokumentum1_1_.erkeztetoszam, dokumentum1_1_.idgeneralttartalom, dokumentum1_1_.generaltalegyezik
              Buffers: shared hit=26532
        ->  Hash  (cost=133676.79..133676.79 rows=13552 width=2152) (actual time=3.282..3.282 rows=427 loops=1)
              Output: ugyletadat0_.id, ugyletadat0_.modositva, ugyletadat0_.idugylet, ugyletadat0_.iddokumentum, ugyletadat0_.kulcs, ugyletadat0_.osztaly, ugyletadat0_.doublevalue, ugyletadat0_.longvalue, ugyletadat0_.datevalue, ugyletadat0_.stringvalue, ugyletadat0_.stringvalue2, ugyletadat0_.megjegyzesdomain, dokumentum1_.id, dokumentum1_.modositva, dokumentum1_.idadattartalom, dokumentum1_.iddokumentumsablon, dokumentum1_.nev, dokumentum1_.letrehozasideje, dokumentum1_.leiras, dokumentum1_.meret, dokumentum1_.idugylet
              Buckets: 2048  Batches: 1  Memory Usage: 61kB
              Buffers: shared hit=370
              ->  Nested Loop Left Join  (cost=255.38..133676.79 rows=13552 width=2152) (actual time=0.327..2.772 rows=427 loops=1)
                    Output: ugyletadat0_.id, ugyletadat0_.modositva, ugyletadat0_.idugylet, ugyletadat0_.iddokumentum, ugyletadat0_.kulcs, ugyletadat0_.osztaly, ugyletadat0_.doublevalue, ugyletadat0_.longvalue, ugyletadat0_.datevalue, ugyletadat0_.stringvalue, ugyletadat0_.stringvalue2, ugyletadat0_.megjegyzesdomain, dokumentum1_.id, dokumentum1_.modositva, dokumentum1_.idadattartalom, dokumentum1_.iddokumentumsablon, dokumentum1_.nev, dokumentum1_.letrehozasideje, dokumentum1_.leiras, dokumentum1_.meret, dokumentum1_.idugylet
                    Buffers: shared hit=370
                    ->  Append  (cost=254.95..31274.87 rows=13552 width=2062) (actual time=0.319..1.468 rows=427 loops=1)
                          Buffers: shared hit=358
                          ->  Bitmap Heap Scan on public.ftr_ugyletadat ugyletadat0_  (cost=254.95..27295.89 rows=12196 width=2246) (actual time=0.247..0.247 rows=0 loops=1)
                                Output: ugyletadat0_.id, ugyletadat0_.modositva, ugyletadat0_.idugylet, ugyletadat0_.iddokumentum, ugyletadat0_.kulcs, ugyletadat0_.osztaly, ugyletadat0_.doublevalue, ugyletadat0_.longvalue, ugyletadat0_.datevalue, ugyletadat0_.stringvalue, ugyletadat0_.stringvalue2, ugyletadat0_.megjegyzesdomain
                                Recheck Cond: (ugyletadat0_.idugylet = 24633477::numeric)
                                Buffers: shared hit=16
                                ->  Bitmap Index Scan on ix_ftr_ugyletadat_idugylet  (cost=0.00..251.90 rows=12196 width=0) (actual time=0.117..0.117 rows=431 loops=1)
                                      Index Cond: (ugyletadat0_.idugylet = 24633477::numeric)
                                      Buffers: shared hit=4
                          ->  Index Scan using ix_ugyletadat_kulcs_ugylet_35 on public.ftr_ugyletadat_35 ugyletadat0__1  (cost=0.70..3978.98 rows=1356 width=403) (actual time=0.069..0.692 rows=427 loops=1)
                                Output: ugyletadat0__1.id, ugyletadat0__1.modositva, ugyletadat0__1.idugylet, ugyletadat0__1.iddokumentum, ugyletadat0__1.kulcs, ugyletadat0__1.osztaly, ugyletadat0__1.doublevalue, ugyletadat0__1.longvalue, ugyletadat0__1.datevalue, ugyletadat0__1.stringvalue, ugyletadat0__1.stringvalue2, ugyletadat0__1.megjegyzesdomain
                                Index Cond: (ugyletadat0__1.idugylet = 24633477::numeric)
                                Buffers: shared hit=342
                    ->  Index Scan using pk_dokumentum on public.ftr_dokumentum dokumentum1_  (cost=0.43..7.55 rows=1 width=90) (actual time=0.001..0.001 rows=0 loops=427)
                          Output: dokumentum1_.id, dokumentum1_.modositva, dokumentum1_.idadattartalom, dokumentum1_.iddokumentumsablon, dokumentum1_.nev, dokumentum1_.letrehozasideje, dokumentum1_.leiras, dokumentum1_.meret, dokumentum1_.idugylet
                          Index Cond: (ugyletadat0_.iddokumentum = dokumentum1_.id)
                          Buffers: shared hit=12
  ->  Hash  (cost=19.54..19.54 rows=654 width=1677) (actual time=1.128..1.128 rows=692 loops=1)
        Output: dokumentum2_.id, dokumentum2_.modositva, dokumentum2_.idadattartalom, dokumentum2_.nev, dokumentum2_.dokumentumnev, dokumentum2_.letrehozasideje, dokumentum2_.leiras, dokumentum2_.meret, dokumentum2_.iktatando, dokumentum2_.alairassablon, dokumentum2_.groovyclosure, dokumentum2_.bejovo, dokumentum2_.datasource
        Buckets: 1024  Batches: 1  Memory Usage: 95kB
        Buffers: shared hit=13
        ->  Seq Scan on public.ftr_dokumentumsablon dokumentum2_  (cost=0.00..19.54 rows=654 width=1677) (actual time=0.004..0.468 rows=692 loops=1)
              Output: dokumentum2_.id, dokumentum2_.modositva, dokumentum2_.idadattartalom, dokumentum2_.nev, dokumentum2_.dokumentumnev, dokumentum2_.letrehozasideje, dokumentum2_.leiras, dokumentum2_.meret, dokumentum2_.iktatando, dokumentum2_.alairassablon, dokumentum2_.groovyclosure, dokumentum2_.bejovo, dokumentum2_.datasource
              Buffers: shared hit=13
Total runtime: 2092.613 ms

Another strange thing: n_live_tup in pg_stat_user_tables is almost constant at about 2.5 million, and sometimes this value gets copied into reltuples.

Yesterday a VACUUM ANALYZE reduced n_live_tup to 500; before that it was 2.5 million. And pg_class.reltuples is always either 0 or equal to this n_live_tup. If that value is low, the plan is fast... Why does this value grow at all? The table has 0 records, although it is true that a lot of inserts and deletes run against it.
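
The vacuum/analyze timestamps in pg_stat_user_tables can at least show when autovacuum or autoanalyze last touched the parent, i.e. when these counters could have been reset; for example:

SELECT relname,
       n_live_tup,
       n_dead_tup,
       last_vacuum,
       last_autovacuum,
       last_analyze,
       last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'ftr_ugyletadat';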


Output of VACUUM ANALYZE VERBOSE:

Code: 0 SQL State: 00000 --- vacuuming "public.ftr_ugyletadat"
Code: 0 SQL State: 00000 --- index "pk_ugyletadat_x" now contains 3502 row versions in 21 pages
Code: 0 SQL State: 00000 --- index "idx_ugyletadat_dok_x" now contains 3502 row versions in 17 pages
Code: 0 SQL State: 00000 --- index "ix_ftr_ugyletadat_idugylet" now contains 3502 row versions in 14 pages
Code: 0 SQL State: 00000 --- "ftr_ugyletadat": found 0 removable, 3502 nonremovable row versions in 71 out of 71 pages
Code: 0 SQL State: 00000 --- vacuuming "pg_toast.pg_toast_463274"
Code: 0 SQL State: 00000 --- index "pg_toast_463274_index" now contains 11 row versions in 2 pages
Code: 0 SQL State: 00000 --- "pg_toast_463274": found 0 removable, 11 nonremovable row versions in 3 out of 3 pages
Code: 0 SQL State: 00000 --- analyzing "public.ftr_ugyletadat"
Code: 0 SQL State: 00000 --- "ftr_ugyletadat": scanned 71 of 71 pages, containing 0 live rows and 3502 dead rows; 0 rows in sample, 0 estimated total rows
Code: 0 SQL State: 00000 --- analyzing "public.ftr_ugyletadat" inheritance tree

So the VACUUM output says it found 0 live rows, while

SELECT relname, n_live_tup FROM pg_stat_user_tables where relname like 'ftr_ugyletadat'

returns

ftr_ugyletadat 3364



Solution 1:

I don't know why, but after running

VACUUM ANALYZE VERBOSE [TABLE]

I have not seen, for 2 days now, the query

SELECT relname, n_live_tup, n_dead_tup, * FROM pg_stat_user_tables WHERE relname LIKE 'TABLE_NAME';

report n_live_tup greater than 0. So the plan stays fast.
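
An untested idea to keep this state without running the command by hand: per-table autovacuum settings on the small, heavily churning parent, so that autovacuum/autoanalyze fire after a fixed number of changed rows instead of a fraction of the table size. Something like:

-- Untested sketch; the thresholds are guesses and would need tuning.
ALTER TABLE ftr_ugyletadat SET (
    autovacuum_vacuum_scale_factor  = 0,
    autovacuum_vacuum_threshold     = 5000,
    autovacuum_analyze_scale_factor = 0,
    autovacuum_analyze_threshold    = 5000
);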

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 László Tóth