The key idea here seems to be: if you're grouping on a column of a related table, you can do your main aggregation by grouping on the foreign key id in the primary table, and use that as a proxy for the column on the related table that you're actually grouping by.
In the examples given, it’s much faster, but is that mostly due to the missing indexes? I’d have thought that an optimal approach in the colour example would be to look at the product.color_id index, get the counts directly from there and you’re pretty much done.
I have a feeling that Postgres doesn’t make that optimisation (I’ve looked before, but that was on an older Postgres). And I guess depending on the aggregation maybe it’s not useful in the general case. Maybe in this new world it _can_ make that optimisation?
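To make that concrete, here's the manual version of the proxy-grouping rewrite I mean (table and column names are just my guesses, not necessarily the post's actual schema):

  -- aggregate by the foreign key first, join for the name afterwards
  SELECT c.name, agg.cnt
  FROM (
      SELECT color_id, count(*) AS cnt
      FROM product
      GROUP BY color_id   -- this part could be an index-only scan on product (color_id)
  ) agg
  JOIN color c ON c.id = agg.color_id;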
Anyway, as ever, pg just getting faster is always good.
> In the examples given, it’s much faster, but is that mostly due to the missing indexes? I’d have thought that an optimal approach in the colour example would be to look at the product.color_id index, get the counts directly from there and you’re pretty much done.
So I tried to test this (my intuition being that indexes wouldn't change much; at best you could do an index scan instead of a seq scan), and I couldn't understand the plans I was getting, until I realized that the query in the blog post has a small error:
> AND c1.category_id = c1.category_id
should really be
> AND p.category_id = c1.category_id
otherwise we're doing a cross product with the category table. It probably doesn't change the overall conclusion much, but still a bit of an oopsie. Anyway, even with the right join condition, an index only reduces execution time by about 20% in my tests, via an index scan.
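For reference, the corrected query I ended up testing looked roughly like this (I'm reconstructing table and column names from the two snippets above; only the category join condition is verbatim from the post):

  SELECT c1.name AS category, c2.name AS color, count(*)
  FROM t_product p
  JOIN t_category c1 ON p.category_id = c1.category_id   -- the fixed condition
  JOIN t_color    c2 ON p.color_id    = c2.color_id
  GROUP BY c1.name, c2.name;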
That is part of the key idea, yes. It's more elaborate, because it can split the aggregate - do part of it before the join, and finalize it after the join - similar to what we do for parallel queries.
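As a hand-written illustration of the split (just the SQL equivalent, not the actual plan; avg() is a nice case because the partial state is visibly a sum and a count, and I'm guessing the table/column names here):

  -- partial aggregation below the join ...
  SELECT c.name,
         sum(p.s)::numeric / sum(p.n) AS avg_price   -- ... finalized above it
  FROM (
      SELECT color_id, sum(price) AS s, count(price) AS n
      FROM t_product
      GROUP BY color_id
  ) p
  JOIN t_color c ON c.color_id = p.color_id
  GROUP BY c.name;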
As for indexes, they can help, but not in this particular example - the "code" tables are tiny, and the planner adds Memoize nodes anyway, so they act like an ad hoc index.
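If someone wants to check how much the Memoize caching contributes, it can be disabled per session - a sketch, with the table names guessed:

  -- compare the plan with and without the ad hoc caching
  EXPLAIN ANALYZE
  SELECT c1.name, count(*)
  FROM t_product p
  JOIN t_category c1 ON p.category_id = c1.category_id
  GROUP BY c1.name;

  SET enable_memoize = off;   -- then run the same EXPLAIN again
  RESET enable_memoize;       -- and restore the default afterwards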
Indexes are more of a complementary improvement than an alternative to this optimization (i.e. neither makes the other unnecessary). FWIW, in this case the indexes won't help very much - if you put more data in the code tables, it'll use a hash join, not a nested loop / merge join.
That doesn't mean we couldn't do better with indexes, there probably are smart execution strategies for certain types of queries. But indexes also come with quite a bit of overhead (even in read-only workloads).
If I understand the suggestion correctly, the hoped-for plan would be:
1. Index-only scans on t_product.{category,color} indices, counting each value
2. Lookups of the names of those values in their parent tables, generating the output rows
If so, I suspect there are two reasons why it might not do that:
1. Given the relatively small size of the t_product table (23 bytes of header + 1 byte of padding + int4 + int4 + 16 bytes of text + [I think] 1 byte of varlena overhead = 49 bytes/row), it will be fairly well bin-packed into pages on the heap, consuming roughly 170 pages (assuming the default 8 KiB page size and the default fillfactor of 100%). That trivially fits into a single segment file on disk, and makes for a very easy sequential scan (the page count is easy to sanity-check, see below).
2. If it does a sequential scan on the heap, it doesn’t have to check the Visibility Map, because it already has that information in the heap itself, which avoids a second (albeit small) lookup.
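For what it's worth, the page-count guess in reason 1 can be checked straight from the catalog (assuming the table really is called t_product):

  -- planner's view of the table size, to verify the ~170 pages estimate
  SELECT relpages, reltuples,
         pg_size_pretty(pg_relation_size('t_product')) AS heap_size
  FROM pg_class
  WHERE relname = 't_product';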
Happy for someone who knows more about Postgres to correct me if I’m wrong, though!
> In the examples given, it’s much faster, but is that mostly due to the missing indexes?
You're saying “the missing indexes” as if you could add indexes for every join you're ever doing, and as if that would be faster than a hash join. For many systems, that's neither feasible nor very performant; and depending on selectivity, a hash join would often be better than an index lookup anyway.
The biggest win from early aggregation is that you can reduce the number of rows significantly before you go join in other things (which would be a win even in nested-loop index lookup joins; smaller joins are nearly always better along every axis).
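If you want to see that tradeoff on the post's example (schema names guessed), you can force the planner away from the hash join and compare:

  SET enable_hashjoin = off;   -- push the planner toward nested loop / merge join
  EXPLAIN ANALYZE
  SELECT c.name, count(*)
  FROM t_product p
  JOIN t_color c ON p.color_id = c.color_id
  GROUP BY c.name;
  RESET enable_hashjoin;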