Allow terms to run as filters #68871
Conversation
This adds yet another terms aggregator that uses `term` filters to run at a speed similar to the specialized low cardinality terms aggregator. It is mostly useful as a stepping stone for some additional optimizations that we can make later. So it doesn't have to be faster on its own. Just not *slower*. And it's about the same speed.
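To make that concrete, here is a toy Lucene sketch of what "running `terms` as filters" means: one `TermQuery` per distinct value, each counted like a filter. This is an illustration with invented field names, not the actual aggregator code.

```
import java.io.IOException;
import java.util.List;

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;

public class TermsAsFiltersSketch {
    public static void main(String[] args) throws IOException {
        Directory dir = new ByteBuffersDirectory();
        try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig())) {
            for (String color : List.of("red", "red", "blue", "green", "red")) {
                Document doc = new Document();
                doc.add(new StringField("color", color, Field.Store.NO));
                writer.addDocument(doc);
            }
        }
        try (DirectoryReader reader = DirectoryReader.open(dir)) {
            IndexSearcher searcher = new IndexSearcher(reader);
            // The "terms as filters" idea: instead of one pass that looks up
            // the ordinal of every document, run one cheap term filter per
            // distinct value and count the matches.
            for (String value : List.of("red", "blue", "green")) {
                int count = searcher.count(new TermQuery(new Term("color", value)));
                System.out.println(value + " => " + count);
            }
        }
    }
}
```

The trade is visible in the sketch: the filter approach does work proportional to the number of distinct values, which is why it only makes sense at low cardinality.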
Pinging @elastic/es-analytics-geo (Team:Analytics)
Whether or not this is a performance increase is kind of interesting. When I build the index at the same time as I run the queries it looks like it is a substantial speed up in the low cardinality case:
with a small slow down in the high cardinality case:
Edit: I fixed this slow down before committing. See #68871 (comment). But when you run the tests alone it isn't a big change at all:
So in my mind this isn't a performance improvement. Not really. There are too many ghosts in those numbers. But it's sort of not important because in a follow up we can use term statistics to speed up the
assertEquals("d", result.getBuckets().get(4).getKeyAsString()); | ||
assertEquals(1L, result.getBuckets().get(4).getDocCount()); | ||
assertTrue(AggregationInspectionHelper.hasValue(result)); | ||
}, fieldType); |
Since I had to do some large-ish changes to this anyway I decided to move it to our normal `testCase` method.
```
    }
    return doc;
}

public void testStringIncludeExclude() throws Exception {
```
The low cardinality aggregator doesn't kick in at all if there is include/exclude. But the filter thing loves it.
```
 * generally faster when possible but takes more memory because we have
 * to build the filters.
 */
static final long MAX_ORDS_TO_TRY_FILTERS = 1000;
```
I've just pulled this number out of a hat. My wild guess is we spend 300 bytes on each ord to use filters. I'll bet we could make `filters` use less memory when we're building it internally. But maybe we bump it then?
I'd leave a TODO here to tune this number, or a comment that it isn't currently well chosen, or something. I'd hate to come back to this some months from now and think we had a good reason for picking it that we've since forgotten.
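Concretely, the suggestion amounts to something like this sketch (hypothetical helper name, not the actual patch):

```
class FiltersThresholdSketch {
    /**
     * Maximum number of ordinals for which we'll try running the terms
     * aggregation as filters. Filters are generally faster when possible
     * but take more memory - a wild guess is ~300 bytes per ordinal.
     * TODO: this number was pulled out of a hat; tune it with benchmarks.
     */
    static final long MAX_ORDS_TO_TRY_FILTERS = 1000;

    // Hypothetical guard showing how the threshold would gate the rewrite.
    static boolean shouldTryFilters(long distinctOrdinals) {
        return distinctOrdinals <= MAX_ORDS_TO_TRY_FILTERS;
    }
}
```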
Two of those errors look kind of bad. Will look soon.
Ooooh! One of those failures found a performance issue! Neat.
And the others found a bug in the optimizations that
I've opened #68930 to fix the failure.
Turns out normalizers are fine.
My latest fixes have removed the slow down in the high cardinality case and found a modest improvement:
@not-napoleon, this one should be ready for you! The last failures I saw on it were problems building the bwc artifact so I'm inclined to think those'll go away on their own.
Maybe we should host that locally?
@elasticmachine, retest this please
Few nits, looks good
```
@@ -40,14 +40,14 @@ public AdaptingAggregator(
      * agg tree. This is how it has always been and some aggs rely on it.
      */
     this.delegate = delegate.apply(subAggregators.fixParent(this));
-    assert this.delegate.parent() == parent : "invalid parent set on delegate";
+    assert this.delegate == null || this.delegate.parent() == parent : "invalid parent set on delegate";
```
Why is this an assert instead of an actual check and exception? Is the performance cost of checking this in production that high?
I think the `this.delegate == null` condition requires some explanation here. It basically produces a non-functional adapter, since we never check whether delegate is null and it is final.
I originally added it as an assertion because I figured it'd only fail because of a programming error. But it can be a hard check.
And you are right about the null dance being weird. I'll have another look soon and see if I can avoid it. I like the simplicity of `buildOrNull` kinds of methods but it comes at a funny cost, that is for sure.
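A hard-check version would look something like this minimal sketch (the `Aggregator` interface here is a stand-in, not the real class):

```
// Minimal sketch of the assert promoted to a hard check.
class HardCheckSketch {
    interface Aggregator {
        Aggregator parent();
    }

    static void checkDelegate(Aggregator delegate, Aggregator parent) {
        // This runs once per aggregator at construction time, so promoting
        // the assert to a real check costs essentially nothing in production.
        if (delegate != null && delegate.parent() != parent) {
            throw new IllegalStateException("invalid parent set on delegate");
        }
    }
}
```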
```
@@ -907,5 +921,5 @@ private void oversizedCopy(BytesRef from, BytesRef to) {
     /**
      * Predicate used for {@link #acceptedGlobalOrdinals} if there is no filter.
      */
-    private static final LongPredicate ALWAYS_TRUE = l -> true;
+    static final LongPredicate ALWAYS_TRUE = l -> true;
```
Nit: This seems like a weird place for a very generic predicate like this to live.
It is a bit, but it's sort of central to how the thing knows to optimize. Usually in Java you don't care and just do `l -> true` inline without the constant. I'm not super happy having the constant at all but it speeds things up somewhat to take the `ALWAYS_TRUE` route.
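A minimal sketch of why the named constant matters (hypothetical caller, not the real aggregator): an inline `l -> true` allocates a fresh lambda object each time, so a reference check could never detect it.

```
import java.util.function.LongPredicate;

class AlwaysTrueSketch {
    static final LongPredicate ALWAYS_TRUE = l -> true;

    // Because ALWAYS_TRUE is a shared constant, callers can recognize
    // "no include/exclude filter" with an identity check and skip the
    // per-ordinal predicate test entirely.
    static long countAccepted(long[] ords, LongPredicate accepted) {
        if (accepted == ALWAYS_TRUE) {
            return ords.length; // fast path: nothing is filtered out
        }
        long count = 0;
        for (long ord : ords) {
            if (accepted.test(ord)) {
                count++;
            }
        }
        return count;
    }
}
```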
One of the tests that I added in #68871 worked about 99.7% of the time but on some seeds failed to generate the right buckets, because on those seeds we would collect each segment with its own aggregator and get bad counts. This "bad counts" problem is known in the terms aggregator - it's the price we pay for distributed work. But we can work around it either by forcing all the docs into a single segment or by collecting all of the buckets on the shard. We want to test the code path where we don't collect all buckets on the shard, so we opt for the former. Closes #69413
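In Lucene terms, "forcing all the docs into a single segment" looks roughly like this sketch (hypothetical test helper, not the actual test code):

```
import java.io.IOException;

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;

class SingleSegmentSketch {
    // Collapse the index to one segment so a single aggregator sees every
    // document and the per-segment "bad counts" problem can't show up.
    static Directory buildSingleSegmentIndex(Iterable<String> values) throws IOException {
        Directory dir = new ByteBuffersDirectory();
        try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig())) {
            for (String value : values) {
                Document doc = new Document();
                doc.add(new StringField("field", value, Field.Store.NO));
                writer.addDocument(doc);
            }
            writer.forceMerge(1); // one segment, one aggregator, stable counts
        }
        return dir;
    }
}
```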
This speeds up the `terms` agg in a very specific case:

1. It has no child aggregations
2. It has no parent aggregations
3. There are no deleted documents
4. You are not using document level security
5. There is no top level query
6. The field has global ordinals
7. There are less than one thousand distinct terms

That is a lot of restrictions! But the speed up is pretty substantial because in those cases we can serve the entire aggregation using metadata that lucene precomputes while it builds the index. In a real rally track we have, we get a 92% speed improvement, but the index isn't *that* big:

```
| 90th percentile service time | keyword-terms-low-cardinality | 446.031 | 36.7677 | -409.263 | ms |
```

In a rally track with a larger index I ran some tests by hand and the aggregation went from 2200ms to 8ms.

Even though there are 7 restrictions on this, I expect it to come into play enough to matter. Restriction 6 just means you are aggregating on a `keyword` field. Or an `ip`. And it's fairly common for `keyword`s to have less than a thousand distinct values. Certainly not everywhere, but some places. I expect "cold tier" indices are very very likely not to have deleted documents at all. And the optimization works segment by segment - so it'll save some time on each segment without deleted documents. But more time if the entire index doesn't have any.

The optimization builds on #68871 which translates `terms` aggregations against low cardinality fields with global ordinals into a `filters` aggregation. This teaches the `filters` aggregation to recognize when it can get its results from the index metadata. Rather, it creates the infrastructure to make that fairly simple and applies it in the case of the queries generated by the terms aggregation.
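The "metadata that lucene precomputes" is essentially per-term document frequencies: with no deletions and no top-level query, a term's bucket count is just its `docFreq`. A toy Lucene sketch of that idea (not the actual Elasticsearch code):

```
import java.io.IOException;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.MultiTerms;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.util.BytesRef;

class MetadataCountsSketch {
    // When there are no deleted docs, no top-level query, and no document
    // level security, each term's doc count can be read straight from the
    // index metadata instead of visiting documents.
    static void printCountsFromMetadata(IndexReader reader, String field) throws IOException {
        Terms terms = MultiTerms.getTerms(reader, field);
        if (terms == null || reader.hasDeletions()) {
            return; // fall back to actually collecting documents
        }
        TermsEnum termsEnum = terms.iterator();
        for (BytesRef term = termsEnum.next(); term != null; term = termsEnum.next()) {
            System.out.println(term.utf8ToString() + " => " + termsEnum.docFreq());
        }
    }
}
```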
This allows many of the optimizations added in #63643 and #68871 to run on aggregations with sub-aggregations. This should:

* Speed up `terms` aggregations on fields with less than 1000 values that also have sub-aggregations. Locally I see 2 second searches run in 1.2 seconds.
* Apply that same speedup to `range` and `date_histogram` aggregations, though it feels less impressive because the point range queries are a little slower to get up and go.
* Massively speed up `filters` aggregations with sub-aggregations that don't have a `parent` aggregation or collect "other" buckets. Also save a ton of memory while collecting them.
When the `terms` agg is at the top level it can run as a `filters` agg instead because that is typically faster. This was added in #68871 and we mistakenly made it so that a bucket without any hits could take up a slot on the way back to the coordinating node. You could trigger this by having a fairly precise `size` on the terms agg and a top level filter. This fixes the issue by properly mimicking the regular terms aggregator in the "as filters" version: only send back buckets without any matching documents if the min_doc_count is 0. Closes #70449
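The fix boils down to a pruning rule like this sketch (hypothetical names, not the actual patch):

```
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

class MinDocCountSketch {
    // A bucket with zero matching documents may only occupy one of the
    // `size` slots sent back to the coordinating node when min_doc_count
    // is 0, matching what the regular terms aggregator does.
    static List<Map.Entry<String, Long>> bucketsToSend(Map<String, Long> counts, long minDocCount) {
        List<Map.Entry<String, Long>> buckets = new ArrayList<>();
        for (Map.Entry<String, Long> bucket : counts.entrySet()) {
            if (bucket.getValue() > 0 || minDocCount == 0) {
                buckets.add(bucket);
            }
        }
        return buckets;
    }
}
```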
@nik9000 I'm observing a performance regression in production, going from ~9 ms to ~170 ms for queries that use StringTermsAggregatorFromFilters instead of GlobalOrdinalsStringTermsAggregator. How can I help provide data to reproduce this in a benchmark?
@RubieV Sorry about that! We found it a few weeks ago and I never linked it into all the places. Check out the long term fix at #74260. There's a short term fix in 7.13.2 as well: #73620. I'd certainly be curious if #74260 doesn't fix it for you. My goal with #73620 was to give folks a "big off switch" so they could avoid the anti-optimization without having to perform a full version upgrade. My goal with #74260 was to hopefully obsolete the setting introduced in #73620. I won't remove the setting for a while though, just in case.