Internal: Filter cache size limit not honored for 32GB or over #6268
Hi,

Here is how the filter cache is built in Elasticsearch:

private void buildCache() {
CacheBuilder<WeightedFilterCache.FilterCacheKey, DocIdSet> cacheBuilder = CacheBuilder.newBuilder()
.removalListener(this)
.maximumWeight(sizeInBytes).weigher(new WeightedFilterCache.FilterCacheValueWeigher());
// defaults to 4, but this is a busy map for all indices, increase it a bit
cacheBuilder.concurrencyLevel(16);
if (expire != null) {
cacheBuilder.expireAfterAccess(expire.millis(), TimeUnit.MILLISECONDS);
}
cache = cacheBuilder.build();
}

In the Guava libraries, the eviction code is as follows:

void evictEntries() {
if (!map.evictsBySize()) {
return;
}
drainRecencyQueue();
while (totalWeight > maxSegmentWeight) {
ReferenceEntry<K, V> e = getNextEvictable();
if (!removeEntry(e, e.getHash(), RemovalCause.SIZE)) {
throw new AssertionError();
}
}
}

Since a 32GB limit split across 16 segments gives each segment a budget of 2^31, which overflows the 32-bit per-segment weight accounting, the condition while (totalWeight > maxSegmentWeight) will always fail, so nothing is ever evicted and the cache grows without bound.
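To make the arithmetic concrete, the following standalone sketch (illustrative only, not Guava source; it assumes, as the snippet above suggests, that per-segment weight is tracked in a 32-bit int) shows why a 32GB limit with 16 segments and 1 weight unit per byte can never trigger eviction, while a limit just under 32GB can:

public class SegmentWeightOverflowSketch {
    public static void main(String[] args) {
        long maxWeight = 32L * 1024 * 1024 * 1024; // 32GB, 1 weight unit per byte
        int segmentCount = 16;                     // cacheBuilder.concurrencyLevel(16)

        long maxSegmentWeight = maxWeight / segmentCount; // 2_147_483_648 = 2^31

        // A 32-bit weight counter tops out at Integer.MAX_VALUE (2^31 - 1) and
        // then wraps negative, so it can never climb above maxSegmentWeight.
        int totalWeight = Integer.MAX_VALUE;

        System.out.println("maxSegmentWeight        = " + maxSegmentWeight);
        System.out.println("largest int totalWeight = " + totalWeight);
        System.out.println("eviction triggers?      = " + (totalWeight > maxSegmentWeight)); // false

        // A limit just under 32GB keeps the per-segment budget below 2^31,
        // which is why a 31.9GB limit is honored.
        long clampedWeight = maxWeight - 100L * 1024 * 1024;
        System.out.println("clamped per-segment     = " + (clampedWeight / segmentCount));
    }
}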
Wow, good catch! I think it would make sense to file a bug with Guava?
Indeed! I'd file that with Guava, but also clamp the size of the cache in Elasticsearch to 32GB - 1 for the time being. As an aside, I imagine 96GB heaps cause super long pause times on HotSpot.
+1
I've got the code open and have a few free moments so I can work on it if no one else wants it. |
That works for me, feel free to ping me when it's ready and you want a review.
Huge ++! @danp60, when you file the bug in Guava, can you link back to it here?
I imagine you've already realized it, but the workaround is to force the cache size under 32GB.
Indeed. I think that's not too bad a workaround though, since I would expect such a large filter cache to be quite wasteful compared to leaving the memory to the operating system so that it can do a better job with the filesystem cache.
@kimchy I've filed the guava bug here: https://code.google.com/p/guava-libraries/issues/detail?id=1761&colspec=ID%20Type%20Status%20Package%20Summary |
@danp60 Thanks!
Guava's caches have overflow issues around 32GB with our default segment count of 16 and weight of 1 unit per byte. We give them 100MB of headroom so 31.9GB. This limits the sizes of both the field data and filter caches, the two large guava caches. Closes #6268
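A minimal sketch of the clamping approach this commit message describes; the class and method names below are hypothetical, not the actual Elasticsearch change:

public final class GuavaCacheSizeClamp {
    private static final long ONE_MB = 1024L * 1024;
    private static final long ONE_GB = 1024L * ONE_MB;

    // 32GB minus 100MB of headroom, i.e. roughly 31.9GB, so the per-segment
    // weight stays below 2^31 when split across Guava's 16 cache segments.
    static final long MAX_SAFE_WEIGHT_BYTES = 32L * ONE_GB - 100L * ONE_MB;

    private GuavaCacheSizeClamp() {}

    /** Returns the requested cache size in bytes, capped at the Guava-safe maximum. */
    static long clamp(long requestedBytes) {
        return Math.min(requestedBytes, MAX_SAFE_WEIGHT_BYTES);
    }
}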
The bug has been fixed upstream.
17.0 and earlier versions of Guava were affected by the following bug: https://code.google.com/p/guava-libraries/issues/detail?id=1761 which caused caches that are configured with weights greater than 32GB to actually be unbounded. This is now fixed. Relates to #6268. Close #7593
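A rough way to sanity-check the upgraded Guava (an illustrative sketch, not an actual Elasticsearch test) is to build a cache whose maximumWeight is above 32GB, use a weigher that over-reports entry weights so the check does not need a huge heap, and confirm that eviction still bounds the entry count:

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.Weigher;

public class LargeWeightEvictionCheck {
    public static void main(String[] args) {
        long maxWeight = 33L * 1024 * 1024 * 1024;            // above the 32GB trouble spot
        final int reportedWeightPerEntry = 256 * 1024 * 1024; // each entry claims 256MB

        Cache<Integer, Integer> cache = CacheBuilder.newBuilder()
                .concurrencyLevel(16)
                .maximumWeight(maxWeight)
                .weigher(new Weigher<Integer, Integer>() {
                    @Override
                    public int weigh(Integer key, Integer value) {
                        // Over-report the weight so the check does not need 33GB of heap.
                        return reportedWeightPerEntry;
                    }
                })
                .build();

        for (int i = 0; i < 200; i++) { // ~50GB of reported weight in total
            cache.put(i, i);
        }

        // On a Guava release with the fix, eviction keeps the reported weight
        // near maxWeight (roughly 130 entries here); on an affected release
        // all 200 entries would be retained.
        System.out.println("entries retained: " + cache.size());
    }
}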
Hi,
We are running a 6-node Elasticsearch 1.1.1 cluster with 256GB of RAM, using 96GB JVM heap sizes. I've noticed that when I set the filter cache size to 32GB or over with this command:
The filter cache size keeps growing above and beyond the indicated limit. The relevant node stats show that the filter cache is about 69GB in size, which is over the configured limit of 48GB.
I've enabled debug logging on the node itself, and it looks like the cache is getting created with the correct values:
What's strange is that when I set the limit to 31.9GB, the limit is enforced, which leads me to believe there is some sort of overflow going on.
Thanks,
Daniel