Guava Version
33.2.1 and older versions that use asMap() and ComputingValueReference
Description
The bug is caused by a failed loading attempt leaving behind a ComputingValueReference that keeps a reference to the previous loading attempt; repeating this builds a chain of references. When the entry expires and is to be evicted via preWriteCleanup(), the underlying copyEntry() call evaluates the reference chain, and once the number of references in the chain exceeds a threshold, the evaluation throws a StackOverflowError. If other cache keys share the same bucket in the segment and sit ahead of this key in the ReferenceEntry linked list, the accessQueue and the linked list fall out of sync: the access queue points to a new entry, while the reference linked list fails to be updated because of the StackOverflowError. Every future access to that segment then fails with an AssertionError due to the desynchronization.
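To illustrate the mechanism, here is a toy model (not Guava's internal code) of the chained value references described above: each failed "load" wraps the reference retained from the previous attempt, and resolving the newest reference recurses through the whole chain, so a long enough chain must overflow the stack.

```java
public class ChainDemo {
    static final class ChainedRef {
        final ChainedRef previous; // retained reference to the prior loading attempt
        ChainedRef(ChainedRef previous) { this.previous = previous; }
        // Evaluating the chain walks it recursively, one stack frame per link.
        int depth() { return previous == null ? 0 : 1 + previous.depth(); }
    }

    /** Builds a chain of {@code n} references and reports whether evaluating it overflows. */
    static boolean evaluationOverflows(int n) {
        ChainedRef head = null;
        for (int i = 0; i < n; i++) {
            head = new ChainedRef(head); // each failed "load" adds one link
        }
        try {
            head.depth();
            return false;
        } catch (StackOverflowError e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("short chain overflows: " + evaluationOverflows(10));
        System.out.println("long chain overflows:  " + evaluationOverflows(1_000_000));
    }
}
```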
Suggested workaround: move away from asMap(). Consider deprecating this support altogether, since the value-reference chaining is defective by design.
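As a concrete sketch of the workaround, the lookup can go through Cache.get(key, loader) instead of the asMap() compute path. Per the documented Cache contract, a checked exception from the loader surfaces as ExecutionException and no entry is recorded, so repeated failures cannot build a value-reference chain. The names and literal values below (getOrFallback, "loaded", "fallback") are illustrative, not from the report.

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.concurrent.ExecutionException;

public class Workaround {
    private static final Cache<Integer, String> cache =
        CacheBuilder.newBuilder().maximumSize(100).build();

    static String getOrFallback(int key, boolean loaderFails) {
        try {
            // Cache.get(key, loader) rather than cache.asMap().computeIfAbsent(...)
            return cache.get(key, () -> {
                if (loaderFails) {
                    throw new Exception("simulated load failure"); // checked exception
                }
                return "loaded";
            });
        } catch (ExecutionException e) {
            return "fallback"; // loader failed; no entry was cached for this key
        }
    }

    public static void main(String[] args) {
        System.out.println(getOrFallback(1, true));  // repeated failures are safe
        System.out.println(getOrFallback(1, false)); // a later load still succeeds
    }
}
```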
Example
```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.time.Duration;
import org.junit.jupiter.api.Assertions;

public void reproduceStackOverFlow() throws InterruptedException {
    long expireAfterAccess = 10000L;
    Cache<Integer, String> cache = CacheBuilder.newBuilder()
        .expireAfterAccess(Duration.ofMillis(expireAfterAccess))
        .build();
    int key = 100;
    int maxChainLength = 65536; // tested with a 2MB JVM stack size; this number may need to be higher with larger stack size settings
    for (int i = 0; i < maxChainLength; i++) {
        try {
            cache.asMap().computeIfAbsent(key, (k) -> {
                // simulate a loading error; can happen when there is no value for a key or a service call fails
                throw new RuntimeException();
            });
        } catch (Exception e) {
            // ok, eat it
        }
    }
    // let the entry expire
    Thread.sleep(expireAfterAccess);
    try {
        cache.asMap().computeIfAbsent(key, (k) -> "foobar");
        Assertions.fail();
    } catch (Error error) {
        error.printStackTrace();
        if (!(error instanceof StackOverflowError)) {
            Assertions.fail();
        }
    }
}
```
Sounds like maybe this line needs to be wrapped in a try-catch that calls removeLoadingValue if an exception is thrown; it currently retains the failed loading attempt. I saw something like this in #5438. I would generally avoid the compute methods in Guava's cache, since the cache was not designed for them and their addition was fairly problematic. You might consider using Caffeine instead.
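For reference, a minimal sketch of what the Caffeine alternative might look like (assuming the `com.github.ben-manes.caffeine:caffeine` dependency). Caffeine's `get(key, mappingFunction)` is backed by ConcurrentHashMap-style compute semantics: an exception thrown by the mapping function propagates to the caller and no entry is retained. The names and values here are illustrative.

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import java.time.Duration;

public class CaffeineSketch {
    private static final Cache<Integer, String> cache = Caffeine.newBuilder()
        .expireAfterAccess(Duration.ofMillis(10_000))
        .build();

    static String load(int key, boolean fail) {
        try {
            // On failure the exception propagates and nothing is cached,
            // so repeated failed loads leave no dangling loading reference.
            return cache.get(key, k -> {
                if (fail) {
                    throw new IllegalStateException("simulated load failure");
                }
                return "loaded";
            });
        } catch (IllegalStateException e) {
            return "fallback";
        }
    }

    public static void main(String[] args) {
        System.out.println(load(1, true));  // fallback: failed load cached nothing
        System.out.println(load(1, false)); // a later load still succeeds
    }
}
```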
Expected Behavior
The cache remains usable and its entries can be managed normally after failed loads.
Actual Behavior
Cache access fails with a StackOverflowError, and subsequent accesses to the segment fail with an AssertionError.
Packages
com.google.common.cache
Platforms
No response
Checklist
I agree to follow the code of conduct.
I can reproduce the bug with the latest version of Guava available.