Interesting! Reading the headline before the article, my brain immediately thought of "jitter".
I wonder if you could extend the `In-process synchronization` example so that the `CompletableFuture.supplyAsync()` thunk first does a random sleep (where the sleep time is bounded by an informed value based on the expensive query's execution time), then checks the cache again, and only if the cache is still empty does it proceed with the rest of the example code.
That way you (stochastically) get some of the benefits of distributed locking w/o actually having to do distributed locking.
Of course that only works if you're OK adding a bit of extra latency (which should be fine; you're already on the non-hot path) and accepting that there may still be more than one query issued to fill the cache.
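A minimal sketch of what I mean, assuming an in-process `ConcurrentHashMap` cache and a placeholder `expensiveQuery` (both names are illustrative, not from the original example):

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;

public class JitteredCacheFill {
    // Hypothetical in-process cache standing in for the article's example.
    private static final Map<String, String> CACHE = new ConcurrentHashMap<>();

    // Rough upper bound on the jitter, informed by the expensive
    // query's typical execution time (assumed value here).
    private static final long MAX_JITTER_MS = 200;

    static CompletableFuture<String> fillCache(String key) {
        return CompletableFuture.supplyAsync(() -> {
            try {
                // Random sleep bounded by the expected query time.
                Thread.sleep(ThreadLocalRandom.current().nextLong(MAX_JITTER_MS));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            // Re-check: another caller may have filled the cache while we slept.
            String cached = CACHE.get(key);
            if (cached != null) {
                return cached;
            }
            // Cache is still empty; proceed with the expensive query.
            String result = expensiveQuery(key);
            CACHE.put(key, result);
            return result;
        });
    }

    // Placeholder for the real expensive query.
    static String expensiveQuery(String key) {
        return "value-for-" + key;
    }
}
```

The jitter spreads concurrent fill attempts out in time, so most of them find the cache already populated on the re-check and skip the query, which is the stochastic dedup I was gesturing at.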
Northguard doesn’t look like it’s been open sourced? I’d be curious to know how it compares to Apache Pulsar [0]. I feel like I see some similarities reading the LI blog post.
Even loose coupling is still coupling. For the things that have to be coupled, having the code organized in the same place is immensely powerful: being able to easily read the source for "the other side", make a change, and verify that dependents' tests still pass.
Bartosz links to it in the Further Reading section, but I wanted to highlight the Wristwatch Revival YouTube channel[0] as well. Really great content, and very understandable after reading the article!
I would actually phrase that as “LSP optimizes for understanding” (which is of course important for writing code).
For example, when doing code reviews I routinely pull the branch down and look at the diff in context of the rest of the code: “this function changed, who calls it?”, “what other tests are in this file?”, etc. An IDE/LSP is a powerful tool for understanding what is happening in a codebase regardless of author.