I've solved most of my problems by following these guidelines: one worktree for each code domain, using git sparse-checkout to limit the context (e.g. one worktree for the Rust core, one for Swift macOS, one for Swift iOS, etc.), and putting all the rules in claude.md (or agents.md). This way I get "containerization", lower context, and faster search across the codebase. After that, I only install the skills that actually matter for the context.
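If it helps, here's roughly what that setup looks like in plain git commands; the worktree paths, branch names, and directory names below are just placeholders for illustration:

    # one worktree per code domain, each on its own branch with a narrow sparse-checkout
    git worktree add -b rust-core ../proj-rust-core main
    cd ../proj-rust-core
    git sparse-checkout init --cone
    git sparse-checkout set core              # only the Rust core is checked out here

    git worktree add -b swift-macos ../proj-swift-macos main
    cd ../proj-swift-macos
    git sparse-checkout init --cone
    git sparse-checkout set apps/macos        # only the macOS app sources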
Maybe it's just conservation of angular momentum: if you look closely, you can see that the spin axis and the way she applies the force aren't perfectly tangential to the object's axis, so the lower point of the T-shape is "bouncing" between many positions. If I'm not mistaken, this is explained by the "intermediate axis theorem".
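For what it's worth, the instability that theorem describes falls straight out of Euler's torque-free equations. A quick sketch, with principal moments I_1 < I_2 < I_3 (the middle one being the "intermediate" axis):

    I_1 \dot{\omega}_1 = (I_2 - I_3)\,\omega_2\,\omega_3
    I_2 \dot{\omega}_2 = (I_3 - I_1)\,\omega_3\,\omega_1
    I_3 \dot{\omega}_3 = (I_1 - I_2)\,\omega_1\,\omega_2

Linearizing about a spin \omega_2 \approx \Omega around the intermediate axis gives

    \ddot{\omega}_1 = \frac{(I_2 - I_3)(I_1 - I_2)}{I_1 I_3}\,\Omega^2\,\omega_1

and that coefficient is positive, so small perturbations grow exponentially (the flip); about the largest or smallest axis the corresponding coefficient is negative and you just get a small wobble.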
I'm sad because you explained it so well. I was expecting a life-changing explanation and a lot of discussion around it, but the simplicity and truth of your answer annihilated the conundrum. I can now picture the body having a kind of oscillation in its angular momentum around some mean position. Thank you.
Probably a naive question, but: couldn't you precompute some vector representation of the string once, and reduce collation to a vector comparison? Basically move the cost upfront and get back to the "fast" byte-comparison case?
Well, that's sort of like what the "sort key" of a given string is: a vector of its nonzero primary-level collation weights, then secondaries, then tertiaries... And once you know what collation options you're using (tailoring etc.), you could just compute the sort key of each string as you encounter it and cache that. Then you would be able to reach a collation decision rapidly for any two strings whose sort keys you already have. It does boil down to vector comparison at that point.
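A rough Rust sketch of that idea; the (primary, secondary, tertiary) weight tuples and the function names are simplified stand-ins for illustration, not the actual API of any implementation:

    use std::cmp::Ordering;

    // Build a simplified sort key: nonzero primary weights, a level
    // separator, then secondaries, then tertiaries.
    fn sort_key(weights: &[(u16, u16, u16)]) -> Vec<u16> {
        let mut key: Vec<u16> = weights.iter().map(|w| w.0).filter(|&p| p != 0).collect();
        key.push(0); // level separator
        key.extend(weights.iter().map(|w| w.1).filter(|&s| s != 0));
        key.push(0);
        key.extend(weights.iter().map(|w| w.2).filter(|&t| t != 0));
        key
    }

    // With the keys precomputed and cached, collation really is just a vector comparison.
    fn collate(a_key: &[u16], b_key: &[u16]) -> Ordering {
        a_key.cmp(b_key) // lexicographic compare of the weight vectors
    }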
I experimented with adding an LRU cache to my Rust UCA implementation, but saw essentially no performance benefit on the workloads I had, so I decided the feature wasn't worth the complexity and removed it.
Something I found about Unicode collation is that, once the fast paths are added, they get hit a surprisingly large percentage of the time. I'm thinking in particular of the way that performant UCA implementations build sort keys lazily, stopping once a collation decision is reached. The average "point of difference" is at the primary level and within a few characters of the start of each string. Only a small portion of the sort keys ever get built.
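A minimal sketch of that early exit, assuming you can produce the weights lazily as iterators (the parameter names here are made up, and the tertiary level is omitted for brevity):

    use std::cmp::Ordering;

    // Compare primaries first, stopping at the first difference; only if
    // they're entirely equal do we go on to the secondary level. Because
    // Iterator::cmp short-circuits, most comparisons consume just a few
    // weights and never materialize a full sort key.
    fn compare_lazily(
        a_primaries: impl Iterator<Item = u16>,
        b_primaries: impl Iterator<Item = u16>,
        a_secondaries: impl Iterator<Item = u16>,
        b_secondaries: impl Iterator<Item = u16>,
    ) -> Ordering {
        match a_primaries.cmp(b_primaries) {
            Ordering::Equal => a_secondaries.cmp(b_secondaries), // the rare path
            decided => decided,
        }
    }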
I am definitely interested in finding more ways to avoid work in the collation routine. Many times, I've had what I thought was a clever idea and found that it didn't pan out in benchmarks. Thank you for your comment!
Ah, OK, the lazy construction part is what I was missing: if you basically never build the full key, there's nothing to cache. It makes sense now why the LRU didn't help. I'll think about it over the next few days.
Emacs is powerful, but maintaining a custom trust layer adds complexity that could easily become a maintenance burden for average users. Worth considering, but the friction is significant.