I will confess to skimming by the end. But I don’t think they explained how they solved the cache issue except to say they rewrote the software in Rust, which is pretty vague.
Was all the code they rewrote originally in Lua? So was it just a matter of moving from a dynamic language with pointer-heavy data structures to a static language with value types and more control over memory layout? Or was there something else going on?
The gains in lower memory footprint and lower demands on memory bandwidth from rewriting stuff to Rust are very real, and they're going to matter a lot with DRAM prices being up 5x or more. It doesn't surprise me at all that they would be getting these results.
I guess I am a sucker for stories about redesigning data structures, and I'd have liked more detail on that front. Also, they talked about Rust's greater memory safety; it would have been nice to know whether specific language features played into the cache difference, or whether the safety guarantees simply made the authors comfortable using a systems language in this application, and that comfort made the difference.
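To make the pointer-heavy vs. value-type distinction concrete, here is a minimal Rust sketch (the `PointVal` type is hypothetical, not from the article). An array of structs is one contiguous block, while an array of boxes is a row of pointers into scattered heap allocations, which roughly mirrors how a dynamic runtime stores records:

```rust
// Hypothetical record type: in Rust this is a flat 16-byte value;
// in a dynamic language each field would typically be a heap object
// reached through a pointer.
struct PointVal {
    x: f64,
    y: f64,
}

fn main() {
    // Four points by value: one contiguous 64-byte block on a typical
    // 64-bit target, friendly to cache lines and prefetchers.
    let by_value: [PointVal; 4] =
        core::array::from_fn(|i| PointVal { x: i as f64, y: i as f64 });
    assert_eq!(std::mem::size_of_val(&by_value), 4 * 16);

    // The same points boxed: the array itself holds only four 8-byte
    // pointers, and each point lives in its own heap allocation --
    // the layout a pointer-heavy runtime resembles.
    let boxed: [Box<PointVal>; 4] =
        core::array::from_fn(|i| Box::new(PointVal { x: i as f64, y: i as f64 }));
    assert_eq!(std::mem::size_of_val(&boxed), 4 * 8);

    // Iterating the by-value array touches memory sequentially.
    let sum: f64 = by_value.iter().map(|p| p.x + p.y).sum();
    println!("{sum}"); // prints 12
}
```

Whether this layout difference alone explains the article's numbers is exactly the open question; the sketch just shows the mechanism being speculated about.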
> Scribus is an open source project but I’m not sure how the quality is in that.
I am not a typographer, and I’ve never used it in a professional capacity, but v1.6 (early 2024) improved Scribus a lot. I’ve used it and liked it for some personal projects for years, but the improved typography in 1.6 is big.
Years ago I wrote a toy Lisp implementation in Objective-C, ignoring Apple’s standard library and implementing my own class hierarchy. At that point it was basically standard C plus Smalltalk object dispatch, and it was a very cool language for that type of project.
I haven’t used it in Apple’s ecosystem, so maybe I am way off base here. But it seems to me that it was Apple’s effort to evolve the language away from its systems roots into a more suitable applications language that caused all the ugliness.
> I really miss the days of the fairness-doctrine.
There are so many ways to game the system, whom do you trust to enforce it? I don’t trust my own “side” to do so, and I sure as heck don’t trust the other side.
I think that diff algorithms have more in common with traditional, “lower” textual criticism than with the sort of source criticism canjobear is pondering.
It’s interesting that they’re organized by date. On an intuitive level, that makes sense. But so many of the dates are hotly debated, and reorganizing the list would produce such a different impression, that it’s a surprising choice.
I am not a scholar of such things, but a quick glance at the documents I am familiar with suggests that the date ranges represent uncertainty within the compiler’s point of view. That’s reasonable, but when it’s linked out of context it’s not immediately obvious that it doesn’t reflect the range of debate in the broader secular scholarship, let alone secular and conservative religious scholarship taken together. So caveat lector.
That said, the breadth of documents linked here is really impressive.
Historical documents should really have four dates:
1) Oldest full manuscript to be carbon dated (or similarly rigorous scientific dating)
2) Oldest fragments to be carbon dated
3) Oldest citations
4) Estimated date from internal factors within the text
The first three methods would serve as objective bounds on the latest possible date of composition, and the latter would give a more accurate, if subjective, view.
Units based on base 12 or base 2, as U.S. standard measures tend to be, are easier to divide in many ways.
Now if we used base 12 numbers instead of base 10, and we had a system of units based on that, I bet we’d have the best of both worlds. No idea if Napoleon could have imposed base 12 arithmetic on most of Europe the way he did metric, though.
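The divisibility claim is easy to check: 12 splits evenly six ways where 10 splits only four, which is why halves, thirds, and quarters all come out clean in a base-12 unit system. A quick sketch:

```rust
// List every whole-number divisor of n.
fn divisors(n: u32) -> Vec<u32> {
    (1..=n).filter(|d| n % d == 0).collect()
}

fn main() {
    // 12 divides evenly by 1, 2, 3, 4, 6, and 12.
    assert_eq!(divisors(12), vec![1, 2, 3, 4, 6, 12]);
    // 10 divides evenly only by 1, 2, 5, and 10.
    assert_eq!(divisors(10), vec![1, 2, 5, 10]);
    // The payoff in base-12 arithmetic: 1/3 terminates (0.4 in base 12)
    // where it repeats forever in base 10 (0.333...).
}
```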
I realize you are tongue in cheek, but I hope people respect the logical limits of this sort of thing.
Years ago, there were some development tools coming out of the Ruby world – SASS for sure, and Vagrant if I remember correctly – whose standard method of installation was via a Ruby gem. Ruby on Rails was popular, and I am sure that for the initial users this had almost zero friction. But the tools began to be adopted by non-Ruby-devs, and it was frustrating. Many Ruby libraries had hardcoded file paths that didn’t jibe with your distro’s conventions, and they assumed newer versions of Ruby than existed in your package repos. Since then I have seen the same issue crop up with PHP and server-side JavaScript software.
It’s less of a pain today because you can spin up a container or VM and install a whole language ecosystem there, letting it clobber whatever it wants to clobber. But it’s still nicer when everything respects the OS’s local conventions.
Could you explain your reasoning? I don’t see any moral difference between deliberately limiting compatibility from the peripheral side and doing so from the “computer” side (i.e., iPhone, iPad, Macintosh). One type of device may produce more inadvertent incompatibilities than the other, but that’s different.
Besides, I think this will create surprise and confusion for less technical users. In my experience, many will blame the incompatibility on whichever device is new, without understanding who is gating out whom. And even for technical users, consider CarPlay and Android Auto: From the phone’s perspective, the car is a peripheral, and that makes sense; but lots of people will still consider the car the “core device.”
Besides being a neat font in its own right, Iosevka allows for custom builds with different settings, selection from a wide range of glyph variants, and custom ligature choices. It's pretty incredible.
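For anyone curious what such a custom build looks like: Iosevka's source tree reads a `private-build-plans.toml` from the repo root, and each named plan picks the spacing, serif style, and per-glyph variants. A sketch (the plan name and the specific choices here are made up):

```toml
# private-build-plans.toml -- "MyIosevka" is a hypothetical plan name
[buildPlans.MyIosevka]
family  = "My Iosevka"
spacing = "term"        # terminal-style spacing variant
serifs  = "sans"

[buildPlans.MyIosevka.variants.design]
g = "single-storey-serifless"   # choose one of the glyph variants
```

You then build just that plan with something like `npm run build -- ttf::MyIosevka` from the Iosevka checkout.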