Most people's mental model of Claude Code is that "it's just a TUI" but it should really be closer to "a small game engine".
For each frame our pipeline constructs a scene graph with React, then
-> lays out the elements
-> rasterizes them to a 2D screen
-> diffs that against the previous screen
-> finally uses the diff to generate ANSI sequences to draw
We have a ~16ms frame budget, which leaves roughly 5ms to go from the React scene graph to ANSI written out.
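The diff-to-ANSI step can be sketched in a few lines of shell (a toy illustration, not Claude Code's actual implementation; the `prev`/`next` arrays and the loop are made up for this example):

```shell
# Toy illustration of diff-based terminal rendering: compare the previous
# frame's lines to the next frame's and emit ANSI sequences -- cursor move
# (CSI row;1H), clear line (CSI 2K), then the new text -- only for rows
# that actually changed. Requires bash (arrays).
prev=("hello" "world")
next=("hello" "there")

out=""
for row in "${!next[@]}"; do
  if [ "${prev[row]-}" != "${next[row]}" ]; then
    out+=$'\e['"$((row + 1))"$';1H\e[2K'"${next[row]}"
  fi
done
printf '%s' "$out" | cat -v   # only row 2 is redrawn: ^[[2;1H^[[2Kthere
```

Unchanged rows cost nothing on the wire; that's the whole point of diffing against the previous screen instead of repainting everything each frame.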
This is just the sort of bloated overcomplication I often see in first-iteration AI-generated solutions before I start pushing back to reduce the complexity.
Usually, after 4-5 iterations, you can get something that has shed 80-90% of the needless complexity.
My personal guess is that this is inherent in how LLMs integrate knowledge during training: there's always a tradeoff between contextualization and generalization.
So the initial response is often a hack stitched together from 5 different approaches; your pushback provides the focus and constraints that steer it toward more internally consistent solutions.
Ok I’m glad I’m not the only one wondering this. I want to give them the benefit of the doubt that there is some reason for doing it this way but I almost wonder if it isn’t just because it’s being built with Claude.
Counterpoint: Vim has existed for decades and does not use a bloated React rendering pipeline. It doesn't corrupt everything when it gets resized, it's far more full-featured from a UI standpoint than Claude Code (which is a textbox), and it hits 60fps without breaking a sweat, unlike Claude Code, which drops frames constantly when typing small amounts of text.
Yes, I'm sure it's possible to do better with customized C, but Vim took a lot longer to write. And again, fullscreen apps aren't the same as what Claude Code is doing, which is erasing and re-rendering much more than a single screenful of text.
It's possible to handle resizes without all this machinery, most simply by clearing the screen and redrawing everything when a resize occurs. Some TUI libraries will automatically do this for you.
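A minimal sketch of that clear-and-redraw approach (assuming bash; `draw` is a stand-in for whatever paints your UI):

```shell
# Clear-and-redraw on resize: no diffing machinery, just repaint
# everything whenever SIGWINCH arrives.
draw() {
  printf '\e[2J\e[H'                           # clear screen, home cursor
  printf 'cols=%s rows=%s\n' "${COLUMNS:-80}" "${LINES:-24}"
}

trap draw WINCH    # the terminal driver sends SIGWINCH on every resize
draw               # initial paint; a real app would then enter its event loop
```

It flashes on resize, but it's simple and it never leaves the screen in a corrupted state.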
Programs like top, emacs, tmux, etc. are most definitely not implemented using this stack, yet they handle resizing just fine.
That doesn't work if you want to preserve scrollback behavior, I think. It only works if you treat the terminal as a grid of characters rather than a width-elastic column into which you pour information from the top.
Yes, yes, I'm familiar with the tweet. Nonetheless, they drop frames all the time and flicker frequently. The tweet itself is ridiculous when counterpoints like Vim exist, with far higher performance despite far greater UI complexity. They don't even write much of what the tweet claims: they just use Ink, an open-source rendering library on top of Yoga, an open-source Flexbox implementation from Meta.
What? Technology has stopped making sense to me. Drawing a UI with React and rasterizing it to ANSI? Are we competing to see what the least appropriate use of React is? Are they really using React to draw a few boxes of text on screen?
There is more than meets the eye for sure. I recently compared a popular TUI library in Go (Bubble Tea) to the most popular Rust library (Ratatui). They use significantly different approaches for rendering. From what I can tell, neither is insane. I haven’t looked to see what Claude Code uses.
As a point of comparison, the Radxa Orion O6 shipped a year ago as a 12-core ARMv9 board in the same form factor and TDP, for $100 less, with 5x the single-core performance (and including a competent iGPU, NPU, and VPU). These are very much developer/tinkerer-only boards as is.
You can still get the 32GB variant direct from Radxa off AliExpress for about $412 USD. That's well above the launch price (probably due to RAM costs), but it's actually still cheaper than the Milk-V Titan. The Titan ships with empty RAM slots, and you'd wind up paying another ~$200 for 2x16GB of DDR4, which is slower than the soldered-on LPDDR5 on the Orion.
If you don't need the full PCIe slot, there are also smaller boards (either exactly mini-ITX or a similar footprint) using the same SoC: the OrangePi 6 Plus and Radxa O6N. The OPi is readily available for $260 with 32GB LPDDR5, though admittedly the RAM is lower spec than what ships on the Orion O6.
These are my favorite sub $500 SBCs available today. Great Linux support and you need to move up to something like the LattePanda Sigma at over $600 to get something outclassing them in a similar form-factor, but then there's no option for directly slotting a full-size GPU either.
Did you not bother to check or are prices being segregated by market?
I'm in the US and when I visit the Radxa AliExpress listing (both yesterday and just now) the 32 GB variant is priced at $644 (includes import fees) plus $24 shipping. The 16 GB variant is $547 and the 8 GB is $479.
The OPi6+ 32 GB is $300 from Amazon locally or $415 plus shipping from AliExpress. (The irony of referring to Amazon as local is not lost on me.)
O6N is over $400 without any installed RAM from what I could find (making it $600 minimum given current RAM prices).
I'd say the inverse is true: OLEDs are the best in low light, as they generally dim well and black means zero illumination of the pixel. Author is ill-informed. Also, OLED burn-in is a non-issue with current displays in any normal situation (e.g. not a kiosk or arcade or other sort of always-on static dashboard).
No, those are generally not always-on in normal use. People let screens shut off, open up fullscreen apps, etc. Most OLED firmwares also have subtle pixel shifting and pixel refresh on shutdown routines, as well as very conservative brightness settings. OLEDs in normal use are actually less susceptible to color shift deterioration than LCDs in normal use.
Believe me, I have read that post/comment quite a few times. There are actually Pi 4 HATs for sale for audiophiles (who seem to believe you need a precisely 54.000000 MHz system clock, or whatever it is for the Pi 4 (the Pi 3 is 19.2 MHz), for optimal audio) that have an OCXO on them. But in another comment I said I'm not sure my soldering skills are that good.
Adafruit sells a DS3231 module with standard 2.54mm pin headers. I'm using one on a Beaglebone Black that I set up as my NTP/PTP server (using work based on your work from 2021, so thank you!). No soldering required.
I don't have heaps of experience or the steadiest hands, but I'd be comfortable doing a mod like this cleanly now. One good tip is to get your work piece in a position where you can securely rest the blade of your hand on the table or something secure. You want to minimize the leverage and distance between a secure rest point and your work tip.
So say I have a 4TB USB SSD from a few years ago, that's been sitting unpowered in a drawer most of that time. How long would it need to be powered on (ballpark) for the full disk refresh to complete? Assume fully idle.
(As a note: I do have a 4TB USB SSD which sat in a drawer untouched for a couple of years. The data was all fine when I plugged it back in. Of course, this was a new drive with very low write cycles, stored climate-controlled. An older, worn-out drive would probably have been an issue.) Just wondering how long I should keep it plugged in if I ever have a situation like that, so I can "reset the fade clock", so to speak.
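For a ballpark, a refresh is bounded by one full sequential read of the device, so capacity divided by sustained read speed (assuming a host-side read even triggers a refresh, which is controller-dependent). With a guessed ~400 MB/s over USB 3:

```shell
# Back-of-envelope: one full sequential pass over a 4 TB drive at an
# assumed ~400 MB/s sustained USB-3 read speed (both figures are guesses).
bytes=$((4 * 10**12))
rate=$((400 * 10**6))                 # bytes per second
secs=$((bytes / rate))
printf '%s seconds (~%s hours)\n' "$secs" "$((secs / 3600))"
# -> 10000 seconds (~2 hours)
```

So on the order of a few hours plugged in and idle, not days.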
the most basic solution that will work for every filesystem and every type of block device without even mounting anything, but won't actually check much except device-level checksums:
sudo pv -X /dev/sda
or even just:
sudo cat /dev/sda >/dev/null
and it's pretty inefficient if the device doesn't actually have much data, because it also reads (and discards) empty space.
for copy-on-write filesystems that store checksums along with the data, you can request proper integrity checks and also get the nicely formatted report about how well that went.
for btrfs:
sudo btrfs scrub start -B /
or zfs:
sudo zpool scrub -a -w
for classic (non-copy-on-write) filesystems that mostly consist of empty space I sometimes do this:
sudo tar -cf - / | cat >/dev/null
the `cat` and redirection to /dev/null is necessary because GNU tar contains an optimization that doesn't actually read anything when it detects /dev/null as the target.
Just as a note (and I checked that it's not the case with GNU coreutils): on some systems, cp (and maybe cat) would mmap() the source file. When the output is the devnull driver, no read occurs because its write function does nothing. So using a pipe (or dd) may be a good idea in all cases (I did not check the current BSDs).
Yeah, I’ve used variations of the “get frontier models to cross-check and refine each other’s work” pattern for years now, and it really is the path to the best outcomes in situations where you would otherwise hit a wall or miss important details.
It’s my approach in legal as well. Claude formulates its draft, then it prompts Codex and Gemini for theirs. Claude then makes recommendations for edits to its draft based on the others’. Gemini’s plan is almost always the worst, but even it frequently has at least one good point to make.
The name is in keeping with a lineage of animal tools for ad hoc page manipulation in Firefox. First was Aardvark, then Platypus. https://github.com/dvogel/AardvarkDuex
I was an early user of Aardvark. These tools have remained obscure but with a cult following, because they’re such a quick and easy way to rip up a page to your liking. They were the direct inspiration for modern browser DOM selector tools.
For hairy edge cases, uBlock Origin’s element picker is the gold standard for manipulating pages.
I use the "Nerdy" tone along with the Custom Instructions below to good effect:
"Please do not try to be personal, cute, kitschy, or flattering. Don't use catchphrases. Stick to facts, logic, reasoning. Don't assume understanding of shorthand or acronyms. Assume I am an expert in topics unless I state otherwise."