It's not about prompting; it's about planning and reviewing the plan before implementing. I sometimes spend days iterating on the specification alone, then creating an implementation roadmap, and then finally iterating on the implementation plan before writing a single line of code. Just like any formal development pipeline.
I started doing this a while ago (months) precisely because of issues like the ones described.
On the other hand, analyzing prompts and deviations isn't that complex... just ask Claude :)
That is generally my experience as well. Claude half-assing work or skipping stuff because it "takes too much time" is something I've been experiencing since I started using it (May 2025). Forcing it to create and review an implementation plan, and then reviewing the implementation cross-referenced with the plan, almost always produces consistent results in my case.
Rule of thumb: it's not. Common stuff like address randomization is a recent default, AFAIK it still doesn't have random process IDs, and the base permissions aren't stellar. However, I would prefer jails any day of the week over the clusterf** that is namespaces and cgroups.
Right, because Linux security == the init system used by some distros. My experience with FreeBSD may be somewhat dated (I've used it since the 4.x days and provided commercial support for it for more than 15 years), and that is not my experience - at all. Obviously, it depends on the threat model you are considering and how far you want to go. The default install does not have (or did not have) sane security defaults, at least compared to your random $ystemd Linux distro; try installing both, give local shell access to a red team, and see how fast they get root.
I don't want to be that guy, but ZFS on FreeBSD was kind-of-experimental until around 2012; AFAIK in 2012 (FreeBSD 9) root on ZFS was a manual process, and not that easy to upgrade. Root on ZFS is somewhat "recent" - it took years to get into the installer. My company at the time was basically a BSD shop (OpenBSD and FreeBSD), and the OpenSolaris (12?) version was still quite ahead. Still have my OpenSolaris "first 5000" t-shirt :)
> In some cases we have even seen crashes in non-memory instructions (e.g. MOV ZR, R1), which implicates misexecution: a fault in the CPU (or a bug in the telemetry bookkeeping, I suppose).
That's the thing. Bit flips impact everything memory-resident - and that includes program code. You have no way of telling what instruction was actually read at execution time for the line your instrumentation says corresponds to the MOV; or it may have been a legitimate memory operation, with the instrumentation reporting the wrong offset. There are some ways around it, but - generally - if a system runs a program bigger than the processor cache and may have bit flips, the output is useless, including whatever telemetry you use (because the telemetry itself is code executed from RAM and will touch RAM).
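To make this concrete, here's a toy sketch (a hypothetical two-byte-per-instruction ISA, purely illustrative - not any real processor's encoding) of how a single bit flip in the *code* bytes silently turns one instruction into another, so both the program's output and the instruction trace the telemetry records describe a program that was never written:

```python
# Hypothetical toy ISA, purely illustrative: 1-byte opcode + 1-byte operand.
OPCODES = {0x01: "LOAD", 0x02: "ADD", 0x03: "STORE"}

def run(program):
    acc, mem, trace = 0, {}, []
    for pc in range(0, len(program), 2):
        op, arg = program[pc], program[pc + 1]
        name = OPCODES.get(op, "ILLEGAL")
        trace.append((pc, name, arg))        # what crash telemetry would record
        if name == "LOAD":
            acc = arg
        elif name == "ADD":
            acc += arg
        elif name == "STORE":
            mem[arg] = acc
        else:
            raise RuntimeError("illegal opcode %#x at pc=%d" % (op, pc))
    return acc, mem, trace

clean = bytes([0x01, 5, 0x02, 7, 0x03, 0])   # LOAD 5; ADD 7; STORE [0]
flipped = bytearray(clean)
flipped[2] ^= 0x01                           # one bit flip: ADD (0x02) -> STORE (0x03)
```

Running `flipped` doesn't crash at all: it just stores the wrong values, and the trace faithfully reports a STORE where the source had an ADD - the telemetry is telling the truth about a program nobody wrote.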
Because it's SRAM, and it can still lose its electrons because we're working with cells a few atoms thick? The loss is not necessarily in L1 (where data is replaced frequently), but in L3, which now has a capacity comparable to PC main memory in the early 2000s (and can have data "stuck" in the same physical area for minutes).
ECC is traditionally slower and quite a bit more complex, and it doesn't completely eliminate the problem (most implementations correct 1 bit per word and detect 2 bits per word). It makes sense when environmental factors such as flaky power, temperature, or RF interference can be easily ruled out - such as in a server room. But yeah, I agree with you, as ECC solves something like 99% of the cases.
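For the curious, the "correct 1 bit / detect 2 bits" behaviour is the classic Hamming SECDED scheme. A minimal Python sketch of the idea (illustrative only - this is not how the memory controller implements it in silicon):

```python
def secded_encode(data, k=8):
    """Hamming SECDED: k data bits, parity bits at power-of-two
    positions, plus one overall parity bit stored at index 0."""
    r = 0
    while (1 << r) < k + r + 1:    # number of Hamming parity bits needed
        r += 1
    n = k + r
    code = [0] * (n + 1)
    bits = [(data >> i) & 1 for i in range(k)]
    j = 0
    for pos in range(1, n + 1):
        if pos & (pos - 1):        # not a power of two -> data position
            code[pos] = bits[j]
            j += 1
    for i in range(r):             # parity bit p covers positions with bit i set
        p = 1 << i
        x = 0
        for pos in range(1, n + 1):
            if pos & p:
                x ^= code[pos]
        code[p] = x
    for pos in range(1, n + 1):    # overall parity over the whole word
        code[0] ^= code[pos]
    return code

def secded_decode(code):
    code = list(code)
    n = len(code) - 1
    syndrome = 0
    for pos in range(1, n + 1):
        if code[pos]:
            syndrome ^= pos        # XOR of positions of set bits
    overall = 0
    for b in code:
        overall ^= b
    if syndrome == 0 and overall == 0:
        status = "ok"
    elif overall == 1:             # odd parity -> single-bit error, correctable
        code[syndrome] ^= 1        # syndrome 0 means the overall bit itself flipped
        status = "corrected"
    else:                          # nonzero syndrome, even parity -> two flips
        return "double-error", None
    data, j = 0, 0
    for pos in range(1, n + 1):
        if pos & (pos - 1):
            data |= code[pos] << j
            j += 1
    return status, data
```

A single flipped bit decodes back to the original byte; two flips are flagged as uncorrectable, which is exactly the 1-correct/2-detect trade-off described above.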
Thing is, every reported bug can be a bit flip. In some cases you can actually have successful execution, but bit flips in the instrumentation report errors that don't exist.
The amount of overhead a few bits of ECC has is basically a rounding error, and even then, the only time the hardware is really doing extra work is when bit errors occur and correction has to happen.
The main overhead is simply the extra RAM required to store the extra bits of ECC.
ECC is "slower" because it is bought by smart people who expect their memory to load the stored value, rather than by children who demand racing stripes on their DIMMs.
The actual RAM chips on an ECC DIMM are exactly the same as on a non-ECC DIMM; there are just an extra 1/2/4 chips to extend to 72-bit words.
The main reason ECC RAM is slower is that it's not (by default) overclocked to the edge of stability - the JEDEC standard speeds are used.
The other much smaller factors are:
* The tREFI parameter (refresh interval) is usually set so ECC RAM refreshes twice as often, so that it handles high-temperature operation.
* The register chip buffers the command/address/control/clock signals, adding a clock of latency to every command (<1ns, much smaller than the typical memory latency you'd measure from the memory controller).
* ECC calculation (AMD states 2 UMC cycles, <1ns).
ECC keeps your bits safe from random flips to a ridiculously large factor. You can run the memory at high consumer speeds, giving up some of that safety margin, while still being more reliable than everything else in your computer.
And there's non-random bit errors that can hit you at any speed, so it's not like going slow guarantees safety.
ECC is actually slower. The hardware that verifies every transaction is correct does add a slight delay, but nothing compared to the delay of working on corrupted data.
Going to be downvoted, but I call bullshit on this. Bit flips are frequent (and yes, ECC is an improvement but does not solve the problem), but not that frequent. One can either assume users who enabled telemetry are an odd bunch with flaky hardware, or that the implementation isn't actually detecting bit flips (potentially, as the messages indicate) but a plethora of problems. Having a 1/10 probability that a given struct is either processed wrong, parsed wrong, or saved wrong would have pretty severe effects in many, many scenarios - from image editing to CAD. Also, bit flips on flaky hardware don't choose protection rings - they would also affect OS routines such as reading/writing to devices and everything else that touches memory. Yup, I've seen plenty of faulty-RAM systems (many WinME crashes were actually caused by defective RAM sticks that would run fine with W98); it doesn't choose browsers or applications.
You should look at about:crashes and see if there's any commonality in the causes, or bugs associated with them (though often bugs won't be associated with the crash if it isn't filed from crash-stats or doesn't have the crash signature in the bug).
Maybe you should check your memory? I recently started to get quite a lot of Firefox crashes, and definitely contributed to this statistic. In the end, the problem was indeed memory - the crashes stopped after I tuned down some of the timings. And I had used this RAM for a few years with my original settings (XMP profile) without issue.
I experience them on several different devices. On my main device, I have hundreds of Chrome tabs and often many workloads running that would be completely corrupted by random bit flips. I'm not discarding the possibility of faulty RAM completely; I just take the measurement in the tweet with a huge grain of salt - after all, I still remember when the FF team constantly denied - for more than half a decade - that the browser had serious memory leak problems, so it's not like there isn't a history of pointing at other causes for FF crashes.
How can you possibly be this confident if you don't know the number of times Firefox was run and number of bug reports submitted? Say it's run 100,000,000 times, 1000 reports are submitted, and 10 are bit flips. Seems reasonable. You're misinterpreting what they are saying.
10% of 1000 isn't 10; it's 100. And no, it's not reasonable - the main reason being that you cannot reliably tell whether something is a bit flip remotely, because bit flips affect both code and data. Also, 10% of a semi-obscure specific category of failures seems to indicate that the population submitting crash reports isn't random enough. I'm a layman in statistics, but this doesn't seem correct, at least not without concrete details on the kinds of bugs being reported and the methodology used. Claiming 10% and being able to demonstrate 10% are different things - and the tweet thread indicates that this is clickbait - something along the lines of "may potentially be a bit flip". Well, every error may be a bit flip.
Quick answer: No.
Long answer: it's the opposite. As an example, one can use Claude Code to generate, build, and debug ESP32 code for a given purpose; suddenly everyone can build smart gizmos without having to learn C/C++ and know a ton of libraries.
I have Arduino and Raspberry Pi boards. I am perfectly capable of hand-writing code that runs on these machines. But they are sitting in the drawer gathering dust, because I don't have a use case - everything I could possibly do with them is either not actually useful on a daily basis, or there are much better and more reliable solutions for the actual issue. I literally spent hours going through other people's projects (most of which are very trivial), and decided that I have better things to do with my time. Lots and lots of people have the same issue.
And Claude Code is not going to change a single bit of that.
So, because you don't see value in it, you assume it's the same for everyone. Got it.
Also, it's not about whether there are better or more reliable options; that's the opposite of the maker mentality - you do it because it is useful, it is fun, or just because you enjoy doing it.
Such as designing a light fixture, printing it, and illuminating it with an ESP32 and some WS2812 LEDs. Yeah, you could spend an afternoon coding color transitions. Or use Claude Code for that.
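For a sense of scale, the kind of color fade involved is only a few lines - a minimal sketch (plain linear RGB interpolation; real WS2812 code would additionally push these values out over the wire):

```python
def lerp_color(c0, c1, t):
    """Linearly interpolate between two (r, g, b) tuples, t in [0, 1]."""
    return tuple(round(a + (b - a) * t) for a, b in zip(c0, c1))

def transition(c0, c1, steps):
    """List of intermediate colors fading from c0 to c1."""
    return [lerp_color(c0, c1, i / (steps - 1)) for i in range(steps)]
```

This is exactly the kind of boilerplate that is fun the first time and tedious the fifth, which is where delegating it pays off.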
If you are vibe coding something it's not for the experience of learning, challenging yourself, or accomplishment - it's purely about the end result, the artefact. So asking "what is the purpose of this thing" is actually quite relevant in respect to vibe coding.
Coding is just one of many possible skills you use as a maker; do you think everyone into 3D printing is an STM32 programmer or designs and manufactures their own PCBs? Of course not. Software is just a component. If your kick is software, great; but it doesn't need to be. Also, just because you use an LLM doesn't mean you're not learning; how do you think you learned to speak?
I think the reality is that the maker movement slowed down not because it’s hard to learn c++ but because people don’t care enough. Will maybe twice as many people participate now? Sure. But that’ll still be a small fraction of people.
I don't think it has slowed down; in fact, I think it has grown in the last few years. Sure, it is a niche - and probably always will be - but one has never had such a low barrier of entry to build stuff and be creative; you have plenty of very powerful chips, somewhat usable SDKs, a ton of ready-to-use COTS components ranging from GPS sensors to rotary encoders, and you can design your own PCBs and order them cheap from China; you can also design enclosures and 3D print parts in your own home with precision that was only accessible to specialized companies 15 years ago. LLMs are a great help, not only on the code generation part but also on the design part - as an example, I sometimes use ChatGPT to generate OpenSCAD functions, and it isn't half bad.
Not sure I see it like that. Micropython removes most of the rough edges of doing embedded C.
If you prefer no code then I suggest ESPHome for your ESP IoT projects.
The other day I built a quick PoC to control 1024 RGB LEDs using RMT (ESP32) and a custom protocol I was developing. I'm pretty sure MicroPython would suck for that.
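For context on what RMT is doing here: driving WS2812-style LEDs boils down to expanding every bit of pixel data into a high/low pulse pair that the RMT peripheral replays with tight timing. A Python sketch of that expansion step (the tick values assume a hypothetical 10 MHz RMT clock, i.e. 0.1 us per tick; the actual tolerances come from the LED datasheet):

```python
# Assumed timings in RMT ticks at a hypothetical 10 MHz clock (0.1 us/tick).
T0H, T0L = 4, 8   # "0" bit: ~0.4 us high, ~0.8 us low
T1H, T1L = 8, 4   # "1" bit: ~0.8 us high, ~0.4 us low

def pixel_to_symbols(g, r, b):
    """Expand one GRB pixel into 24 (high_ticks, low_ticks) pulse pairs."""
    symbols = []
    for byte in (g, r, b):            # WS2812 expects GRB order, MSB first
        for i in range(7, -1, -1):
            bit = (byte >> i) & 1
            symbols.append((T1H, T1L) if bit else (T0H, T0L))
    return symbols
```

At 1024 LEDs that's 24,576 precisely timed pulses per frame - the kind of workload where an interpreter's per-bit overhead hurts, which is the point about MicroPython above.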
The other day I also developed an RGB-to-RGBW converter using an RP2040; Claude did most of the assembly, so instead of taking a couple of days, it took a couple of hours.
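The core of an RGB-to-RGBW conversion is the common white-extraction trick - a sketch of the idea in Python (illustrative; not the actual RP2040 assembly mentioned above):

```python
def rgb_to_rgbw(r, g, b):
    """Move the common component of R, G, B onto the dedicated white channel."""
    w = min(r, g, b)                  # the white light already present in the mix
    return r - w, g - w, b - w, w
```

Using the dedicated white LED for the shared component gives cleaner whites and lower power than mixing R+G+B at full.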
I don't prefer no-code; my point is that software is a barrier on embedded systems, and if I - someone who can actually program in C/C++, Python, and assembly - see huge benefits in using LLMs, for someone at the entry level it is a life changer.
If you're using a Pico, you can use PIO to get a bit more power. (I use it to control stepper motors with a smooth accel/decel ramp. It's doable with RMT, but not as easy.)
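An accel/decel ramp of the kind mentioned boils down to computing a per-step delay table that the PIO (or a timer) then plays back. A minimal sketch (trapezoidal profile, linear in a speed factor rather than true constant acceleration; all parameters are made up):

```python
def trapezoid_delays(total_steps, min_delay_us, max_delay_us, ramp_steps):
    """Per-step delays: ramp from max_delay (slow) down to min_delay (fast),
    cruise, then ramp back up for deceleration."""
    delays = []
    for i in range(total_steps):
        if i < ramp_steps:                    # accelerating
            f = i / ramp_steps
        elif i >= total_steps - ramp_steps:   # decelerating
            f = (total_steps - 1 - i) / ramp_steps
        else:                                 # cruising at full speed
            f = 1.0
        delays.append(round(max_delay_us - (max_delay_us - min_delay_us) * f))
    return delays
```

Precomputing the table keeps the timing-critical loop trivial, which is exactly what makes it a good fit for PIO.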
Sure, and if it didn't, it's not complicated to add a new module. Thing is, the module does not support DMA. So, for the specific use case I gave, it's not a good fit.
I'd take vibe-coded IoT code any day over the typical hot mess of poorly written code by non-experts following online tutorials and the casual Stack Overflow copy-paste :)
>You have no idea what codebases I've seen and worked in, so don't assume I have not.
Why not? You've been quite comfortable assuming things so far, without actually contributing anything of substance to the conversation. Your opinions may even be well-formed, but if they are, your communication skills clearly aren't.
So, how has your experience been using LLMs as a maker (the actual topic) or in the context of IoT development (the topic I was replying to)? Mine has been quite positive, ranging from ensuring specific blocks of assembly code are deterministic (instead of having to check dozens of pages in a manual and count instructions at every adjustment), to building code, test fixtures, and build infrastructure, to generating documentation, to actually hunting and fixing security and logic issues in older codebases.
When people read "vibecode" they assume a clueless intern operating Cursor without any idea of what he's doing (in part because of the overhyped mishaps of LLM-generated code), as opposed to the old fox with decades of experience who knows every detail by heart. Thing is, the clueless intern will produce much better code with LLMs than without (and fewer defects, too), and the old fox will produce much more, because they will delegate some tasks to coding agents instead of less-senior teammates, and have results in hours, not weeks.
You're not paying for the used server; you're paying for the used server and the replacement; that's how you can grow a business without assuming huge capex investments upfront.
I think it's still common for colo spaces to offer installation services for shipped-in hardware. You buy a server, configure it, and ship it to the DC.
This happens to me all the time. I always ask claude to re-check the generated docs and test each example/snippet, sometimes more than once; more often than not, there are issues.