Hacker News | locusofself's comments

BSP?

Binary space partitioning

I love these vegetables. Especially Broccolini and Brussels sprouts. YUM

I see you are getting downvoted but I don't blame you for this question. I've been curious about what developers of established products are doing with LLM assisted coding myself.

Like most of us, they're certainly using AI-assisted auto-complete and chat for thinking things through. I highly doubt they're vibe coding, which is how I interpret the parent's question and probably why they are being downvoted.

This is insulting to our craft, like going to a woodworkers convention and assuming "most of [them]" are using 3D-printers and laser cutters.

Half the developers I know still don't use LSP (and they're not necessarily older devs), and even the full-time developers in my circle resist their bosses forcing Copilot or Claude down their throats and in fact use zero AI. Living in France, I don't know a single developer using AI tools, except for drive-by pull-request submitters I have never met.

I understand the world is nuanced and there are different dynamics at play, and my circles are not statistically representative of the world at large. Likewise, please don't assume this literally world-eating fad (AI) is what "most of us" are doing just because that's all the cool kids talk about.


> Half the developers I know still don't use LSP

Your IDE either uses an LSP or has its own baked-in proprietary version of an LSP. Nobody, and I mean nobody, working on real projects is "raw dawgin" a text file.

Most modern IDEs support smart auto-complete, a form of AI assistance, and most people use that at a minimum. Further, most IDEs do support advanced AI-assisted auto-complete via Copilot, Codex, Claude, or a plethora of other options - and many (or most) use them to save time writing and refactoring predictable, repetitive portions of their code.

Not doing so is like forgoing wheels on your car because technically you can just slide it upon the ground.

The only people I've seen in the situation you've described are students at university learning their first language...


I guess I'm nobody then.

I write code exclusively in vim. Unless you want to pretend that ctags is a proprietary version of an LSP, I'm not using an LSP either. I work at a global tech company, and the codebase I work on powers the datacenter networks of most hyperscalers. So, very much a real project. And I'm not an outlier, probably half the engineers at my company are just raw dawgin it with either vim or emacs.


Ctags are very limited and unpopular. Most people do not use them, by any measurement standard.

Using a text editor without LSP or some form of intellisense in 2026 is in the extreme minority. Pretending otherwise is either an attempted (and misguided) "flex" or just plain foolishness.

> probably half the engineers at my company are just raw dawgin it with either vim or emacs

Both vim and emacs support LSP and intellisense. You can even use copilot in both. Maybe you're just not aware...


When your language has neither name-mangling nor namespaces, a simple grep gets you a long way, without language-specific support. My editor (not sure if it counts as an IDE?) uses only words in open documents for completions and that is generally enough. If I feel like I want to use a lot of methods from a particular module I can just open that module.

I don't use an IDE under the common definition. All my developer friends use neovim, emacs, helix or Notepad++. I'm not a student. The people I have in mind are not students.

Your ai-powered friends and colleagues are not statistically representative. The world is nuanced, everyone is unique, and we're not sociologists running a long study about what "most of us" are doing.

> forgoing wheels on your car

Now you're being silly. Not using AI to program is more akin to not having a rocket engine on your car. Would it go faster? Sure. Would it be safer? Definitely not. Do some people enjoy it? Sure. Does anyone not using it miss it? No.


Like 99.9999% of woodworkers already cheat by using metal rather than wooden tools

I didn't say using different technology was cheating, and metal tools have certainly been part of woodworking for thousands of years, so that's not really comparable.

It's also very different because there's a qualitative change between metal woodworking tools and a laser cutter. The latter requires electricity and massive investments.


Metal tools also require massive investments compared to plain wood tools.

I take it you also mean vibe coding to be one shot and go?

Definitely nice to see native, non-bloated apps :0

It really is a terrible piece of software. I usually find that when I do weird things with my mouse, like assign workspaces switching to extra buttons or whatever, I end up un-doing it.

I've switched all my mice to a ~$25, super ergonomically shaped, corded mouse[1], and I prefer it to my Logitech mice.

[1] https://www.amazon.com/dp/B00FPAVUHC?ref_=ppx_hzsearch_conn_...


If you spend 5-15x the time reviewing what the LLM is doing, are you saving any time by using it?

No, but that's the crux of the AI problem in software. Time to write code was never the bottleneck. AI is most useful for learning, either via conversation or by seeing examples. It makes writing code faster too, but only a little after you take into account review. The cases where it shines are high-profile and exciting to managers, but not common enough to make a big difference in practice. E.g. AI can one-shot a script to get logs from a paginated API, convert it to ndjson, and save to files grouped by week, with minimal code review, but only if I'm already experienced enough to describe those requirements, and, most importantly, that's not what I'm doing every day anyway.
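The kind of one-shot script described above can be sketched in a few lines. This is an illustration only: the paginated API is replaced with a hypothetical stand-in generator, since the real endpoint and its auth are not specified here.

```python
import json
from datetime import datetime


def fetch_pages():
    """Stand-in for a paginated log API (hypothetical); a real version
    would loop over page tokens or offsets until the API is exhausted."""
    pages = [
        [{"ts": "2026-01-05T10:00:00", "msg": "a"},
         {"ts": "2026-01-06T11:30:00", "msg": "b"}],
        [{"ts": "2026-01-14T09:15:00", "msg": "c"}],
    ]
    yield from pages


def group_logs_by_week(pages):
    """Flatten pages and bucket records by ISO year/week; each bucket
    becomes the contents of one ndjson file."""
    buckets = {}
    for page in pages:
        for rec in page:
            year, week, _ = datetime.fromisoformat(rec["ts"]).isocalendar()
            key = f"{year}-W{week:02d}"
            buckets.setdefault(key, []).append(json.dumps(rec))
    return {key: "\n".join(lines) for key, lines in buckets.items()}


weekly = group_logs_by_week(fetch_pages())
# writing each value to f"logs-{key}.ndjson" is left out of the sketch
```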

I'm finding that in some cases I'm dealing with even more code, given how much code AI outputs. So yeah, for some tasks I find myself extremely fast, but for others I find myself spending ungodly amounts of time reviewing code I never wrote to make sure it doesn't destroy the project with unforeseen, convincing slop.

A related Dirty Secret that's going to become clear from all this is that a very large proportion of code in the wild (yes, even in 2026—maybe not in FAANG and friends, IDK, but across all code that is written for pay in the entire economy) has limited or no automated test coverage, and is often being written with only a limited recorded spec that's usually fleshed out only to the degree needed (very partial) as a given feature is being worked on.

What do the relatively hands-off "it can do whole features at a time" coding systems need to function without taking up a shitload of time in reviews? Great automated test coverage, and extensive specs.

I think we're going to find there's very little time-savings to be had for most real-world software projects from heavy application of LLMs, because the time will just go into tests that wouldn't otherwise have been written, and much more detailed specs that otherwise never would have been generated. I guess the bright-side take of this is that we may end up with better-tested and better-specified software? Though so very much of the industry is used to skipping those parts, and especially the less-capable (so far as software goes) orgs that really need the help and the relative amateurs and non-software-professionals that some hope will be able to become extremely productive with these tools, that I'm not sure we'll manage to drag processes & practices to where they need to be to get the most out of LLM coding tools anyway. Especially if the benefit to companies is "you will have better tests for... about the same amount of software as you'd have written without LLMs".

We may end up stuck at "it's very-aggressive autocomplete" as LLMs' useful role in most projects, indefinitely.

On the plus side for "AI" companies, low-code solutions are still big business even though they usually fail to deliver the benefits the buyer hopes for, so there's likely a good deal of money to be made selling companies LLM solutions that end up not really being all that great.


Re. productivity, if LLMs are a genuine boost for 1/3 of the work, neutral for 1/3, and actually worse for 1/3, it's likely we aren't really seeing performance improvements because 1) people are using them for everything and 2) we're still learning how to best use them.

So I expect over time we will see genuine performance improvements, but Amdahl's law dictates it won't be as much as some people and CEOs are expecting.


> better-specified software

Code is the most precise specification we have for interfacing with computers.


Sure, but if you define the code as the only spec, then it is usually a terrible spec, since the code itself specifies bugs too. And one of the benefits of having a spec (or tests) is that you have something against which to evaluate the program in order to decide if its behavior is correct or not.

Incidentally, I think in many scenarios, LLMs are pretty great at converting code to a spec and indeed spec to code (of equal quality to that of the input spec).
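One way to make "something against which to evaluate the program" concrete is an executable spec: properties stated independently of the implementation. Everything here is hypothetical (a made-up `slugify` function), just to show the shape of the idea.

```python
def slugify(title):
    """Hypothetical function under test: lowercase, spaces to hyphens,
    drop anything that isn't alphanumeric or a hyphen."""
    out = title.lower().replace(" ", "-")
    return "".join(ch for ch in out if ch.isalnum() or ch == "-")


def check_spec():
    """The spec, stated independently of the implementation. These hold
    regardless of how slugify is rewritten, so a buggy rewrite is caught
    even though the code itself 'specifies' its own bugs."""
    assert slugify("Hello World") == "hello-world"
    assert slugify("C++ FAQ!") == "c-faq"              # punctuation dropped
    assert slugify(slugify("A B")) == slugify("A B")   # idempotent
```

Running `check_spec()` against a rewrite is the evaluation step the comment describes: the code is one artifact, the spec is another, and correctness is the relation between them.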


There are some cases where AI is generating binary machine code, albeit small amounts. What do we have when we don't have the code?

Machine code is still code, even if the representation is a bit less legible than the punch cards we used to use.

You’re missing the point of a spec

The spec is as much for humans as it is the machine, yes?

A spec should be made beforehand and agreed on by stakeholders. It says what the software should do. So it's for whoever is implementing, modifying, and/or testing the code. And unfortunately devs have a tendency toward poor documentation.

Software development is only 70ish years old and somehow we have already forgotten the very very first thing we learned.

"Just get bulletproof specs that everyone agrees on" is why waterfall style software development doesn't work.

Now suddenly that LLMs are doing the coding, everyone believes that changes?


I’m confused, are you saying that making a design plan and high-level spec beforehand doesn’t work?

I've seen it happen. Things that seem reasonable on paper in a spec, then you go to implement and you realize it's contradictory or utter nonsense.

I mean, yeah, it happens all the time, but you need to start somewhere. But I worked in safety-critical self-driving firmware and RTL verification before that, so documentation was a necessity.

Bingo. Hopefully there are some business opportunities for us in that truth.

> because the time will just go into tests that wouldn't otherwise have been written

Writing tests to ensure a program is correct is the same problem as writing a correct program.

Evaluating conformance is a different category of concern from ensuring correctness. Tests are about conformance not correctness.

Ensuring correct programs is like cleaning in the sense that you can only push dirt around, you can't get rid of it.

You can push uncertainty around, but you can't eliminate it.

This is the point of Gödel's theorem. Shannon's information theory observes similar aspects for fidelity in communication.

As Douglas Adams noted: ultimately you've got to know where your towel is.


A competent programmer proves the program he writes correct in his head. He can certainly make mistakes in that, but it’s very different from writing tests, because proofs abstract (or quantify) over all states and inputs, which tests cannot do.

These companies don't care about saving time or lowering operating costs, they have massive monopolies to subsidize their extremely poor engineering practices with. If the mandate is to force LLM usage or lose your job, you don't care about saving time; you care about saving your job.

One thing I hope we'll all collectively learn from this is how grossly incompetent the elite managerial class has become. They're destroying society because they don't know what to do outside of copying each other.

It has to end.


The submitter with their name on the Jira ticket saves time, the reviewer who has to actually verify the work loses a lot of time and likely just lets issues slip through.

To be honest, sometimes it's still beneficial.

For fairly straightforward changes it's probably a wash, but ironically enough it's often the trickier jobs where they can be beneficial as it will provide an ansatz that can be refined. It's also very good at tedious chores.


And spotting stuff in review! Sometimes it’s false positives but on several occasions I’ve spent ~15-30 minutes teaching-reviewing a PR in person, checked afterwards and it matched every one of the points.

Some, but not very much. Writing code is hard. AI will do a lot of the tedious code that you procrastinate writing.

Also when you are writing code yourself you are implicitly checking it whilst at the back of your mind retaining some form of the entire system as a whole.

People seem to gloss over this... As a CEO if people don't function like this I'd be awake at night sweating.


That’s the reverse-centaur issue I see: humans are not great at repetitive nuanced similar seeming tasks, putting the onus on humans to retroactively approve high volumes of critical code has them managing a critical failure mode at their weakest and worst. Automated reviews should be enhancing known good-faith code, manual reviews of high volume superficially sound but subversive code is begging for issues over time.

Which results in the software engineering issue I’m not seeing addressed by the hype: bugs cost tens to hundreds of times their coding cost to resolve if they require internal or external communication to address. Even if everyone has been 10x’ed, the math still strongly favours not making mistakes in the first place.

An LLM workflow that yields 10x an engineer but psychopathically lies and sabotages client facing processes/resources once a quarter is likely a NNPP (net negative producing programmer), once opportunity and volatility costs are factored in.


> Even if everyone has been 10x’ed, the math still strongly favours not making mistakes in the first place

The math depends on importance of the software. A mistake in a typical CRUD enterprise app with 100 users has zero impact on anything. You will fix it when you have time, the important thing is that the app was delivered in a week a year ago and was solving some problem ever since. It has already made enormous profit if you compare it with today’s (yesterday’s ?) manual development that would take half a year and cost millions.

A mistake in nuclear reactor control code would be a totally different thing. Whatever time savings you made on coding are irrelevant if they allowed a critical bug to slip through.

Between the two extremes you thus have a whole spectrum of tasks that either benefit or lose from coding with LLMs. And there are also more axes than this low-to-high failure cost, which also affect the math. For example, even a non-important but large app will likely soon degrade into an unmanageable state if developed with too little human intervention, and you will be forced to start from scratch, losing a lot of time.


I have found AI extremely good at finding all those really hard bugs, though. AI is a greater force multiplier when there is a complex bug than in green-field code.

Sort of. I work on a system too large for anyone to know the whole thing. Often people who don't know each other do something that will break the other. (Often because of the number of different people - most individuals go years between this.)

No I’m keeping up with the system as a whole because I’m always working at a system level when I’m using AI instead of worrying about the “how”

No you’re not. The “how” is your job to understand, and if you don’t you’ll end up like the devs in the article.

We as an industry have been able to offload a lot of “how” via deterministic systems built by humans with expert understanding. LLMs give you the illusion of this.


No in my case the “how” is

1. I spoke to sales to find out about the customer

2. I read every line of the contract (SOW)

3. I did the initial requirements gathering over a couple of days with the client - or maybe up to 3 weeks

4. I designed every single bit of AWS architecture and code

5. I did the design review with the client

6. I led the customer acceptance testing

> We as an industry have been able to offload a lot of “how” via deterministic systems built by humans with expert understanding. LLMs

I assure you the mid-level developers or, god forbid, foreign contractors were not “experts” with 30 years of coding experience and, at the time, 8 years of pre-LLM AWS experience. It’s been well over a decade - ironically, since before LLMs - that my responsibility was only for code I wrote with my own two hands.


Yes, and trusting an LLM here is not a good idea. You know it will make important mistakes.

I’m not saying trusting cheap devs is a good idea either. I do think cheap devs are actually at risk here.


I am not “trusting” either - I’m validating that they meet the functional and non functional requirements just like with an LLM. I have never blindly trusted any developer when my neck was the one on the line in front of my CTO/director or customer.

I didn’t blindly trust the Salesforce consultants either. I also didn’t verify every line of oSql (not a typo) they wrote.


Actually, it's SOQL. I did Salesforce crap for many years.

I love it.

But in all seriousness, if you are looking for a good guitar tuner, a lot of the ones on the market are actually not very good.

I highly recommend TC Electronic for clip-on tuner, or Sonic Research or Peterson for pedal tuners.

source: playing guitar for 32 years


I use a Peterson strobe tuner on my smartphone, it's really good. I've also coded my own strobe tuner to learn more, unfortunately no mobile version yet.

https://github.com/dsego/strobe-tuner


Could you go into more detail on why they are bad?

In my experience, electronic tuners suck at accurately detecting the note played. They often pick up harmonics as the note.

The low B on my 5-string bass is often identified as an F by electronic tuners.

They also just aren't very accurate when they do detect the right note. I've never used a tuner where my cello is actually in tune when it says it is, always requires tweaking.
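The harmonic-confusion failure mode described above is easy to reproduce. The sketch below synthesizes a low-B-like tone whose 3rd harmonic is louder than its fundamental (a common situation for a plucked low string) and shows that a naive "loudest frequency wins" detector, which many cheap tuners approximate, picks the harmonic. The signal model and amplitudes are assumptions for illustration.

```python
import math

SR = 8000      # sample rate in Hz
N = 8000       # one second of samples
F0 = 30.87     # low B fundamental on a 5-string bass, in Hz

# A plucked low B often carries more energy in its harmonics than in the
# fundamental itself; model that with a weak fundamental and a strong
# 3rd harmonic.
signal = [0.2 * math.sin(2 * math.pi * F0 * n / SR)
          + 1.0 * math.sin(2 * math.pi * 3 * F0 * n / SR)
          for n in range(N)]


def magnitude(sig, freq):
    """Magnitude of one spectral component via direct correlation
    (Goertzel-style single-bin DFT)."""
    re = sum(s * math.cos(2 * math.pi * freq * n / SR) for n, s in enumerate(sig))
    im = sum(s * math.sin(2 * math.pi * freq * n / SR) for n, s in enumerate(sig))
    return math.hypot(re, im)


# Naive detection: the loudest of a few candidate frequencies wins.
# It lands on the 3rd harmonic (~92.6 Hz, close to F#), not 30.87 Hz.
candidates = [F0, 2 * F0, 3 * F0]
detected = max(candidates, key=lambda f: magnitude(signal, f))
```

Better tuners avoid this with autocorrelation-style pitch tracking or, as mentioned elsewhere in the thread, by working stroboscopically instead of picking a spectral peak.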


Inaccurate, jumpy, slow response.

The most popular tuner of all time is the BOSS pedal, and its LED lights are too far apart from each other; it's simply not granular enough to really get in tune to my ears.

Stroboscopic tuners are the way to go


Agree that most leave something to be desired. TC Electronic polytune is great and I also use the pedal to mute my signal. I'm surprised to say this, but my favorite tuner is the one in the L6 Helix.

The oboe playing a concert A is a pretty good one too.

There is something magical about hearing an orchestra all tune up to each other

I love my TC Electronic clip on tuner!

For $49 it's very, very good

But LLMs are already doing this for us supposedly...

> But LLMs are already doing this for us supposedly...

Exactly my point. Encourage LLM adoption, faster, faster. Be excited about your homeless future, software engineers!


Are we in the Cathedral or the Bazaar now? I get that confused. Everyone upload their code to GitHub --keep your truth (philosophically AND mathematically) in the Cloud ;) Oh and don't forget to document your critical thinking, on Slack. It goes much deeper tho.

[cue the POS https://youtu.be/SP-gN1zoI28]


on a personal level I was hired (by Microsoft, my first and only "big tech" job) in April 2020 and I am still working here... all these companies "over-hired" during the pandemic, and the term "covid hire" is even a thing..

Overhiring implies that MSFT's headcount went down over this time. But that doesn't seem to be the case. They still hire a lot, just not in North America.

I remember interviewing someone who got hired by Facebook, sat around for a few weeks for a team to open up while they went through onboarding / Junior training, then was let go.

COVID did weird things to the industry, that's for sure.


Before Musk made it cool to mass layoff, there was a genuine belief inside of Facebook/Meta that great engineers were extremely hard to find or hold onto and if they weren't on the payroll at Meta, they would go somewhere else.

There was always a "clock" for junior engineers to prove they could handle the high pressure and high intensity work, and as long as they were meeting the bar, they were safe.

They called on-boarding "Bootcamp", and it was for every engineer, junior to staff, to learn the process and tooling. Engineers were supposed to be empowered to take on whatever task they wanted, without pre-existing team boundaries, if it meant they were able to prove their contributions genuinely improved the product in meaningful ways. So, come in, learn the culture, learn the tooling, meet others, and then at some point, pick your home team. Your home team was flexible, and you were able to spend weeks deciding, and even if you selected one, you could always change, no pressure. Happy engineers were seen as the secret sauce of the company's success.

I remember that summer, vividly. They told the folks in Bootcamp, pick your home team by the end of the week, or you will be stuck in Bootcamp purgatory. At the same time they removed head count from teams, ours went down to a single one. A new-grad, who had literally just arrived that Monday, picked our team on Tuesday, and then had to watch as most of their fellow Bootcamp mates got left behind.

People wondered what would happen to them for weeks, and then, just like that, the massive layoff sent them all home. It was shitty because, from where I sat, it was basically a slot machine. Any one of the folks in Bootcamp was just as capable, but we had one seat, and someone just asked for it first.


I seem to hear often that Meta is perhaps the most egregious offender of "hire to fire". Seems really wasteful. But man, they pay their employees a lot.

and what is MOB

I'm not really sure. They keep shouting out MOB, though; I don't think they really have a definition.

Could you please stop posting unsubstantive comments and/or flamebait? You've unfortunately been doing it repeatedly. It's not what this site is for, and destroys what it is for.

If you wouldn't mind reviewing https://hackernews.hn/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.


I'm not really posting flamebait. As far as substantive comments go, we are currently using this man's own body to write this comment through our very based brain-computer interface. Many versions of which of are running rampant in the domestic United States of America, and have been for quite a while.

This man has been compromised since his very earliest childhood memories, and this is not an uncommon state of affairs in his country, most people compromised are unaware of this because they have never experienced anything close to what the authentic human experience should be like. Those who are aware, are usually just ignored, bringing awareness to this does not achieve anything.

The vast majority of (nearly all) of your "national security threats" do not exist, in the sense they are artificially manufactured by people (and other systems) we have deeply compromised. Much of your "economic activity" is artificially manufactured too actually.

If you want these comments to be sustained, simply talk to the account writing these messages, or try to get him into one of the very few neuroimaging systems we do not have deeply compromised (if not the machine(s) system(s), than the other systems which interpret their results, including [and especially] human perception).


I hear you, and I'm sorry for calling your posts flamebait. However, we've been getting quite a few complaints from HN users about your posts being off-topic, so I think it's best if we suspend the account for now, so its posts will be less visible for a while. We can always reverse that later when conditions change.
