v3xro's comments

No, you outsource it because it's not your core competency. I think humans should be able to do anything rather than narrowly specialise, as narrow specialisation leads to tunnel vision. Sometimes you need to outsource to someone for legal reasons (and rightly so, mostly because the complexities involved do require a professional in that area). Can some things be simplified? Of course they can, but there are many barriers that prevent such simplification. It's absolutely insane, though, to say: nah, we don't need to think at all, something else can do all the work.

Nobody said "we don't need to think at all" though. The statement was "not having to think", or rephrased: "being able to choose how much to think or what to think about".

> Not being sneered at by some insecure kid is nice.

How very adult of you.


I have found that having a humble, generous, and non-caustic approach to other people has been very good for my personal mental health (and career).

I have spent my entire career being the dumbest guy in the room, and I’m not exactly a dunce. It can sometimes be quite humbling, but I’ve had great opportunities to learn.

People will often go out of their way to help you understand if you treat them with respect, even if they are being jerks.

Life’s too short to spend in constant battle.


The only way I see out of this crisis (yes I'm not on the token-using side of this) is strict liability for companies making software products (just like in the physical world). Then it doesn't matter if the token-generator spits out code or a software engineer spits out code - the company's incentives are aligned such that if something breaks it's on them to fix it and sort out any externalities caused. This will probably mean no vibe-coded side hustles but I personally am OK with that.

I think this is coming, alongside professional licensure for "software engineers". Every public-facing project will need someone to put a literal stamp of approval on the code, and regardless whether Claude or Codex wrote the bulk of it, it'll be that person's head on a pike when something goes wrong.

This isn't what many of us probably would have wanted, but I think the public blowback when "AI-coded" systems start failing is going to drive us there. (Note to passing hype-men: I did not say they will fail at higher rates than human-coded systems! I happen to believe this, but it is not germane to the argument - only the public perception matters here.)


This already exists. They’re called software audits, and the more risk-averse your customers are, the more required they become.

This really resonated with me, thank you for writing it <3

> Companies value velocity and new launches and shipping first at all costs because of course they do; it’s table stakes. Speed of delivery is basically the number one corporate value of every organization whether they admit to it or not.

Yeah, this is again one of the causes of where we are today (alongside profit extraction, or perhaps because of it). It used to be the case that you could find companies that offered quality at a slightly higher price, and people were more than willing to pay for it. Now the feeling is that it's all marketing-driven and there is no 'higher quality', because everyone gave up and went after speed of delivery. And well, as the old saying goes, haste is only good for catching fleas.


There's nothing to recover from; what are you even talking about? I'm not a token user (and I can't predict whether the future will force me to use tokens, but still). That the industry is collectively deluded about what constitutes good software (in all senses of the word: functionality and consequences for society) is clear to see, and I too fear we might never recover from it. But I stand quite clearly on the side of people, not of corporations hoping to extract more, more, more.

Really? I will most likely be using IntelliJ 2027.x with whatever the latest plugins are for the programming languages I write by hand.


Thanks. You made me smile... Yeah, I think this really depends on the company's management decisions. Currently I observe some sort of gold rush: buying AI and expecting more and more. I think that in two years it will be sorted out; people will realise how much we really benefited from using AI, and what died in that time.


> If I can somehow hate a machine that has basically stopped me from having to write boring boilerplate code, of course others are going to hate it!

Poor author, never tried expressive high-level languages with metaprogramming facilities that do not result in boring and repetitive boilerplate.
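As a rough sketch of the kind of metaprogramming facility this comment alludes to (in Python rather than a Lisp, and with a made-up `auto` decorator purely for illustration), a few lines of metaprogramming can generate the `__repr__`/`__eq__` boilerplate that would otherwise be copied into every class by hand:

```python
# Hypothetical sketch: a class decorator that generates __repr__ and
# __eq__ from a list of field names, so classes don't repeat them.
def auto(*fields):
    def wrap(cls):
        def __repr__(self):
            args = ", ".join(f"{f}={getattr(self, f)!r}" for f in fields)
            return f"{cls.__name__}({args})"

        def __eq__(self, other):
            if not isinstance(other, cls):
                return NotImplemented
            return all(getattr(self, f) == getattr(other, f) for f in fields)

        cls.__repr__ = __repr__
        cls.__eq__ = __eq__
        return cls
    return wrap

@auto("x", "y")
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
```

Whether this scales beyond small cases is exactly what the reply below disputes.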


The rule of metaprogramming is that it ends up just as convoluted and full of edge cases as regular code, just without a nice way to debug. The other rule is that it always seems like a fantastic idea at first and looks like it will solve so many issues.

I've been programming since 1994. I've seen a lot. I almost always end up despising any metaprogramming system and wish we'd just kept things simpler even if it meant boilerplate.


Honestly, this. The mainstream coding culture has spent decades shoehorning stateful OOP into distributed and multithreaded contexts. And now we have huge piles of code, getters and setters and callbacks and hooks and annotation processors and factories and dependency injection all pasted on top of the hottest coding paradigm of the 90's. It's too much to manage, and now we feel like we need AI to understand it all for us.

Meanwhile, nobody is claiming vast productivity gains using AI for Haskell or Lisp or Elixir.


I mean, I find that LLMs are quite good with Lisp (Clojure) and I really like the abstraction levels that it provides. Pure functions and immutable data mean great boundary points and strong guarantees to reason about my programs, even if a large chunk of the boring parts are auto-coded.
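As a rough illustration of the boundary guarantees described above (a Python sketch, since the thread's Clojure isn't shown; `Account` and `deposit` are invented names): pure functions over immutable values never mutate their inputs, so every call site can reason about them locally.

```python
from dataclasses import dataclass

# Immutable value type: attempts to mutate a frozen dataclass raise an error.
@dataclass(frozen=True)
class Account:
    owner: str
    balance: int

def deposit(acct: Account, amount: int) -> Account:
    """Pure function: returns a new Account, never mutates the input."""
    return Account(acct.owner, acct.balance + amount)

a = Account("ada", 100)
b = deposit(a, 50)
# `a` is unchanged after the call; that guarantee holds everywhere.
```

This is the "strong guarantee to reason about my programs" point: even if an LLM generates `deposit`, the immutability of `Account` bounds what the generated code can affect.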

I think there's lots of people like me, it's just that doing real dev work is orthogonal (possibly even opposed) to participating in the AI hype cycle.


What hype? I have been anti-BigAI from the very beginning and will continue to be. Until the mechanism is no longer that of a probabilistic model, the data gathering no longer that of massive copyright infringement, and the runtime no longer that of "let us burn more fossil fuels to power as many transistors as we can", I will continue to avoid it without any regrets about missed "productivity" or whatever.


That's not a technical problem though is it? I don't see legal scenarios where unverified machine translation is acceptable - you need to get a certified translator to sign off on any translations and I also don't see how changing that would be a good thing.


I was briefly considering trying to become a professional translator, and I partly didn't pursue it because of the huge use of MT. I predict demand for human translators will continue to fall quickly unless there are some very high-profile incidents related to MT errors (and humans' liability for relying on them?). Correspondingly the supply of human translators may also fall as it appears like a less credible career option.


I think the point here is that, while such a translation wouldn't be admissible in court, many of us already used machine translation to read some legal agreement in a language we don't know.


> many of us already used machine translation to read some legal agreement in a language we don't know.

Have we? Most of us? Really? When?


Most people don't have to deal with documents in foreign languages in the first place.

But for those that do, yes, machine translation use is widespread if only as a first pass.


I know I did for rent contracts and know other people that did the same. And I said many, not most.


Would be nice if every article about LLM/AI had that as a tag so you could skip past them...

