Hacker News | past | comments | ask | show | jobs | submit | beachy's comments

Or the young person who needs a job and doesn't yet have OP's fully formed understanding of exactly where the line is - apparently gambling is bad but ad tech is OK.

If they got a job at one of those companies, they could've gotten a job elsewhere. It's a specific choice, and "but I'm only 25, how could I possibly be expected to know right from wrong" isn't really an excuse.

Software and bridges are entirely different.

If I need a bridge, and there's a perfectly beautiful bridge one town over that spans the same distance - that's useless to me. Because I need my own bridge. Bridges are partly a design problem but mainly a build problem.

In software, if I find a library that does exactly what I need, then my task is done. I just use that library. Software is purely a design problem.

With agentic coding, we're about to enter a new phase of plenty. If everyone is now a 10x developer then there's going to be more software written in the next few years than in the last few decades.

That massive flurry of creativity will move the industry even further from the calm, rational, constrained world of engineering disciplines.


> Bridges are partly a design problem but mainly a build problem.

I think this vastly underestimates how much of the build problem is actually a design problem.

If you want to build a bridge, the fact that one already exists nearby covering a similar span is almost meaningless. Engineering is about designing things while using the minimal amount of raw resources possible (because the cost of design is lower than the cost of materials). Which means that bridge in the other town was designed only for its local context. What are the properties of the ground it's built on? What building materials exist locally, where "local" can mean as little as a few miles, because moving vast quantities of material over long distances is really expensive? What specific traffic patterns and loads was it built for? What time and access constraints existed when it was built?

If you just copied the design of a bridge from a different town, even one only a few miles up the road, you would more than likely end up with a design that either won't stand up in your local context, or simply can't be built. Maybe the other town had plenty of space next to the location of the bridge, making it trivial to bring in heavy equipment and use cranes to move huge pre-fabbed blocks of concrete, but your town doesn't. Or maybe the local ground conditions aren't as stable, and the other town's design has the wrong type of foundation, resulting in your new bridge collapsing after a few years.

Engineers in other disciplines don't have the luxury of building for a very uniform, tightly controlled target environment where it's safe to assume that common building blocks will "just work" without issue. As a result, engineering is entirely a design problem, i.e. how do you design something that can actually be built? The building part is easy; there's a reason construction contractors get paid comparatively little next to the engineers and architects who design what they're building.


Software packages are more complicated than you make them out to be. Off the top of my head:

- license restrictions, relicensing

- patches, especially to fix CVEs, that break assumptions you made in your consumption of the package

- supply chain attacks

- sunsetting

There’s no real “set it and forget it” with software reuse. For that matter, there’s no “set it and forget it” in civil engineering either, it also requires monitoring and maintenance.


I have talked to colleagues who wrote software running on microcontrollers a decade ago; that software still runs fine. So yes, there is set-and-forget software, and it is all around us, mostly in microcontrollers. And microcontrollers far outnumber classical computers (trivially: each classical computer or phone contains many microcontrollers, for SSD controllers, power management, wifi, ethernet, cellular... and then you can add appliances, cars etc. to that).

If something in software works and isn't internet connected it really is set and forget. And far too many things are being connected needlessly these days. I don't need or want an online washing machine or car.


True, for software in a cheap coffee maker you can maybe set it and forget it. I have an old TI-85 calculator that's never needed an OS update, while Apple has obsoleted multiple generations of applications in its never-ending upgrade cycle.

But for mission critical applications the bar is a little higher. Isn’t this why we have the ongoing dialogue about OTA updates for Teslas etc and the pros and cons of that approach? Because if you can’t OTA patch a bug, you have to issue a recall [0]. But if you have internet connectivity, as you rightly point out, then you have a whole new attack surface to consider.

I just don’t think it’s all that simple.

[0]: https://www.cbsnews.com/amp/news/ford-recall-lincoln-explore...


Indeed it isn't easy, but for car software, why couldn't you do the software upgrade offline, while at the mechanic, or via a USB drive with a signed installer, or via a phone app plugged into a USB port in the car? For a basic car there really isn't a need to be always online.
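As a toy sketch of that signed-installer idea (hypothetical, in Python; a real deployment would use an asymmetric scheme like Ed25519 with a public key baked into the car, not the shared-secret HMAC stand-in used here for brevity):

```python
import hashlib
import hmac

# Hypothetical stand-in for the vendor's signing key. In a real signature
# scheme the car would hold only a public verification key.
VENDOR_KEY = b"vendor-signing-key"

def sign_image(image: bytes) -> bytes:
    """Run by the vendor when publishing the installer to the USB drive."""
    return hmac.new(VENDOR_KEY, image, hashlib.sha256).digest()

def verify_image(image: bytes, signature: bytes) -> bool:
    """Run by the car before flashing anything found on the USB drive."""
    expected = hmac.new(VENDOR_KEY, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

firmware = b"new head-unit firmware image"
tag = sign_image(firmware)
print(verify_image(firmware, tag))              # True: untampered image flashes
print(verify_image(firmware + b"tamper", tag))  # False: modified image is rejected
```

The point being: none of this needs the car to be online, only that the installer carries a verifiable tag.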

My car just has a bluetooth stereo, and it isn't very old. Yeah it is a basic model, but I really don't need or want connectivity in it. The one argument I could see would be showing maps, but I need offline maps anyways since I often lack any sort of mobile phone connection where I'm going. And you can update maps on a monthly basis (mobile phone app over USB while parked at home would work perfectly for this). Currently I just run OsmAnd on my phone with openstreetmap data downloaded in advance. Realtime traffic information perhaps could be an argument, but again, better to distribute that via FM radio that has better coverage (or even AM radio in some parts of US as I understand it).

And cars might be the odd one out. There really is no excuse for exposing washing machines and other appliances online, especially since they are likely to last far longer than their software will be supported. The fridge and freezer at my parents' are around 20 years old at this point, for example. My washing machine is over 10 and going strong. I doubt they would get software security support for that long.


Ignoring the actual useful reasons to connect something to the internet, the subscription business model is just too damn tempting.

The same scenario played out in Vietnam. The US could never succeed because:

- the enemy was intermingled with the "friendly" civilians, and they couldn't be told apart, leading to everyone being treated brutally and potential friends becoming enemies

- the enemy was prepared to fight to the death, for years if need be, and knew they could outlast US public opinion

- the enemy knew they could prevail because of centuries of history defeating much larger opponents (in Vietnam's case, of them previously defeating France and China).

All of these same conditions would be present in a ground war in Iran, with some religious fanaticism thrown in on top.


Don't forget:

- the enemy had plenty of material, technical and financial support from adversarial superpowers who were all too happy to see American lives, money and military resources wasted.

That external support is not fully scaled up yet (despite clear reports of Russian intelligence support for Iran), but you can bet it would be in the event of a major ground assault, occupation, and/or counter-insurgency quagmire.


> the enemy had plenty of material, technical and financial support from adversarial superpower

The Vietcong weren't exactly fighting with 'plenty of material'. They used weapons from the Second World War, sometimes the First World War, and cheap Chinese crap.

Are you comparing that to American aircraft, bombs, and helicopters? It was as asymmetrical as it would be against Iran.


I think you are referring to:

"Show me your flowchart and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowchart; it'll be obvious." -- Fred Brooks, The Mythical Man Month (1975)


I wonder if we are standing and looking at the smoking battlefield of programming languages created over the last 50 years, gazing at the final survivors, of which Java is definitely one.

Why would anyone create a new language now? The existing ones are "good enough", and without a body of examples for LLMs to train on, a new language has little chance of gaining traction.

I learned IBM /360 assembler when I started in computers a long time ago. I haven't seen a line of assembler in many decades, but I'm sure it's a viable language still if you need it.

Java has won (alongside many other winners, of course); now the AI drawbridge is being raised to stop new entrants, and my pick is that Java will still be here in 50 years' time - it's just that no humans will be writing it.


> Why would anyone create a new language now?

I'm writing my own programming language right now... for an intensely narrow use case: I'm building a testbed for comparing floating-point implementations without messy language semantics getting in the way.

There are lots of reasons to write your own programming language, especially if you don't care about it actually displacing existing languages.
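A minimal sketch of what such a floating-point comparison testbed might measure (in Python; the commenter's actual language and design are unknown, and the ULP-distance metric here is just one common choice):

```python
import math
import struct

def ulp_distance(a: float, b: float) -> int:
    """Number of representable doubles between a and b (0 = bit-identical)."""
    def ordered(f: float) -> int:
        # Reinterpret the IEEE-754 bits as a signed 64-bit integer, then fold
        # the negative range so the mapping is monotonic across zero.
        i = struct.unpack("<q", struct.pack("<d", f))[0]
        return i if i >= 0 else -(2**63) - i
    return abs(ordered(a) - ordered(b))

# Pit two "implementations" of the same operation against each other:
# naive left-to-right summation vs. correctly rounded summation.
xs = [0.1] * 10
naive = sum(xs)        # accumulates rounding error
exact = math.fsum(xs)  # correctly rounded (documented math.fsum behavior)
print(ulp_distance(naive, exact))  # small but nonzero ULP gap
```

With a metric like this you can compare, say, a hand-rolled libm routine against a reference implementation and report worst-case error in ULPs rather than eyeballing decimal digits.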


Most of my 20 years of experience is in Java. I'm now 3 years into a new job mostly using Python to build microservices, and I feel much more productive (plus uv, ruff and mypy for fast, repeatable package management, linting and type checking). I see Python on a trajectory to keep improving and gaining adoption - e.g. it keeps growing in popularity (https://survey.stackoverflow.co/2025/technology/), it will have real threads soon, and better type checking with ty (from Astral, who also make uv and ruff). Python gives an incredibly tight integration loop since you don't wait on compiling. Our Kotlin projects are always at least 2x slower for CI/CD builds. I like TypeScript, but working with it in IntelliJ is incredibly slow.

Java compiles as fast as Go, so I'm not really sure what the problem is - you can basically use it in an "integration loop" just fine.

Python's great until you have to refactor a large code base

> Why would anyone create a new language now? The existing ones are "good enough", and without a body of examples for LLMs to train on, a new language has little chance getting traction.

Compiler writing can be an art form and not all art is for mass consumption.

> Java has won (alongside many other winners of course), now the AI drawbridge is being raised to stop new entrants and my pick is that Java will still be here in 50 years time, it's just no humans will be creating it.

This makes no sense to me. If AI possesses intelligence then it should have no problem learning how to use a new language. If it doesn't possess intelligence, we shouldn't be outsourcing all of our programming to it.


Intelligence is not a well understood concept. "AI" is also not a well understood concept - we have LLMs that can pick up some novel patterns on "first sight", but then that pattern takes up space in the context window and this kind of learning is quite limited.

Training LLMs on the other hand requires a large amount of training data.


> This makes no sense to me. If AI possesses intelligence then it should have no problem learning how to use a new language. If it doesn't possess intelligence, we shouldn't be outsourcing all of our programming to it.

Perfection. You have made such an excellent point. I don't want to detract from it, but in reality this is a completely obvious point; it only reads as particularly insightful because of the AI/LLM brain-rot that has taken over the software programmer community, writ large. It's also just a sad and unimaginative state we are in, to think that no more programming languages will ever be needed beyond what exists in March 2026 because of LLMs.


> this is a completely obvious point but because of this AI/LLM brain-rot that has taken over the software programmer community

This is really insulting. I think you're wrong, AI agent programming is very good now, and you will have to admit it at some point.


Insulting to whom? Are you anthropomorphizing LLMs? Also, do you think it is insulting to say it is a bad idea to completely outsource software development to them? If that's the case, then being insulting sounds like a good thing.

>Insulting to whom? Are you anthropomorphizing LLMs.

Obviously not. Insulting to programmers who use them, hence I object to the term "brain-rot". Clearly the insinuation is we're just suffering from brain rot.


What advantage do old languages have that can’t be overcome or at least reduced to insignificance?

The 50-year head start in training data, runtime, and ecosystem? That may not be much, because LLMs are rapidly accelerating software development. LLMs can also generalize: take what they learned for one language and apply it to a “similar” language (and I think most modern languages are similar enough for broad effective translation: all have records/unions, objects, functions, types, control-flow, exceptions, and more). Some fairly recent languages (e.g. Rust, Swift) already have comparable runtimes and ecosystems to older ones, from human-driven development acceleration and concept generalization.

In a recent study, LLMs solved the most exercises in Elixir (https://autocodebench.github.io Table 4). Anecdotally, I’ve heard others say that LLMs code best in Rust, and (for UI) Swift. More importantly, I don’t see an older language advantage that is widening from LLM use; an older language probably is better for most use cases today, but any written code can be translated or regenerated into a newer one.


> I don’t see an older language advantage that is widening from LLM use

A classic one is C++. Microcontrollers like esp32 cost about $10 these days, for a machine more capable than an early PC.

One downside though is that you typically need C++ to program them, and the barrier to entry with C++ is very high, especially for non-programmers.

LLMs remove that barrier so that anyone can create powerful embedded devices - programmed in C++ - without knowing anything about C++.


Programmers would create a new language when there is a fundamental change in hardware architecture such that the assumptions underlying the old programming languages no longer apply. Java is probably a poor choice for writing software in which most computation happens on GPUs. But I agree that someone (or something) will still be using Java to write new line-of-business applications in 50 years.

Mostly a tongue in cheek reply, but you might be interested in TornadoVM.

(Nonetheless, I agree with you)


> Why would anyone create a new language now?

Same reason you'd ever create a new language — to start anew, breaking free from the shackles of backwards compatibility of the old language, having learned the lessons of the past.

The AI angle makes even less sense — surely we will want to create languages that are tailored for them.


Why would anyone play chess in 2010? The drawbridge is rapidly being raised on human competitiveness.

The vast majority of programming languages ever created never aspired to win and I don't think that's going to change now.


Only if they leave a door open, which they did here.

If your argument is that you can't hope to close every door, then AI will make it easier to close all the doors in the future.


>then AI will make it easier to close all the doors in the future.

AI could also make it easier to open the doors too.


It does when your phone number is used for 2fa in a session running on tcp/ip


This seems self-correcting. Every lawyer, and maybe court, will use AI to review the other party's filings for such things. AI overseeing what is true and what is not - nothing disturbing about that dystopian future.


This sounds innately wrong. Think of celebrity clients traveling but skipping any identity checks because their entourage can vouch for them and doesn't want to hassle them - then who's to say later whether that person did or did not travel to that island, or authorize that money transfer?

Instead, this should be handled not by fudging identity verification but by skipping it and maybe tagging the skip event with some verified identities of the people authorizing the skip.
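As a sketch of that idea (hypothetical names and fields, in Python, not any real provider's API): instead of faking a successful check, the skip itself becomes an auditable event that must name its authorizers.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class VerificationSkip:
    """Audit record for an intentionally skipped identity check."""
    traveler_ref: str        # opaque reference; explicitly NOT a verified identity
    authorized_by: list      # verified identities of those vouching for the skip
    reason: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def record_skip(log: list, traveler_ref: str, authorized_by: list, reason: str):
    # The one invariant: a skip cannot be anonymous.
    if not authorized_by:
        raise ValueError("a skip must name at least one verified authorizer")
    event = VerificationSkip(traveler_ref, authorized_by, reason)
    log.append(event)
    return event
```

The design choice is that the system never records a false "verified" state; it records a true "skipped, on these people's authority" state, which is what you want to be able to reconstruct later.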


> and maybe tagging the skip event with some verified identities of the people authorizing the skip

This. Left unchecked, an entourage around a fake "celebrity" can get pretty far.


Great instincts! It would be less the entourage and more an accredited travel agency with an established reputation. And you're absolutely correct that the skip should be auditable and intentional - having support for this at the provider level makes it more auditable, not less.


They stand out as great examples of commoditising your complement.

When your business is pushing ads to people while they watch cat videos, then video processing software is your complement, and you want it to be as cheap as possible.

[0] https://www.joelonsoftware.com/2002/06/12/strategy-letter-v/

