Hacker News | tarsinge's comments

I don't understand why software engineers insist on keeping the craftsmanship aspect of writing code. Compare with other engineering disciplines, like civil engineering. Engineering was never about going into the field yourself to build things with your own hands. You can become a great civil engineer without having built failing bridges yourself. To me it doesn't matter whether the thing I design is built with a crane or with AI. I can design quality control processes too, to ensure the thing is built up to standard; I don't have to build it myself to be sure. There is nothing wrong with artisanal code crafting, and I appreciate it too, but professionally that's not engineering. It seems AI is just forcing us to clear up the confusion the hard way.

There's a false equivalence between software engineering and civil engineering here, in my opinion. I would argue that the craftsmanship SWEs see in their work stems from a necessity to be novel in order to truly make something worth putting out into the market. "Oh, you're making an app that tracks heart rate/makes music/provides driving directions? Why wouldn't the user just use <insert 'X' market-leading app>?" There's no real merit to making clones, whereas in civil engineering (I would argue) this is the bread and butter. You can't copy and paste a bridge. There's a physicality to it that says "okay, make another bridge similar to this but now for that gap", so the challenge becomes making the necessary repetition more efficient, and it's "fine" if no one is going out of their way to be an "artisanal civil engineer".

Combine this argument with the fact that LLMs are reliant on what information they've ingested; they'll only give you responses based on what already exists. The creativity needed to make something (worth making) is missing there. You'd hope that the humans using the AI fill that role, but comments like this one and others lavishing praise on AI and vibe coding give me serious doubt. We could argue instead that SWE is a misnomer for this field, but that's a separate conversation.

In my opinion, SWE should prioritize true innovation, which AI isn't designed for (IMO, AI is better suited to fast info lookup than to key production tasks). Without creativity in SWE, the industry bloats into an unsustainable mass of cloned/useless apps and startups just hoping to be eaten (bought) by a bigger fish, with investors/customers repeatedly being promised "something better is right around the corner!"... and it never comes, and the whole thing collapses on itself.


> I would argue that the craftsmanship SWEs see in their work stems from a necessity to be novel in order to truly make something worth putting out into the market. ... There's no real merit to making clones, whereas in civil engineering (I would argue) this is the bread and butter. You can't copy and paste a bridge. There's a physicality to it that says "okay, make another bridge similar to this but now for that gap", so the challenge becomes making the necessary repetition more efficient, and it's "fine" if no one is going out of their way to be an "artisanal civil engineer".

This is a key insight that invalidates a lot of the manufacturing thought that infects software development. Manufacturing (in large part) is about making copies, better and cheaper. But with software, you can create perfect copies for free. A "software factory" makes no sense, there's a fundamental paradigm mismatch.


Yep, most comparisons are DOA.

If two objects don't share the same fundamental characteristics, they cannot be said to be comparable, and where such fundamental differences exist you have to control for them.


I guess we are simply not doing the same job. I'm customer facing, and the bread and butter of my job is to create software for customers' and clients' needs and constraints using proven tools and solutions, exactly as in civil engineering. Sometimes there are projects that require innovation, and yes, some people can specialize in innovative projects, but that's not the majority of projects. In fact, when I studied at university, the specialization between software, electrical, civil, etc. happened late in the curriculum.

Software is a blueprint. Can you really become a decent civil engineer if you never created a blueprint for anything?

Will large-scale construction projects ever be started with AI-made blueprints?

There's probably more to the whole engineering discipline, soft- and hardware, than you give it credit for here.


The comparison isn't valid because nearly all software "engineers" don't do anything resembling engineering; they are mere coders. A better analogy would be to assembly line workers: coders glue together packages and libraries and frameworks pre-made for them by other, actual software engineers. The craftsmanship is one of the few things coders could bring to the table.

Because many aren't software engineers; they are bricklayers.

To be comparable, they would have to go through the same university degree and professional certification, instead of doing a JavaScript training course and calling themselves software engineers instead of coders.

They are getting the blueprints from architects and senior devs, and putting those bricks into place, and carrying buckets.


Thanks, I think that explains part of the misunderstanding: I'm from a country where "engineer" is a regulated title; you can't call yourself a software engineer if you didn't go through a certification comparable to a civil engineer's.

In the FAANG land of "move fast and break things" any title goes, and commercial success is used to justify that it works.

A colleague once told me the difference between software engineering and civil engineering: they build the same bridge repeatedly, while we never build the same thing twice.

Because software engineering learning is 99% BOTTOM-UP...

and that's because SE education FAILED BADLY... almost nothing of what's useful is taught in schools, and nothing of what's taught is useful.

Instead of FIXING education and theory, software engineering marched on forcefully without it.

Now we need to go back and properly fix education, because an intern should absolutely be required to have the "advanced" skills that we imagine, in our deluded minds, only "10+ years of industry experience" can confer, and that are absolutely required to be even a junior AI-augmented SE.

SE/CS education should be rethought from scratch to distill, purify, and teach in 3 years max the concepts that used to be acquired through 10-30 years of experience. It 100% CAN be done, and we should wake tf up and DO IT instead of complaining about it. "Advanced enterprise systems" architecture requires nothing more than mid-high-school math and can be taught on simulated systems in semester 1 of year 1; it's just that some of the "teachers" would have to actually put in the 80-hour weeks of work to do it in due time.


Yes, let's build a bridge with AI

I think the analogy (and it is not to be taken literally) is that of "commoditized processes".

Nowadays we don't build bridges to suit the site, we choose sites to accommodate bridges that we basically build identically via a few designs.

Connecting back to software: AI can do the standard stuff OK as long as you test around the outside of it, so you might want to hone your judgment about how you build systems so that they use the stuff AI does well, versus "building for the site". The gains are productivity. The losses are efficiency (the problem must go through some extra steps to meet the process where it works). Same as any engineering problem at scale.


I'm not in the field, but I think it's because historically neural nets were looked down on and deemed unpromising because they lacked understanding, compared to symbolic AI or SVMs for example. Since the deep learning revolution, which is engineering-driven, the trend has inverted: research into understanding, and theory, are seen as the things that hindered progress with neural nets in the past.

Part of the issue with neural nets is that historically they were next to impossible to train. Adam, BatchNorm/LayerNorm, initialization schemes, and GPUs for raw speed really helped to change all of that.

But that's not how "mysterious" is used here. These scientists did not meet their end during an obvious outdoor activity.

One of them disappeared while hiking with friends. Another two were last seen walking away from home.

Roughly half the people you'd see walking are "walking away from home". It's not a known risk factor. In fact, unless they live near "nature", being seen walking anywhere at all near their home is pretty reasonable evidence that their disappearance, whatever the cause, is less likely to be "got lost hiking" or similar.

Okay but one of them literally got lost hiking. Two, if you count the cancer researcher that a lot of people online seem to be bundling in for some reason.

To me it was already quite intuitive: we are not really managing a psychological state. At its core, an LLM tries to make the concatenation of your input and its generated output as similar as it can to what it was trained on, and I think it's quite rare in an LLM's training set to find examples of well-thought-out professional solutions in a hackish, urgent context.


No, that's how base model pretraining works. Claude's behavior is more based on its constitution and RLVR feedback, because that's the most recent thing that happened to it.


How can you still not distinguish between using LLMs as tools and a non-technical person vibe coding? I have yet to run into any serious software engineer who had to dive into a legacy codebase or an unknown tech stack and found no value in e.g. Claude Code for general understanding and refactoring. Not even talking about coding: just the capacity to generate custom, contextualised documentation and examples tailored to your constraints and skills on the fly is ridiculously helpful.


The tool being useful sometimes does not support the statement that we are "past the point" of not using LLMs.


> We are SO past the point of software being developed without LLMs at _all_

That's exactly why I've given up on programming, development, and career subreddits. There are a lot of interesting software engineering challenges opening up, but instead of discussing them like professionals, it all gets drowned in a big negative mixture of rants against the financial AI bubble, companies using AI as an excuse for layoffs, and a general antiwork vibe. All these subreddits have become feel-good/feel-bad echo chambers for angry teens and students with no real-world professional experience.


So you really don't understand why people with real professional experience might be anxious now, and why there is an antiwork vibe? It's not just junior devs.


I understand why they might be anxious, but my point is that it's unrelated to the technology itself. Imagine people in 2000 denying that the internet works because of the dot-com bubble. Same with layoffs: they are not really due to AI (https://en.wikipedia.org/wiki/2026_United_States_corporate_m...); it's a political discussion. And the antiwork vibe is not new. I have strong political convictions about how we should more equally redistribute capital gains, so that even if AI were able to replace software engineers it would not be an issue, but again, that is political.

LLM tooling brings a lot to senior devs. I have 15 YOE and own a small agency; we are shipping faster, with fewer bugs, and believe it or not we are hiring, because we are able to take on more work and grow, as is logical absent the political issues plaguing the US in particular. The market is already adjusting, which is why, to me, we are way past the point of developing professionally without LLMs.

So no, I don't get why the political topics aren't discussed elsewhere, nor the irrational denial of the technology because of said political issues.


I'm not sure, because in many modern open-world games you are just like an Uber driver following GPS from checkpoint to checkpoint. It would work with old-school games that relied on memorizing the world and had minimal or even no map indications.


I feel like a Dark Souls game has a similar learning pattern, where you need to memorize the move sets of bosses. It taps into the same dynamic pattern recognition that traffic would demand of taxi drivers.

DayZ is another one, because there is no in-game GPS. You have to use maps and compasses to figure out your route, and many people can spot an exact area on the massive map from a picture of a bush.


The reasoning is that by being polite, the LLM is more likely to stay on a professional path: at its core, an LLM tries to make your prompt coherent with its training set, and a polite prompt plus its answer will score higher (give a better result) than a prompt that is out of place next to the answer. I understand that to some people it could feel like anthropomorphising and could turn them off, but to me it's purely about engineering.

Edit: wording
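A toy sketch of the "scores higher" idea: a bigram model (a crude stand-in for a real LLM, which rests on the same likelihood principle) trained on a tiny invented "professional" corpus assigns a higher average log-probability to text that resembles its training data. The corpus and phrases here are made up purely for illustration.

```python
from collections import defaultdict
import math

# Tiny invented "professional" training corpus (illustrative only).
corpus = (
    "could you please review this code . thank you for the detailed review . "
    "please see the attached patch . thank you , happy to help ."
).split()

# Count bigram transitions: how often each word follows another.
counts = defaultdict(lambda: defaultdict(int))
for prev, word in zip(corpus, corpus[1:]):
    counts[prev][word] += 1

def log_likelihood(text):
    """Average per-bigram log-probability under the toy model,
    with add-one smoothing over the corpus vocabulary."""
    vocab = set(corpus)
    words = text.split()
    total = 0.0
    for prev, word in zip(words, words[1:]):
        c = counts[prev]
        total += math.log((c[word] + 1) / (sum(c.values()) + len(vocab)))
    return total / max(len(words) - 1, 1)

polite = "could you please review this code"
terse = "review code now"
# The polite phrasing matches the training data more closely,
# so the model scores it higher.
print(log_likelihood(polite) > log_likelihood(terse))  # prints True
```

The same logic scales up: tokens that fit the register of the training distribution raise the joint likelihood of prompt + continuation, which is the (hypothesized) mechanism behind "polite prompt, professional answer".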


> The reasoning is by being polite the LLM is more likely to stay on a professional path

So no evidence.


> If the result of your prompt + its answer is more likely to score higher, i.e. gives a better result than a prompt that feels out of place with the answer

Sure seems like this could be the case with the structure of the prompt, but what about capitalizing the first letter of a sentence, or adding commas, tag questions, etc.? They seem like semantics that won't play any role in the end.


Writing is what gives my thinking structure. Sloppy writing feels to me like sloppy thinking. My fingers capitalize the first letter of sentences, proper nouns, and adjectives, and add punctuation without me consciously asking them to do so.


Why wouldn't capitalization, commas, etc do well?

These are text completion engines.

Punctuation and capitalization are found in polite discussion and textbooks, so you'd expect those tokens to ever so slightly push the model in that direction.

Lack of capitalization pushes towards text messages and IRC, perhaps.

We cannot reason about these things in the same way we can reason about using search engines, these things are truly ridiculous black boxes.


> Lack of capitalization pushes towards text messages and IRC, perhaps.

Might very well be the case. I wonder if there's any actual research on this by people who have access to the internals of these black boxes.


That's orthography, not semantics, but it's still part of the professional style steering the model on the "professional path" as GP put it.


For me it is just a good habit that I want to keep.


I remember studies showing that being mean to the LLM got better answers, but on the other hand I also remember a study showing that maximizing bug-related parameters ended up producing meaner/more malignant LLMs.


Surely this could depend on the model, and I'm only hypothesizing here, but being mean (or just having a dry tone) might amount to an implicit "cut the glazing" instruction to the model, which would help, I guess.


Uber and Airbnb have network effects. You can't increase prices when there is no cost to switching.


I don't see how network effects apply to Uber/Airbnb, because nothing stops drivers/hosts from listing their property on multiple such apps.


People continue using Airbnb because that's where the properties are listed. And owners keep listing properties because that's where the users are.


My point was that nothing stops hosts from listing their properties on Airbnb as well as on a competitor. Unless Airbnb penalizes delisting or enforces price parity, I guess?


Do you understand network effects? They're not handcuffs. I can also sell my rare baseball cards outside of eBay. But...


When the incumbents shoot themselves in the foot. Google and Microsoft are consultancy-driven bureaucracies with an abysmal product culture. At best Google will be the one providing the backend, but it's very unlikely to me that they'll win the end-user product space.

