Hacker News | new | past | comments | ask | show | jobs | submit | gombosg's comments

I started my career as a machine designer (mechanical engineering), designing some machines for FMCG factories.

It wasn't that different from SWE - mostly looking up catalogs, connecting pre-made pieces with custom parts, and lots of testing of the final plan to make sure there were no collisions and every movement was properly constrained.

95% of the time no load or sizing calculations were necessary - we just oversized everything based on tacit knowledge (the greybeards reviewing the plans). These machines were not mass-produced, so choosing somewhat bigger parts was cheap relative to a machine that would operate and produce value 24/7 for years.

(I hope the analogy to software engineering is visible!)

What I'm saying is that the level of "engineering rigor" heavily depends on the field where engineers are operating within. Even certain SWE fields (healthcare, finance, aviation etc.) have more regulation and require more rigor than others.


Right at the top: "That distinction matters more than people think." That's basically telltale AI :)

Also the entire framing around "judgment" and "taste" is what LLMs love to parrot about the topic.

There are fair arguments in the post, but I totally agree that "writing is thinking", and I also hold myself to "if you didn't bother to write it, why would I bother to read it?"


One of the many things that has been strange to me is how often people will label written thoughts as AI slop when the "signs" are just normal phrases. Sure, that's a tired expression, and I 100% agree we should be critical of writing that leans on pointless, trite expressions. But people wrote that way for years before LLMs.

I find it very interesting that widespread discourse around the quality of prose and rhetoric has only emerged now that LLMs have become ubiquitous.


I'm sorry for your experience, but loved the painting at the end... :)


The completely unrelated painting ;)


I think you're right, ephemeral code would be the concept that you have (I'm hand-waving) "the spec", that specifies what the code should be doing and the AI could regenerate the code any time based on it.

I'm also baffled by this concept and fundamentally believe that code _should be_ the ground truth (the spec), hence it should be human readable. That's what "clean code" would be about, choosing tools and abstractions so that code is consumable for humans and easy to reason about, debug and extend.

If we let go of that and rely on LLMs entirely... I'm not sure where that would land, since computers ultimately execute the code, not the plain-language "specs" - and the company is liable for the results of that code being executed.


Exactly, basically then every desk or office job means sitting next to a box?


I still don't get it.

If AI really improves efficiency and allows the company's employees to produce more, better products faster and thus increase the competitiveness of a company... then why does said company fire (half of!) its staff instead of, well, producing more, better products faster, thus increasing its competitiveness?

Am I naive, or is AI a lie when cited as the cause?

Why is it that we employees are gaslit with the FOMO of "if you don't adopt AI to produce more, you'll be replaced by employees who do", while these executives don't feel that "if you fire half of your employees for whatever reason, you'll be outcompeted by companies who... simply didn't"?


If you have good ideas that have a nice return on investment and leverage existing skills, sure. If you don't have good opportunities lying around, it's best for the business to switch to maintenance mode, which means cutting staff. Or maybe cut staff, then use equity to buy growth via acquisition. It really depends on the business. Block's growth has slowed, so perhaps this would have happened anyway and AI is just what's getting the blame.


Let's say AI increases productivity per capita by 50%

That means 50% of current headcount now has the same productivity as 100%

Now we calculate:

A = OPEX cost cut by firing 50% of personnel

B = Profit increase by the AI 50% productivity increase while not firing anyone

if A>B, reduce headcount

if B>A, reduce headcount and then increase workload on remaining employees until profits increase
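The comparison above can be sketched in a few lines of Python. All the numbers here are made up for illustration (headcount, salaries, revenue per unit of output), and it assumes revenue scales linearly with productivity - a toy model, not a real financial analysis:

```python
# Toy sketch of the A-vs-B calculus, with hypothetical numbers.
headcount = 100
cost_per_employee = 100_000            # annual OPEX per person (assumed)
revenue_per_unit_productivity = 150_000  # revenue per person-unit of output (assumed)
productivity_boost = 1.5               # AI makes each person 50% more productive

# Option A: fire 50% of personnel, keep output flat, pocket the savings
a_savings = (headcount // 2) * cost_per_employee

# Option B: keep everyone and sell the extra output
extra_output = headcount * (productivity_boost - 1)
b_extra_profit = int(extra_output * revenue_per_unit_productivity)

print(f"A (cost cut):     ${a_savings:,}")
print(f"B (extra profit): ${b_extra_profit:,}")
```

With these particular numbers B comes out ahead of A - though per the comment's cynical logic, headcount gets reduced either way.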


> instead of, well, producing more, better products faster, thus increasing its competitiveness?

Probably because this is not Block's business strategy. If they could do this, then they would...


It may be just bullshit, it probably is. But it also can be that the market demand for software is not as infinite as we thought it is in SaaS era.


Love your approach and that you actually have "before vs. after" numbers to back it up!

I personally also use AI in a similar way, strongly guiding it instead of vibe-coding. It reduces frustration because it surely "types" faster and better than me, including figuring out some syntax nuances.

But often I jump in and do some parts by myself. Either "starting" something (creating a directory, file, method etc.) to let the LLM fill in the "boring" parts, or "finishing" something by me filling in the "important" parts (like business logic etc.).

I think it's way easier to retain authorship and codebase understanding this way, and it's more fun as well (for me).

But in the industry right now there is a heavy push for "vibe coding".


That makes a lot of sense. Staying hands on is key.


I think there are four fundamental issues here for us...

1. There are actually fewer software jobs out there, with huge layoffs still going on, so software engineering as a profession doesn't seem to profit from AI.

2. The remaining engineers are expected by their employers to ship more. Even if they can manage that using AI, there will be higher pressure and higher stress on them, which makes their work less fulfilling, more prone to burnout etc.

3. Tied to the previous - this increases workism, measuring people (engineers) by some output benchmark alone, treating them more like factory workers instead of expert, free-thinking individuals (often with higher education degrees). Which again degrades this profession as a whole.

4. Measuring developer productivity hadn't really been cracked before either, and even after AI, there is not a lot of real data proving that these tools actually make us more productive, whatever that may mean. There is only anecdotal evidence: I did this in X time, when it would otherwise have taken me Y time - but at the same time it's well known that estimating software delivery timelines is next to impossible, meaning the estimate of "Y" is probably flawed.

So a lot of things going on apart from "the world will surely need more software".


I don't see how anything you're saying is a response to what I said.


I love using LLMs as rubber ducks, too - what does this piece of code do? How would you do X with Y? etc.

The problem is that this spec-driven philosophy (or hype, or mirage...) would lead to code being entirely deprecated, at least according to its proponents. They say that using LLMs as advisors is already outdated, we should be doing fully agentic coding and just nudge the LLM etc. since we're losing out on 'productivity'.


>They say that using LLMs as advisors is already outdated, we should be doing fully agentic coding and just nudge the LLM etc. since we're losing out on 'productivity'.

As long as "they" are people that either profit from FOMO or bad developers that still don't produce better software than before, I'm ok ignoring the noise.


I can totally relate to your experience.

I started this career because I liked writing code. I no longer write a lot of code as a lead, but I use writing code to learn, to gain a deeper understanding of the problem domain etc. I'm not the type who wants to write specs for every method and service but rather explore and discover and draft and refactor by... well, coding. I'm amazed at creating and reading beautiful, stylish, working code that tells a story.

If that's taken away, I'm not sure how I could retain my interest in this profession. Maybe I'll need to find something else, but after almost a decade this will be a hard shift.

