Hacker News | past | comments | ask | show | jobs | submit | 0xpgm's comments

> Either way, that won't change the ongoing layoffs while trying to pursue the AI dream from management point of view.

I think most companies doing layoffs are bloated to begin with; AI is just the scapegoat for the layoffs.


I am aware of layoffs that are really caused by AI.

Translation and asset generation teams for enterprise CMSes, whose roles have now been taken over by AI.

Likewise traditional backend development, which was already reduced by SaaS products, serverless, iPaaS, and low-code/no-code tooling, and is now being reduced further by agent workflow tooling that does orchestration via tools (serverless endpoints).


In short, it is simply a click-bait title.

And the goal of the article is to draw attention to their project.


> And the goal of the article is to draw attention to their project.

Additionally, they couldn't even be bothered to write their own blog post, so it's a little hard to take them seriously when they say they're going to write their own code...


It's the same thing every time.

> Claude (c) by Anthropic (R) is the best thing since sliced bread and I'm Lovin' It(tm)! Here's a breakdown of how you too can live a code-free life for 10 easy payments of $99.99 a month if you subscribe now!

> Step one in your journey to a code-free life: code the whole damn project and put it together yourself

It's so much fluff and baloney, and every single article is identical. And every single one is just over-the-top praise of Claude that doesn't come off as remotely authentic. There's always a mention of Claude "one-shotting"(tm) something.


Right now the majority of beginners start programming with a high-level language, say Python or JavaScript - then for more advanced system-level tasks pick up C/C++/Rust/Zig etc.

If Mojo succeeds, it could be the one language spanning across those levels, while simplifying heterogeneous hardware programming.


But do companies really know how to use AI? I think most of it is experimentation - throwing things to the wall and seeing what sticks.

It's the practitioner who eventually figures out what really works. I see this the same way the agile movement emerged: it was initiated by people who were hands-on programmers, and it showed enough benefit at minimizing software waste before it took on a life of its own and started getting peddled by people who didn't really understand the underlying principles.


> I think most of it is experimentation - throwing things to the wall and seeing what sticks.

This is true in macro, but I think we're specifically referring to LLM-generated/assisted code (vibe coding). 'Getting something out the door' is not necessarily a reference to an AI-infused product, just new code written by AI.


However, curious programmers who develop in high level languages will dabble with assembly maybe for fun, and will be much better off for it than those who treat parts of the stack like a black box never to be opened.

Perhaps a viable approach would be to vibe-code the translation tool itself and check that for every input it gives the expected output. Then, once the translation is done, the translation tool can be discarded.

This would require a robust test suite though.

One of the cases where vibe coding might actually be useful: writing a throwaway tool.
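To sketch what that verification loop might look like: everything here is hypothetical - the `translate` function stands in for whatever the vibe-coded tool does, and the point is only that you refuse to trust its output until every golden case from the real data passes.

```python
# Hypothetical throwaway tool: a vibe-coded converter from a legacy
# "key = value" config format to "key: value". The tool itself is
# disposable; the golden test suite is what makes it trustworthy.

def translate(legacy_line: str) -> str:
    """Convert a 'key = value' legacy line to 'key: value' (assumed format)."""
    key, _, value = legacy_line.partition("=")
    return f"{key.strip()}: {value.strip()}"

# Golden test suite: known input/output pairs collected from the real data.
GOLDEN_CASES = [
    ("timeout = 30", "timeout: 30"),
    ("host=localhost", "host: localhost"),
    ("  retries =  5 ", "retries: 5"),
]

def verify(cases) -> bool:
    """Only trust the tool's output if every golden case passes."""
    return all(translate(src) == expected for src, expected in cases)
```

Once `verify` passes over a suite that covers the real inputs, you run the tool over everything, keep the output, and delete the tool.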


I see this dilemma with LLMs all of the time.

Should you use the LLM to do the thing directly, or use the LLM to implement a tool that does the thing?

I tend to reach for the latter; it's easier to reason about.
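A toy sketch of the two patterns, with a stubbed `llm()` call standing in for any real chat-completion API (the function names and the slug task are illustrative, not from any actual codebase):

```python
# Pattern A vs Pattern B for a repetitive task (making URL slugs).

def llm(prompt: str) -> str:
    # Stub: a real call would be non-deterministic and cost tokens per item.
    return "..."

# Pattern A: the LLM does the thing directly, once per input.
# Non-deterministic, hard to review, and cost scales with the data.
def slugify_via_llm(title: str) -> str:
    return llm(f"Turn this title into a URL slug: {title}")

# Pattern B: the LLM writes a tool once; you review it, then run it forever.
# Deterministic, reviewable, free to re-run.
def slugify_tool(title: str) -> str:
    return "-".join(title.lower().split())
```

Pattern B's behavior can be pinned with tests and survives model upgrades, which is a large part of why it's easier to reason about.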


Plus, if the LLM goes down (or gets "upgraded" to a model that does the translation differently/wrong), you still have the tool available locally.

> Engineers who refuse to, or can't, or won't utilize the benefits that LLMs bring will be left behind. It's just the way it is. I'm already seeing it happening.

Any examples how you see some engineers being left behind?


> Any examples how you see some engineers being left behind?

I don't know where you live, but around where I live in Denmark you'd fail a senior interview in a lot of places for not using AI. Even places which aren't exactly AI fans use AI to some extent.

The biggest challenge we face right now is figuring out how to create developers who have enough experience to use the AI tools critically. Especially because you're typically given agents for various tasks, which are already configured to know how we want things written.


Over here at your southern neighbour's, everyone is supposed to be doing AI and is being evaluated on it, yet in many projects, if clients don't sign off on the use of AI tools, there is no AI to use anyway.

Additionally, there are the AI targets set by C-suites based on what everyone is saying on TV, versus what we can actually deliver given the available data sets, integration points, and naturally those sign-offs for data governance and hallucination guardrails.


I work for a fortune 50 that is heavily tech based.

If you can’t interview without immediately reaching for an LLM you are considered unfit to work here.


Around here C levels have AI adoption goals and are actively pushing it throughout organisations. Even when it doesn't exactly make sense.

> Everyone is jumping off the cliff

> If you don't jump off the cliff you're falling behind


I was just giving them an anecdotal example of what they were asking for. I think the answer is somewhere in the middle, but I'm not in a position to push any form of change on the C levels.

I've noticed that back in Europe everyone's in panic mode, but that's because of the inferiority complex most people have vs both the US and China. It's unwarranted.

Probably in cognitive surrender. I have one such colleague and he is driving me crazy. "Claude said that ..."

I'm starting to notice how those who don't use AI end up having to hand tasks over to people who can get them done quicker.

It is anecdotal for sure, but it's a pattern that seems to be emerging around me: expectations of velocity increase, and those who don't use AI can't keep up.


Why is velocity the overriding goal?

Shit processes. I don't know what places most of those people work at where crap is being merged into production at an insane pace. You would expect any serious piece of software to be important enough to have the code reviewed by at least one human.

Kind of... I don't know. To have such requirements placed on you from the top down and not fight back, to just take it head on, not even maliciously, not even oppose it on a technical basis, just go "yeah, you've now gotta ship faster or you're left behind, so therefore LLMs must be the future!", with no critical thought attached. Is this shit coming from experienced engineers?

It's preposterous that we're relying on "it's better because I feel like it", "dudes who don't use it are falling behind at work", and "they ask for it in job interviews".


I wonder, how should an AI company be held accountable for the non-deterministic nature of AI, which is a fundamental property of said AI?

People have been drinking so much hopium that they have lost touch with reality.

Everyone needs to properly understand these tools before they use them for anything serious.


At the very least, when an agent can delete a production database you should get an obvious warning whenever you enable it. Marketing wouldn't like it though.
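A minimal sketch of what such a guardrail could look like - the tool names and the enable/call API here are made up for illustration, not any real agent framework:

```python
# Destructive tools must be explicitly enabled, and enabling one
# surfaces a loud warning instead of burying it in marketing copy.

DESTRUCTIVE_TOOLS = {"drop_database", "delete_bucket"}

def enable_tool(name: str, enabled: set, warnings: list) -> None:
    """Enable a tool for the agent, recording a warning if it's destructive."""
    if name in DESTRUCTIVE_TOOLS:
        warnings.append(
            f"WARNING: '{name}' can irreversibly destroy production data."
        )
    enabled.add(name)

def call_tool(name: str, enabled: set) -> str:
    """The agent can only invoke tools that were explicitly enabled."""
    if name not in enabled:
        raise PermissionError(f"tool '{name}' is not enabled")
    return f"ran {name}"
```

The point is that the dangerous capability is opt-in and the warning is generated at enable time, before the agent ever runs.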


> This kind of forgetting is normal

Just as shifts in power and the rise and fall of nations are normal.


Yes. Again, this will eventually happen to everyone, in some way. Of course nations always want to prevent this; it's part of the job of the government. But there's always a long tail of very low-probability, very destructive threats. You can't possibly safeguard against all of them. In fact, trying to do so is a sure way to trigger the fall of your nation (or at least your government), by draining your economy dry through paranoia.

The rational thing is to address a threat proportionally to its expected damage and probability of occurrence. When war is unlikely, you scale down your defense production; when it becomes more likely, you ramp it up - paying the cold-start cost is still much cheaper than paying for ongoing readiness. If scaling down your defense makes it more likely for you to be attacked - well, it's the job of your intelligence and defense departments to track that. Nobody said it's a static system - it's a highly dynamic one, and that's what makes geopolitics hard.


For that matter, a lot of human civilization has been about identifying things that were normal and making them rare. "Normal" infant mortality of 40%, famines, floods, history being lost, etc.

Anyway, when it comes to "this is normal" I think we should take care to distinguish between interpretations of:

1. "This specific case should not have taken certain people by surprise."

2. "This is a manifestation of a broader phenomenon."

3. "This is natural and therefore cannot or should not be solved." [Naturalistic fallacy.]


In the specific case discussed in the article and comments, I'm advocating for another interpretation:

4a. "If a process is unlikely to be needed any time soon, shutting it down and then paying cold-start costs if and when it's needed again, is better than keeping it going and wasting resources better used elsewhere", and

4b. "There's an infinitely long tail of low-probability problems, and you can't possibly afford to maintain advance readiness for any of them".

Also on the overall sentiment:

4c. "Paying a cold-start cost isn't a penalty or sign of bad planning. It's just a cost."


Programmers in non-western countries may not be able to afford $100 per month on vibe coding.

They may keep taking the longer, harder route of mixing AI and hand coding.


The same applies to the south. It’s shocking to read tales of people spending hundreds of dollars monthly with coding agents, that’s wholly impossible for the vast majority of devs in South America, even 20 dollars is hard to justify for most households. By economic factors alone, I bet there are a lot more people learning the hard skills in places they can’t afford to be dependent on the tools.


They'll find a way. If it's not the Chipotle bot, the enormous volume of low-effort AI implementations will provide a free token layer.

