Hacker News | funnyfoobar's comments

I have been using AI workflows at work to increase productivity. I have shared these workflows internally and at a couple of tech meetups I went to, and the response was positive.

Some of these are present here: https://github.com/vamsipavanmahesh/claude-skills/

Planning to package this as a workshop so companies can benefit from an AI-native SDLC.

Put together the site yesterday https://getainative.com

A couple of people I have worked with in the past agreed to meet me for coffee; I'll pitch this to them. Fingers crossed.


Original author here: these skills have been adding a lot of value in very little time. After debugging a gnarly bug for 4 hours, you might be too exhausted to explain it well; this helps with that.


I replied to a comment adjacent to yours, under my parent comment. Please take a look; I think if you follow the "process", it is avoidable.


What you are saying may have made sense at the start of 2025, when people were still using GitHub Copilot tab autocompletes (at least I was) and just toying with tools like Cursor, unsure of them.

Things have changed drastically now; engineers with these tools (like Claude Code) have become unstoppable.

At least for me, I have been able to contribute to codebases I was unfamiliar with, even in different tech stacks. No, I am not talking about generating AI slop; these tools have enabled me to write principal-engineer-level code in a way I couldn't before.

So I don't agree with the above statement; it's actually generating real value, and I have become more valuable because of the tools available to me.


> Things have changed drastically now, engineers with these tools(like claude code) have become unstoppable

I’ve spent the last week unwinding slop from a coworker who said the same thing.


To be honest, what you have described only shows a lack of process.

I am not talking about vibe coding here, which is a totally different and understandable case. I am talking about a professional context where structure is already in place.

Here is my workflow, if it helps:

1. I have a system-level prompt (a command/skill) that I execute before I start anything, which says things like:

    i) follow SOLID principles
    ii) follow TDD
    iii) follow these testing principles: e.g. don't test implementation details
    and 20 other things
2. I give it the context of what I need to accomplish, toggle plan mode, ask it to follow the rules I set at the system level, and have it generate a document for me to review step by step

3. I review everything, add feedback to the generated plan, prompt again, and finalize the plan

4. Now that the plan is finalized, I have a short alignment with co-workers and then ask it to implement step by step; before proceeding to the next step, I have it ask me for feedback

Each step's code is reviewed and committed before moving to the next step

5. Once all steps are done, I do manual testing and a local review

6. I have a specific skill that reviews everything and gives me feedback

7. Human reviewers review the code, along with CodeRabbit

8. Then we take it through staging and UAT before shipping to prod

So, as you can see, we have not skipped any part of the "software engineering" process; Claude Code is just an accelerant.

But if you are not doing most of the above steps, I can see how what you describe would happen by default.
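For anyone curious what such a command/skill looks like on disk, here is a minimal sketch. It assumes Claude Code's `.claude/skills/<name>/SKILL.md` layout; the skill name and wording here are illustrative, not my actual skill file:

```shell
# Create a minimal "engineering-standards" skill for Claude Code.
# The path convention (.claude/skills/<name>/SKILL.md) follows Claude Code's
# skill layout; the rules below are an abbreviated, illustrative subset.
mkdir -p .claude/skills/engineering-standards
cat > .claude/skills/engineering-standards/SKILL.md <<'EOF'
---
name: engineering-standards
description: Baseline engineering rules to apply before any coding task
---

- Follow SOLID principles.
- Follow TDD: write a failing test before implementing.
- Do not test implementation details; test observable behavior.
- Before coding, produce a step-by-step plan for review.
EOF
```

(For simpler rule sets, project-level instructions in a plain CLAUDE.md work too.)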


I was not expecting just a couple of new apps being built, given that the premise of the blog post is replacing "mid-level engineers".

The thing about being an engineer in a commercial capacity is maintaining and enhancing an existing software system that has been developed over years by multiple people (including those who have already left), and doing it in a way that does not cause outages, introduce bugs, or break existing functionality.

The blog post mentions the ability to use AI to generate new applications, but it does not talk about maintaining one over a longer period of time. For that, you would need real users, real constraints, and real feature requests, preferably from people who pay you so you can prioritize them.

I would love to see blog posts where, for example, a PM is able to add features for a period of one month without breaking production, but it would be a very costly experiment.


Yeah, you are right; I am underselling myself by watering it down to LOC. But I was mostly talking about tangible outcomes that are obvious.

> AI can write 15k lines of code, but it cannot take Liability for a single one.

Thanks for writing this, I needed it.


Glad it resonated.

The addiction to "visible output" (like LOC) is hard to break because it feels like work. But in the AI era, "Judgment" is the new labor.

Think of it like a traditional Japanese Hanko (seal). The value isn't in the paper or the ink (which are cheap/commodities), but in the authority of the stamp that guarantees the content.

Your "tangible result" is no longer the code itself, but the trust that comes from your seal of approval. Keep guarding it.


Yes, there will always be someone needed to program stuff. Totally agree with that.

But my question is "how many of those will be needed", because I am not saying that programmers are not needed.

When fewer people are needed, there will be far more competition for those jobs, which essentially also means many won't be able to find work, as there will always be someone willing to do the job at a lower wage and with more youthful energy.

Just speaking out loud.


Look up induced demand. As it gets easier, more software gets created, not less.


I've had a long career, and seen a number of systemic changes.

I've lived through two software "explosions" where minimal skills led to large output. The first was web sites and the second was mobile.

Web sites are (even now) pretty easy. In the late '90s and early 2000s, though, there was tremendous demand for web site creation. (Every business everywhere suddenly needed a web presence.) This led to a massive surge in build-a-web-site training. No time for a 3-year degree; barely time for 90 days of "click here, drag that".

So there was this huge percentage of "programmers" that had a very shallow skill set. When the bubble burst it was this group that bore the brunt.

Fast forward to 2007, and mobile apps become a "thing". The same pattern evolved: fast training, shallow understanding, apps that do very little (most of the heavy lifting, if it exists at all, is on the backend), and not a lot of time spent on UI or app flow.

This time around the work is also likely to be done offshore. Turns out simple skills can be taught anywhere, tiny programs can be built anywhere.

Worse, management typically didn't understand the importance of foundations like good database design, coherent code, forward thinking, maintenance, etc. Programs are 10% creation, 90% maintenance (adding stuff, fixing stuff, etc.). From a management point of view (and indeed from those swathes of shallow practitioners) the only goal is "it works."

AI is this new (but really old) idea that shallowness is sufficient. And just like before it first replaces people who themselves have only shallow skills; who see "coding" as the goal of their job.

We are far from the end of this cycle, and who knows where it will go, but yes, those with shallow skills are likely to be first on the chopping block.

Those with better foundations (a better understanding of good and bad, perhaps with a deeper education, or deeper experience) and the ability to communicate that value to management are positioned well.

In other words, yes the demand for "lite" developers will implode. But at the same time demand for quality devs, who can tell good from bad (design, code, ui etc) goes up.

If you are a young graduate, you're going to be light on experience. If you're an older person with very shallow (or no) training, you're easily replaced. If you think development is code, you're not gonna do well.

In truth development is not about code (and never has been). It's about all the processes that lead up to the code. Where possible (even at college level) try and focus on upskilling on "big picture" - understanding the needs of a business, the needs of the customer, the architecture and design that results in "good" or "bad".

AI is a tool. It's important to understand when it's doing good, but also when it's doing bad.


> AI is this new (but really old) idea that shallowness is sufficient.

That’s not the whole story and certainly not the core concern, which is more about developers who already have deep experience, using AI to multiply their output.


Spot on. History doesn't repeat, but it rhymes.

You've seen the "Dot-com" and "Mobile" cycles. This "AI cycle" feels faster, but the trap is the same: Mistaking Access for Mastery.

In Japanese martial arts, we have "Shuhari" (Obey, Digress, Separate). AI gives everyone a shortcut to the final stage ("Look, I made an app!"), skipping the painful "Obey" stage where you learn why things break.

As you said, when the bubble bursts, only those who understand the "Foundation" (database design, consistency) will remain standing. The tools change, but the physics of complexity do not.


I agree with the takes, but my only question would be:

If everyone is doing high-level stuff like architecture and design, how many of "those people" will really be needed in the long term? My intuition tells me the market's need for engineers will shrink.


Of course it will shrink! Every industry ever has shrunk as tooling got better.

That said, we are a long way from "peak software". There is a lot of scope for new things, so there's room for a lot of high-level people.

And of course the vast majority of current juniors won't step up at all. Just like the web site devs of the early '00s went off to be estate agents or car salesmen or whatever. Those with shallow training are easily replaced.

The wheel will turn though, and those with a quality, deep, education focused on fundamentals (not job-training-in-xxx-language) are best placed to rise up.


Could you elaborate more? What would those foundations and fundamentals be?


In a nutshell, a lot more understanding of how computers work, and how that affects software design.

From theory like Order(n), 3rd Normal Form, P versus NP, Recursion, Logic (including bit logic) etc, to practical things like exploration of language (why languages are different, why that doesn't matter), how Operating Systems actually work (and what they do), how Networks work (their strengths and weaknesses and thus impact on software design) and so on.
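To make one of those theory items concrete, a toy sketch (purely illustrative, in Python) of why complexity matters in practice:

```python
# Two ways to sum the integers 1..n, touching two of the theory topics
# above: algorithmic complexity and (for the second) a bit of math theory.

def sum_iterative(n: int) -> int:
    """O(n): one addition per element, so the work grows with n."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_closed_form(n: int) -> int:
    """O(1): Gauss's formula gives the same answer with constant work."""
    return n * (n + 1) // 2

# Both agree, but one scales and the other doesn't.
assert sum_iterative(100) == sum_closed_form(100) == 5050
```

Knowing the theory is what tells you the second version exists at all; "it works" looks identical from the outside until n gets large.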

Obviously I can't list a 4 year syllabus[1] here, and it would be different for each college. IME colleges don't teach programming past the first couple weeks, although it is the basis for assignments and evaluation for the next 4 years. (In the way that grade school doesn't teach writing after year 1, but you write a lot in the next 10 years.)

[1] All of this can be self taught. There's plenty of text books and materials online. But basically self-taught people learn programming, not theory, and lack the "path" of a formal syllabus.

Each school will of course have a different syllabus, and some will offer selective modules as well focusing on specific areas like graphics, compilers, databases etc.


Thank you for taking the time. This is a fairly standard CS degree syllabus; while the quality and rigour of CS schools differ, I think any decent CS grad should know these.


That is the billion-dollar question.

My take: The market for "Coders" will shrink, but the market for "Problem Solvers who use Logic" will explode.

Think of "Scribes" (people who wrote letters for others) in the past. When literacy became universal, the job of "Scribe" vanished. But the amount of writing in the world increased a billionfold.

Engineering is becoming the new literacy. We won't be "Engineers" anymore; we will just be "People who build things." The title disappears, but the capability becomes universal.


The process you have described for Codex is scary to me personally.

In my world (finance), it takes only one extra line of code to cause catastrophic consequences.

Even though I use tools like Claude and Cursor, I make sure to review every small bit they generate: I ask the tool to create a plan with steps, then perform each step and ask me for feedback; only when I give approval does it proceed to the next step (or iterate on the previous one). On top of that, I manually test everything I send as a PR.

Because there is no value in just sending a PR versus sending a verified, tested PR.

With that said, I am not sure how much of your code is getting checked in without supervision, as it's very difficult for people to review weeks' worth of work at a time.

just my 2 cents


Heya, I’m the author of the post! To be clear I have AI write probably 95% of my code these days, but I review every line of code that AI writes to make sure it meets my high standards. The same rules I’ve always had still apply — to quote @simonw “your job is to deliver code you have proven to work”.

So while I’m enthusiastic about AI writing my code in the literal sense, it’s still my code to understand and maintain. If I can’t do that then I work with AI to understand what was written — and if I still can’t, then I’ll often give it another go with another approach altogether so I can generate something I can understand. (Most of the time working together to understand the code works better, because I love to learn and am always open to pushing my boundaries to grow — and this process can be tuned well to self-directed learning.)

And to quote a recent audit: “this is probably one of the cleanest codebases I’ve ever audited.” I say that to emphasize the fact that I care a lot about the code that goes into my codebase, and I’m not interested in building layers of unchecked AI slop for code that goes into my apps.


Thanks for the clarification.

Personally, it would be too difficult for me to understand large chunks of work at a time, like "a week's worth of code" in your case. Just wondering: how do you go about it?

Second, how do you pass such large PRs to your co-workers (if you have any)?


So I will state upfront that my current experience is not the most common team dynamic because I'm an indie developer [^1]. But I've worked at many companies — as small as 2 and as large as Twitter — so I am very familiar with the variety of engineering processes.

I can share how I work with agentic systems, because I (and now others) have found it to be very effective. I still have the engineering-like experience of thinking deeply — I've gotten great results across codebases small and large — and almost everyone who I've run a workshop with has come back to me and said that this was a missing piece for them when they work with agentic systems.

I'm the kind of person I alluded to at the end of my blog post when I wrote "Some people couldn’t start coding until they had a checklist of everything they needed to do to solve a problem.", so this description will be representative of that.

1. I start a document in Craft [^2] whenever I think of a great feature, and keep adding to that doc over the next few months whenever I have a new idea. I try to turn that document into something cohesive — imagine something like a PRD without the formality.

2. Then when it comes time to build the feature, I will just sit and write out a prompt (with lots of pointers to source code and relevant screenshots) that considers everything that needs to be built. I'll write out our goals for the feature, how the client should work, how the server should behave, the expected user experience, and anything else that's relevant. That process is really clarifying because it unearths a whole bunch of meaningful context — and context is exactly what a large language model needs!

3. Last but not least I'll simply add something like "Please ask any clarifying questions you may have, or for any additional details that you may find helpful". That leads to questions which I spend anywhere from another 5 to 30 minutes on, which fills in the gaps that I hadn't even considered to consider. And sure that may take time, but now the model has *so many useful details* that most people never add to their context window.

4. Once you have that, the model can act much more surgically than the experience most people have with agentic systems. Since it's so surgical I can go do something else like work on my newsletter, my AI workshops, or even go for a walk. This is why I much prefer to work this way, as opposed to the hands-on process I described Claude Code users [often] preferring in the blog post. (Which as I mentioned there is perfectly fine, just not my cup of tea anymore.)

---

I'd still like to touch on working with people though. I do quite a bit of open source work and there I still follow what people would consider standard processes and best practices. If I'm doing a week's worth of work I still don't want to dump a whole ton of code in one commit, so I'll break everything down into very atomic commits that spell out exactly what I'm doing. I also write lots of documentation, update references, and add tests like a person should.

But there's also nothing to say you have to generate a week's worth of code in one go. It's important to remember that you're in control of how you work. It may be more fitting to define smaller tasks (which will take less time for each independent step) and work on them serially, which you can then hand off to your coworkers one by one.

Ultimately my message is that people still need to exercise their best judgment and think for themselves. AI doesn't change what we've come to accept as best practices, it automates and accelerates them. In fact, the models keep getting better the more they are trained on our best practices, so my assertion is that success using AI seems to correlate well with autonomy, creativity, and critical thinking skills.

Anyhow, long answer for a short question — but I hope it helps! And if there's anything unclear: please ask any clarifying questions you may have, or for any additional details that you may find helpful.

[^1]: https://plinky.app
[^2]: https://craft.do


I don't want to come across as judgemental or anything, nor did I do deep research on your background.

It appears you became a CTO because you co-founded the company, not because you rose through the ranks.

If you were to join a different company with the approach you are taking, I doubt you would even reach Staff level.


From your point of view, what's the approach taken by someone who rose to the rank? Is it mostly people and process management and less to do with tech?

