Hacker News | past | comments | ask | show | jobs | submit | nnennahacks's comments

Yeah, the "$15-20 a PR is cheaper than a great engineer" idea is doing a lot of hand‑waving here...

If you're a big shop pushing, say, 2,000 PRs a week and reviews average $15–25, that's on the order of $30k–$50k a week in AI review spend, or roughly $1.5–2.6M a year. That is quite a line item to justify.
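To make the arithmetic explicit, here's the same back-of-envelope math in a few lines of Python, using only the figures from the paragraph above (2,000 PRs/week, $15–25 per review):

```python
# Back-of-envelope AI code-review spend.
PRS_PER_WEEK = 2_000
COST_PER_REVIEW = (15, 25)  # USD, low and high estimates

weekly = tuple(PRS_PER_WEEK * c for c in COST_PER_REVIEW)
yearly = tuple(w * 52 for w in weekly)

print(weekly)  # (30000, 50000)   -> $30k-$50k a week
print(yearly)  # (1560000, 2600000) -> roughly $1.5M-$2.6M a year
```

The point isn't the precision, it's that a "cheap per unit" price multiplied by enterprise PR volume lands in seven figures.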

"It's $20 cheaper than a senior engineer’s hourly rate,"... so what are you actually doing with your human reviewers once you add this on?

If you keep your existing review culture and just bolt this on, then you've effectively said "we’re willing to add $1–2M+ a year to the budget." That might be fine, but then you should be able to point to fewer incidents, shorter lead times, higher coverage, something like that.

Either this is a replacement story (fewer humans, different risk profile) or it's an augmentation story (same humans, bigger bill, hopefully better outcomes). "It’s cheaper than a great engineer" by itself skips over the fact that, at scale, you’re stacking this cost on top of the engineers you already have in the org.


I’ve been working with Generative AI systems across startups and enterprise teams, and I keep seeing the same architectural gaps: missing patterns around retries, observability, workflows, and safety.

This blog post outlines 8 major areas I think we haven't solved yet: token-level tracing, agentic idempotency, AI state snapshotting, embedding versioning, and more.
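To give one of those gaps some shape, here's a minimal sketch of what I mean by "agentic idempotency": before an agent re-runs a side-effecting step (say, after a retry), check a dedupe store keyed by a stable hash of the step's inputs. All names here are hypothetical, and the in-memory dict stands in for a durable store like Redis or a database:

```python
import hashlib

# Stand-in for a durable dedupe store (Redis, DB table, etc.).
_completed: dict[str, str] = {}

def idempotency_key(step_name: str, payload: str) -> str:
    """Stable key derived from the step's identity and inputs."""
    return hashlib.sha256(f"{step_name}:{payload}".encode()).hexdigest()

def run_step(step_name: str, payload: str, action) -> str:
    """Run a side-effecting agent step at most once per unique input."""
    key = idempotency_key(step_name, payload)
    if key in _completed:
        # Retry hit: return the cached result, skip the side effect.
        return _completed[key]
    result = action(payload)
    _completed[key] = result
    return result
```

With this in place, a retried `run_step("send_email", ...)` returns the cached result instead of sending the email twice. The missing pattern, in my view, is that almost no agent framework gives you this key/store plumbing out of the box.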

Curious if others building in this space are seeing the same gaps or have already found good solutions.

Would love your feedback and discussion.

