Hacker News | rednafi's comments

Shhh...you're only supposed to unilaterally praise it to get along with your clueless leadership.

Soon, we'll start seeing Claude certs getting listed on LinkedIn alongside Coursera courses.

People with titles like

Giga Chad, MBA, CSS, CKAD, XXX, PQRS

are gonna love this.

In no time, HRs will start slapping “10 years of certified Claude Code experience required” on job listings.


_Open to Claude_ ;)

It’s crazy how easily you could lie about having 10 years of experience, because your results are not that much different from someone who has only used Claude Code for a week.

I think older AI users are even held back, because they might be doing things that are no longer necessary: explaining basic things like “please don’t bring in random dependencies, prefer the ones that are already there,” or the classic “think really hard and make a plan,” or using a prestigious register of language in an attempt to make it think harder.

Nowadays I just paste a test, build, or linter error message into the chat and the clanker knows immediately what to do, where it originated, and looks into causes. Often times I come back to the chat and see a working explanation together with a fix.

Before, I had to actually explain why I wanted it to change some implementation in some direction, otherwise it would refuse: “no, I won’t do that because abc.” Nowadays I can just pass the raw instruction, “please move this into its own function,” etc., and it follows.

So yeah, a lot of these skills become outdated very quickly. The technology is changing so fast that one needs to constantly revisit whether what one had to do a couple of months earlier is still required, and whether the technology’s limits are still precisely there or have moved further out.


The obvious solution is for Anthropic et al. to certify the skills of each user:

> “Good at explaining requirements, needs handholding to understand complex algorithms, picky with the wording of comments, slightly higher than average number of tokens per feature.”

I’m not saying this would be good at all, but the data (/insights) and the opportunity are clearly there.


At work we’ve had something like 10 hours of “AI training,” i.e. training us to use AI. I obviously learned nothing.

I hope it’s at least a little tricky, since Claude was released only 3 years ago. That said, I would not be surprised to see companies asking for 10 years experience, despite that inconvenient truth.

I’ve seen it play out multiple times, which highlights precisely why a candidate should never withhold an application over a listed years-of-experience requirement. Employers simply haven’t put much thought into those numbers.

If you work on 10 projects in parallel for a year using Claude code… you have the equivalent of 10 years of experience in 1 year.

No, you would have ten projects finished. You would have less than a year of actual programming experience.

That's not how it works...

If you can add more people to finish a project faster, I can add more projects to get experience faster.

I actually prefer removing people

I use the enterprise plan for work and often burn ~$150 worth of tokens per day. I have noticed similar behaviors here.

When you say nearly unlimited tokens, do you mean the $100 or $200 subscription?


$200, over December it was doubled. I tried my best in between family time and friends to burn a hole in it. Never got near doing so.

I have the enterprise plan and get to use it for both work and some personal stuff.

I mainly use it for side projects and doing research for writing stuff on my blog.

I use Opus 4.6 with Claude Code’s 1M context and consistently use up $150–200 worth of tokens per day. I wonder how you manage to do anything with a $10/mo plan.


AI just lowered the cost of replication. Now you can replicate good or bad stuff but that doesn't automatically make AI the enabler of either.

I think a lighter version of literate programming, coupled with languages that have a small API surface but are heavy on convention, is going to thrive in this age of agentic programming.

A lighter API footprint probably also means a higher amount of boilerplate code, but these models love cranking out boilerplate.

I’ve been doing a lot more Go instead of dynamic languages like Python or TypeScript these days. Mostly because if agents are writing the program, they might as well write it in a language that’s fast enough. Fast compilation means agents can quickly iterate on a design, execute it, and loop back.

The Go ecosystem is heavy on style guides, design patterns, and canonical ways of doing things. Mostly because the language doesn’t prevent obvious footguns like nil pointer errors, subtle race conditions in concurrent code, or context cancellation issues. So people rely heavily on patterns, and agents are quite good at picking those up.

My version of literate programming is ensuring that each package has enough top-level docs and that all public APIs have good docstrings. I also point agents to the Google Go style guide [1] each time before they work on my codebase. This yields surprisingly good results most of the time.

[1] https://google.github.io/styleguide/go/


> The Go ecosystem is heavy on style guides, design patterns, and canonical ways of doing things.

Go was designed based on Rob Pike's contempt for his coworkers (https://hackernews.hn/item?id=16143918), so it seems suitable for LLMs.


Then it results in an absurd amount of duplication. I regularly encounter error strings like:

error:something happened:error:something happened


Yes, and that is desired.

Error: failed processing order: account history load failure: getUser error: context deadline exceeded


Your example shows an ideal case without repetition. If every layer just wraps the error without inspecting it, then there will be duplication in the error string.

I have never seen that. I have shipped multiple dozens of services at half a dozen companies. Big code bases. Large teams. Large volumes of calls and data. Complicated distributed systems.

I am unable to imagine a case where an error string repeated itself. On a loop, an error could repeat, but those show as a numerical count value or as separate logs.


This feels like manually written stacktraces

I’d find “Error: failed processing order: context deadline exceeded” just as useful and more concise.

Typically there is only one possible code path if you can identify both ends.


Not in my experience. Usually your call chain has forks. Usually the DoThing function will internally do 3 things and any one of those three things failed and you need a different error message to disambiguate. And four methods call DoThing. The 12 error paths need 12 uniquely rendered error messages. Some people say "that is just stack traces," and they are close. It is a concise stack trace with the exact context that focuses on your code under control.

If you have both the start of the call chain and the end of the call chain mapped you will get a different error response almost every time and it is usually more than enough, so say your chain is:

Do1:...Do10, which then DoX,DoY,DoZ and one of those last 3 failed.

Do you really need Do1 through Do10 to be annotated to know that DoZ failed when called from Do1? I find:

Do1:DoZ failed for reason bar

Just as useful and a lot shorter than: Do1: failed:Do2:failed...Do9 failed:Do10:failed:DoZ failed for reason bar

It is effectively a stack trace stored in strings; why not just embed a proper stack trace in all your errors if that is what you want?

Your concern with having a stack trace of calls seems a hypothetical concern to me but perhaps we just work on different kinds of software. I think though you should allow that for some people annotating each error just isn't that useful, even if it is useful for you.


After a decade of writing Go, I always wrap with the function name and no other content. For instance:

do c: edit b: create a: something happened

For functions called doC, editB, createA.

It’s like a stack trace and super easy to find the codepath something took.
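Roughly like this, assuming the hypothetical function names from the example: each function prefixes its own name when returning an error, so the string reads as the call path:

```go
package main

import (
	"errors"
	"fmt"
)

// Each function wraps with its own name and nothing else.
func createA() error {
	return fmt.Errorf("create a: %w", errors.New("something happened"))
}

func editB() error {
	if err := createA(); err != nil {
		return fmt.Errorf("edit b: %w", err)
	}
	return nil
}

func doC() error {
	if err := editB(); err != nil {
		return fmt.Errorf("do c: %w", err)
	}
	return nil
}

func main() {
	fmt.Println(doC())
	// do c: edit b: create a: something happened
}
```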


I have a single wrap function that does this for all errors. The top level handler only prints the first two, but can print all if needed.

I have never had difficulty quickly finding the error given only the top two stack sites.

Any complaint about Go boilerplate is flawed. The purpose and value is not in reducing the amount of code written; it is in making code easier to read, and Go achieves this goal better than any other language.

This value is compounding with coding agents.


In an HTTP server, top level means the handlers, is that so?

Yes, I guess I do annotation in two places: the initial error deep in the libraries is annotated, and this is passed back up to the handlers, which log, respond, and decide what to show users. Obviously that’s just a rule of thumb and doesn’t always apply.

Depends if it can be handled lower (with a retry or default data for example), if it can be it won’t be passed all the way up.

Generally, though, I haven’t personally found it useful to always annotate at every point in the call chain. So my default is not to annotate: just if err != nil, return err.

What I like about errors instead of exceptions is they are boring and predictable and in the call signature so I wouldn’t want to lose that.


Author here. I absolutely hated writing this piece after shooting myself in the foot a thousand times.

Go's context ergonomics is kinda terrible and currently there's no way around it.


It was a great piece and I learned a lot, thanks for writing it. I hope you didn’t think that it was you I was disappointed with rather than the language designers :)

It’s ironic how context cancellation has the opposite problem as error handling.

With errors, the language forces you to handle every one explicitly, which results in people adding unnecessary contextual information: it can be tempting to keep adding layer upon layer of wrapping, resulting in an unwieldy error string that’s practically a hand-rolled stack trace.

With context cancellation OTOH you have to go out of your way to add contextual info at all, and even then it’s not as simple as just using the new machinery because as your piece demonstrates it doesn’t all work well together so you have to go even further out of your way and roll your own timeout-based cancellation. Absurd.


No worries. Your intent was clear. I wouldn't mind the boilerplate if it were footgun-free. Context requires you to write a bunch of boilerplate where it's still really easy to make mistakes.

Basically one guy having a fit when people disagreed with him.

It would appear that person and the OP are one and the same.

Damn, I missed that. But yeah, the OP didn't take it well when people poked holes in his proposed API.

But regardless of API ergonomics, I would love to have UUID v4 and v7 in the stdlib.


Maybe the OP is trying, but failing, to drum up support for his unergonomic API proposal.
