> author/publisher reputation is a far better signal than looking for AI tells
Hardly seems mutually exclusive. Surely you should generally consider the reputation of someone who posts LLM-responses (without disclosing it) to be pretty low.
A lot of people don’t particularly want to waste time reading LLM responses to someone else’s unknown/unspecified prompts. Someone who would trick you into that doesn’t have much respect for their readers and is unlikely to post anything of value.
I think valuing the source of information over its quality is probably a mistake for most contexts. I’m also very skeptical of people’s ability to detect AI writing in general even though AI slop seems easy enough to identify. (Although lots of human slop looks pretty similar to me.)
Don’t get me wrong. I don’t want to read (for example) AI fiction because I know there’s no actual mind behind it (to the extent that I can ever know this).
But AI is going to get better, and the only approach that’s even going to work going forward is to trust publishers and authors who deliver high value, regardless of how integral LLMs are to their process.
It's simpler than ideology about government vs. private enterprise. These are purely transactional people, looking out for what can benefit themselves. It's just about grabbing things for personal gain.
Systems tend to change over time (and distributed nodes of a system don’t cut over all at once). So what was valid when you serialized it may not be valid when you deserialize it later.
This issue exists with the parsed case, too. If you're using a database to store data, then the lifecycle of that data is in question as soon as it's used outside of a transaction.
We know that external systems provide certain guarantees, and we rely on them and reason about them, but we unfortunately cannot shove all of our reasoning into the type system.
Indeed, under the hood, everything _is_ just a big blob that gets passed around and referenced, and the compiler is also just a system that enforces preconditions about that data.
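To make that concrete, here’s a minimal TypeScript sketch (hypothetical names, not taken from any system discussed above): a record that was perfectly valid under the schema in force when it was serialized fails validation under the schema the reader enforces today. The compiler can only check the guard we write now; it knows nothing about what was true when the blob was written.

```typescript
// Schema v1: email was optional when this record was written.
interface UserV1 {
  id: string;
  name: string;
}

// Schema v2: current code requires an email on every user.
interface UserV2 {
  id: string;
  name: string;
  email: string;
}

// Runtime guard for the *current* schema. This is the only thing the
// type system can vouch for at read time.
function isUserV2(value: unknown): value is UserV2 {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.id === "string" &&
    typeof v.name === "string" &&
    typeof v.email === "string"
  );
}

// A blob serialized months ago, when the v1 shape was perfectly valid.
const oldRecord: UserV1 = { id: "42", name: "Ada" };
const storedBlob = JSON.stringify(oldRecord);

// Deserializing today: "parsed" correctly at write time, stale now.
const parsed: unknown = JSON.parse(storedBlob);
if (isUserV2(parsed)) {
  console.log("valid under the current schema:", parsed.email);
} else {
  // This is the path old data takes once the schema has drifted.
  console.log("stale record: valid when written, invalid when read");
}
```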
Regarding the shift away from time spent on agriculture over the last century or so...
> That was a net benefit to the world, that we all don't have to work to eat.
I’m pretty sure almost all of us are still working to have food to eat and shelter for ourselves and our families.
Also, while the ongoing industrial and technological revolution has certainly brought benefits, it’s an open question whether it will turn out to be a net benefit. There’s a large-scale tragedy-of-the-commons experiment playing out, and it’s hard to say what the result will be.
> The process of writing code helps internalize the context and is easier for my brain to think deeply about it.
True, and you really do need to internalize the context to be a good software developer.
However, just because coding is how you're used to internalizing context doesn't mean it's the only good way to do it.
(I've always had a problem with people jumping into coding when they don't really understand what they are doing. I don't expect LLMs to change that, but the pernicious part of the old way is that the code -- much of it developed in ignorance -- became too entrenched/expensive to change in significant ways. Perhaps that part will change? Hopefully, anyway.)
It doesn't look like you addressed issues raised in the article. E.g., see the "my experiences interviewing candidates" section where we can see this isn't just a problem of the author's (just one example in one section of an article that covers various things).
I always wonder what the purpose is of posting these generic, superficial defenses of a particular style of LLM-based coding.
That was a different matter altogether. I agree though that I didn't touch on that.
My experience is different in that case, but it certainly depends on the type of technical challenge, the programming language, etc.
Candidates who perform better or worse exist with and without agentic coding tools. I've had positive and negative experiences on both fronts, so I'd attribute the OP's experience to the N=1 problem, and perhaps to the model's jagged intelligence.
I work mostly in TypeScript, and it's well known that models are particularly well versed in it. I know that other programming languages are less supported because there is less training data for them, in which case models could be worse with them across the board (or some SOTA models could be better than others).
> Since [a few months ago], things have dramatically changed...
It's not like we haven't heard that one before. Things have changed, but it's been a steady march. The sudden magic shift, at a different point for everyone, is in the individual mind.
Regarding the epiphany... since people have been heavily overusing frameworks -- making their projects more complex, more brittle, more disorganized, more difficult to maintain -- for non-technical reasons, they aren't going to stop just because LLMs make frameworks less necessary. The overuse wasn't necessary in the first place.
Perhaps unnecessary framework usage will drop, though, as the new hype replaces the old hype. But projects won't be better designed, better organized, or better thought-through.
I would think before. This would be one of the vaguely referenced previous places they had found to exploit (in Shroud). I think FenJuan appeared in Shroud as well, with a vague backstory that nevertheless seems consistent with this story.