A writer will never forget what adjectives, verbs, and nouns are. But if they use LLMs to write for them for years, they will get worse at writing on their own.
Well, what I'm trying to say here is that coding is conveying logic: the way you'd evaluate it is how fit it is for its purpose and, if it's long-term code, how well it will scale into the future.
Now writing is something totally different. In some cases writing ability is not about writing; it's about your thoughts and your understanding of life and human nature.
You could become a better writer without writing anything at all, just by observing.
If you are using an LLM to write, what is the purpose of that? Are you writing news articles, or are you writing a story reflecting your observations of human nature with novel insights? In the latter case you couldn't utilize AI in the first place, as you'd have to convey what you are trying to say in your own words; AI would just "average out" your prompt or meaning, which takes away from the whole point.
With code, predictability is desirable; good writing is supposed to be unexpectedly insightful. They're completely different.
I would disagree. If you only do X, I think you will in fact miss a lot of things that could make you better. You can become a better writer by reading other great writing; if you only write yourself, you will not have the full picture of what is possible. You can also get better by thinking a lot, imagining a lot, etc. The same goes for most fields, I would argue.
Although we were discussing the decay of skill. While in some things the decay is very clear (as in running: pace, not technique), I think there are many areas where there's no clear decay, where other activities will actually significantly boost the skill, and where any decay that does occur will be undone by just a few days of practice or remembering.
Are we talking about observational ability, creativity, accuracy of communication or grammar here?
There are many more ways to evaluate a writer's skill, in terms of what they are doing, than there are for coding. Coding can be creative, but in most cases you are not evaluating code the way you evaluate writing, unless perhaps it's technical writing, which is still quite different from coding.
I always compare AI programming to Google. If that's the case, then without internet, without Google, without Stack Overflow, my abilities would be worse than they were in 2000.
If my internet died in 2020 I would also be useless, because I probably couldn't install or download all the libs, frameworks, etc.
But if I didn't need those things, and there was a simple pseudolang syntax which acted exactly the same in all versions and didn't have any breaking changes, I would argue I'd be much better at it now.
The internet, search, etc. are needed to understand how to set up libs/frameworks/APIs, but logic itself isn't something I could possibly forget. AI will help get those setups done quicker without me having to search, but arguably it's all throwaway information that will go out of date and that I don't really need to know. I don't need to know off the top of my head what the perfect modern tsconfig should look like, or what the best monorepo framework is and how to set it up so that it scalably supports different languages for different purposes.
The creative bit is figuring out which two or more pieces might work together for something new. The labour part is combining them, especially when it is actually laborious.
Which gets to the other possibility: having a list of distinct things and then iterating over all pairs or combinations. That I probably would not qualify as "creative" work.
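To make the "mechanical" half of that concrete, here's a minimal sketch (the item names are made up for illustration) of enumerating all pairs from a list of distinct things with itertools, which is exactly the kind of exhaustive work that isn't creative:

```python
# Exhaustively pairing up distinct items -- the labour part, not the creative part.
from itertools import combinations

ideas = ["caching", "retries", "batching", "compression"]

# All unordered pairs of distinct ideas: C(4, 2) = 6 of them.
pairs = list(combinations(ideas, 2))
print(len(pairs))  # 6
print(pairs[0])    # ('caching', 'retries')
```

The creative step would be spotting *which* of those six pairs is worth pursuing; generating the list itself is pure iteration.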
>What's the worst potential outcome, assuming that all models get better, more efficient and more abundant
Complexity steadily rises, unencumbered by the natural limit of human understanding, until technological collapse, either by slow decay or major systems going down with increasing frequency.
Why would the systems go down if the models are better than humans at finding bugs? Playing a bit of devil's advocate here, but why would the models be worse at handling the complexity if you assume they will keep getting better and better?
Adding complexity to software has never been easier than it is right now, we really have no idea if the models will progress to the point where they can actually write large systems in a maintainable way. Taking the gamble that the models of the future will dig us out of the gigantic hole we are currently digging is bold.
It’s always been thus at lower layers of abstraction. Only a minority of programmers would understand how to write an operating system. Only a tiny number of people would know how a modern CPU logically works, and fewer still could explain the electrical physics.
> Only a minority of programmers would understand how to write an operating system. Only a tiny number of people would know how a modern CPU logically works, and fewer still could explain the electrical physics.
I'd say this is true for programmers at, say, 20, but they spend the next four decades slowly improving their understanding and mastery of all the things you name, at least the good ones.
The real question is whether that growth trajectory will change for the worse or the better.
To be clear, this is not an AI doomerist comment, because none of us have spent enough time with the tech yet. I've gone down multiple lanes of thought on this, and I see cause for both worry and optimism. I'm curious to see what the lives of engineers in an AI world will ultimately look like.
Raylib is for hobbyists that want control over everything, but don't want to go through the hassle of dealing with DirectX/OpenGL. It isn't competing with Unreal/Unity at all.
>Right now, after pirating it, I have to find the author's patreon / something and contribute some money that way. It shouldn't be this hard to give someone money
Why not just buy the thing you are pirating? That would seem to be the easiest way to give someone money.
The thinking is that the sold product is inferior to the pirated version, so rather than reward the people making it worse (Amazon, mostly), you try to reward the person who made the thing you wanted in the first place.
To quantify that: if the author has self-published on Amazon, 35%-70% goes to the author (70% above a certain price threshold, and assuming the e-book is exclusive to KDP). If published via a publisher, the author is more likely getting 10%-15%.
To be fair though, a lot of publishers also do a one-time payment deal with authors, after a certain milestone in the number of books sold.
More common is an advance payment, but note that advances are almost always offset against royalties, not paid in addition to them, and you need to be fairly successful before you'll ever earn out a full advance.
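To illustrate how earning out an advance works (the numbers here are invented for the example, not real contract terms):

```python
# Invented illustrative numbers -- not real contract terms.
advance = 5000.00        # paid up front, offset against future royalties
royalty_per_copy = 1.50  # e.g. 10% of a $15 list price

# Copies that must sell before the author sees a single royalty cheque.
copies_to_earn_out = advance / royalty_per_copy  # ~3333 copies

def royalties_paid(copies_sold: int) -> float:
    """Royalties actually paid out beyond the advance."""
    earned = copies_sold * royalty_per_copy
    return max(0.0, earned - advance)

print(royalties_paid(2000))  # 0.0   -- still inside the advance
print(royalties_paid(5000))  # 2500.0 -- earned out, surplus paid
```

Under these assumptions a book that sells 2,000 copies never pays the author anything beyond the advance, which is why most authors never see royalty income at all.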
For 99%+ of authors, writing is a hobby that pays less than minimum wage. For self published authors it's often a net loss after costs like editors and cover design.