> I think the big AI players really need a coherent plan for this if they don't want a lot of mainstream and eventually legislative pushback.
That's by far not the worst that could happen. There could very well be an axe attached to the pendulum when it swings back.
> Not to mention it's bad business if nobody can afford to use AI because they're unemployed.
In that sense this is the opposite of the Ford story: the value of your contribution to the process will approach zero so that you won't be able to afford the product of your work.
We were going to have to reckon with these problems eventually as science and technology inevitably progressed. The problem is that the world is plunged into chaos at the moment, and being faced with a technology that has the potential to completely and rapidly transform society really isn't helping.
Hatred of the technology itself is misplaced, and debating these topics is sometimes difficult because anti-AI folk conflate many issues at once and expect you to have answers for all of them, as if everyone working in the field shares the same agenda. We can defend and highlight the positives of the technology without condoning the negatives.
I think hatred is the wrong word. Concern is probably a better one, and there are many technologies it is perfectly OK to be concerned about. If you're not somewhat concerned about AI, you probably haven't yet thought about the possible futures that could stem from this particular invention, and not all of those are good. See also: the atomic bomb, the machine gun, and the invention of gunpowder, each of which I'm sure has some kind of contrived positive angle, but whose net contribution to the world we live in was not necessarily positive. And I can see quite a few ways in which AI could very well be worse than all of those combined (as well as some ways in which it could be better, but for that to be the case humanity would first have to grow up a lot).
I'm extremely concerned about the implications. We are going to have to restructure a lot of things about society and the software we use.
And like anything else, it will be a tool in the elite's toolbox of oppression. But it will also be a tool in the hands of the people — unless anti-AI sentiment gets compromised and redirected into support for restricting access to capable generative models to the State and research facilities.
The hate I am referring to is often more ideological, condemning the use of these models from a purity standpoint: that only bad engineers use them, that their utility is completely overblown, and so on.
It's just bad timing, but the ball is already rolling downhill, the cat's already out of the bag, etc. Best we can do at the moment is fight for open research and access.