> It seems a lot of large AI models basically just copy the training data and add slight modifications
Copyright laundering is the fundamental purpose of LLMs, yes. It's why all the big companies are pushing it so much: they can finally freely ignore copyright law by laundering it through an AI.
I've gone back to shopping pretty much exclusively offline. Sifting through the garbage was too much work even before the AI slop flood. I'd rather pay a little extra to a local retailer so I know what I'm actually buying because it's right there on the shelf in front of me.
I would like to give a small defense of Benj Edwards. While his coverage on Ars definitely has a positive spin on AI, his comments on social media are much less fawning. Ars is a tech-forward publication, and it is owned by a major corporation. Major corporations have declared LLMs to be the best thing since breathable air, and anyone who pushes back on this view is explicitly threatened with economic destitution via the euphemism "left behind." There aren't a lot of paying journalism jobs out there, and people gotta eat; hence this author's perhaps more positive spin on AI than is justified.
All that said, this article may get me to cancel the Ars subscription that I started in 2010. I've always thought Ars was one of the better tech news publications out there, often publishing critical & informative pieces. They make mistakes; no one is perfect. But this article goes beyond bad journalism into actively creating new misinformation and publishing it as fact on a major website. This is actively harmful behavior and I will not pay for it.
Taking it down is the absolute bare minimum, but if they want me to continue to support them, they need to publish a full explanation of what happened. Who used the tool to generate the false quotes? Was it Benj, Kyle, or some unnamed editor? Why didn't that person verify the information coming out of the tool that is famous for generating false information? How are they going to verify information coming out of the tool in the future? Which previous articles used the tool, and what is their plan to retroactively verify those articles?
I don't really expect them to have any accountability here. Admitting AI is imperfect would result in being "left behind," after all. So I'll probably be canceling my subscription at my next renewal. But maybe they'll surprise me and own up to their responsibility here.
This is also a perfect demonstration of how these AI tools are not ready for prime time, despite what the boosters say. Think about how hard it is for developers to get good quality code out of these things, and we have objective ways to measure correctness there. Now imagine how low the quality of the journalism we get from these tools will be. In journalism, correctness is much less black-and-white and much harder to verify. LLMs are a wildly inappropriate tool for journalists to be using.
Yeah, “we just made shit up in an article, destroying trust in our publication, but we will get around to investigating when we have a little free time in the next week or so.”
No, you just shipped the equivalent of a data-destroying bug: it’s all-hands-over-the-holiday-weekend time.
Yes, hence “holiday weekend” in my comment. They posted an article that had fabricated quotes. When might it be appropriate to start investigating that problem, and work on ensuring that it doesn’t happen again?
I don't think you get to be this snarky about helping people understand things when your initial contribution was to read "it's all hands over the holiday weekend time" and reply by saying it's a holiday weekend.
They did a stealth edit to not look as foolish. They didn't mention the holiday or weekend in what I replied to originally. That's why I snapped back with snark.
There was no stealth edit, you just didn’t read with comprehension the first time. Though the fact that you think those words weren’t there explains your weird reply to my original comment.
Others have given suggestions, but I'd also like to suggest evaluating what value you are actually getting from these devices. Would your life be made vastly worse if you just didn't have them at all? They may have some real utility for you, in which case don't let me stop you from putting the effort in. But I think it's worth a few minutes to think about whether the value you get from these devices is actually worth the effort for you.
The one thing that keeps me going through the fall of the US is the knowledge that despite all, there are still lots of happy people in Russia and China. People live their lives under those single-party authoritarian regimes, and many of them are happy. Maybe I can be happy here, too.
“He gazed up at the enormous face. Forty years it had taken him to learn what kind of smile was hidden beneath the dark moustache. O cruel, needless misunderstanding! O stubborn, self-willed exile from the loving breast! Two gin-scented tears trickled down the sides of his nose. But it was all right, everything was all right, the struggle was finished. He had won the victory over himself. He loved Big Brother.”
It's a term of art that means to use the tools that are already available on the target machine. So rather than shipping a custom binary/shellcode/etc which exfiltrates data or whatever, you string together existing powershell/unix/etc commands to do so. It's effective because it's hard to distinguish these from legitimate processes.
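As a minimal (and entirely benign) sketch of the idea: rather than installing anything, you chain utilities already present on a stock Unix system (`tar` and `base64` in this hypothetical) to package and encode data the way a purpose-built tool otherwise would. The file path and data here are made up for illustration.

```shell
# Benign sketch of "living off the land": no custom binary is dropped;
# everything below ships with essentially every Unix system.

# Stage some data (a stand-in for whatever would actually be targeted).
printf 'example data' > /tmp/lotl_demo.txt

# Package and encode it using only preinstalled tools. To a process
# monitor, this is indistinguishable from ordinary tar/base64 usage.
tar -cf /tmp/lotl_demo.tar -C /tmp lotl_demo.txt
base64 < /tmp/lotl_demo.txt
# → ZXhhbXBsZSBkYXRh
```

The point is that each step is an unremarkable invocation of a common tool, which is exactly why this pattern is hard for defenders to flag.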
The common thread that resolves this apparent conflict is, of course, billionaires. 100% of Republicans and ~60% of Democrats are in office primarily to serve at the whims of billionaires. They will pursue whatever policies give more power to billionaires; consistency and hypocrisy are irrelevant.
How could you possibly come to this conclusion? Which party literally just voted for tax breaks on the wealthy and corporations, twice in one decade?!
In before "No clearly the party that helps the billionaires the most and is mostly comprised of billionaires and is backed by all the tech billionaires are the good guys, they are the true party of the people"