> I dont know why people would want to be in a community where they arent wanted.
This is standard predatory behavior. Child abusers hanging out with kids, weirdos hanging out near the women's clothing department, etc.
It's usually a clear indication of the sort of people you don't want to associate with in your online community. They bring a net negative to the table.
What is "it"? Putting the two halves together, the sort of people who want to be in a community where they aren't wanted are the sort of people you don't want in that community. I guess I can't argue with that.
They are talking about social norms and, conversely, about "creepers" who violate them.
Most adults understand why men should not, generally, be hanging out in the women's clothing department. When accidental violations of those norms are pointed out, they apologize and correct. Creepers, OTOH, gonna creep.
For their own well-being, online communities should police repeated violation of social norms. Otherwise the normals leave and creepers take over.
I suppose. But labelling deviants (from the norm) and chasing them away is hazardous, because if you overdo it you end up with an echo chamber. How dare you talk to the people who others don't talk to, you traitor! Now you have to be ostracized too ... is how it might go.
(I can't help thinking of the Father Ted Christmas special, where a group of priests have to organize a quasi-military operation in order to escape from Ireland's largest lingerie department without a scandal.)
>And also, why Master and Visa haven’t came with a solution where they integrate with all of that and innovate?
There are a lot of people who integrate with them. The issue is that anyone who does so must comply with the PCI DSS. Most hobbyists can't stretch that far, so they use intermediaries.
>This idea that all they do should be de facto standard for the whole world is so démodé.
I don't know what Brazil's issues were, but Visa and Mastercard show up, they integrate with terminal providers directly and indirectly, and they bring a battle-tested data security standard with them. Compared to other industries they are basically self-regulating, and some countries adopt the PCI DSS into law.
My dad ran a crisis management consultancy for years. I just googled a few of his clients, and they all survived the process. He would come in, assist with minor layoffs, repair business processes, usually get some software installed or updated (back when that was a huge multiplier for a business), and then leave when everything was running smoothly.
I am also aware of a situation where a pair of business consultants who were meant to be assisting with a software project were diverted (at their full rate of 1200/day) to assist with redecorating an office.
I was directly involved, on the opposing side, with a pair of business analyst consultants who tried to get a customer of mine to change their (admittedly terrible) vendor selection by repeating security concerns over and over in the meeting. They never actually got to the point of analysing said terrible vendor's terrible integration practices, or of costing up a migration path. They just banged on about security and contacted us separately after the meeting asking for more details about the security situation.
Basically, you get out of it what you want to get out of it. It depends on the consultant, their education, and the terms of their engagement. I don't know if statistics would be useful here, or how you would control for wildly different outcomes.
My guess: people would ask way more for that kind of information. That's why it's a totally immoral business: making people give away something they would never agree to give away for free if they were truly aware of it.
I don't know if it's "evidence of AI" so much as "evidence of laziness causing extreme public embarrassment".
Every good AI policy is basically:
1. You may use <supported LLM with enterprise data agreement>
2. You are still responsible for the quality of your output; customer-facing embarrassment is your fault and will not be attributed to the technology.
In this case, the LLM was used to generate a reference table.
>“It seems that these references were generated and attached to the document after the fact, as they are not cited in the body of the text.”
It's just a retrospective justification for content they had already written. That's not lazy editing; it implies a complete lack of research, while fraudulently implying that the research was completed.
>shrug off around 600 AI content creator accounts monthly.
>I fear losing the battle.
I was in a small niche creative writing community for a while, circa 2021/22. AI wasn't why I was there, but I demo'd a few LLMs to a lot of the users in the Off Topic section because people were curious. Even with an explanation of how they operated, almost everyone was at least interested. One author told me how he operated similarly, rote-learning how to write like his favorite authors by copying out their texts by hand, word for word. The main concern was that the models were too hard to use from a technical perspective.
These people knew I was there to learn, and that I was unlikely to ever try and publish LLM derived content. I said as much often.
Sometime in late 2022, a switch was flipped, and almost all of them started talking about how AI, and those who used it, were unambiguously evil. They didn't say my name, but they stopped engaging with me. Gradually, they started reposting Twitter content from extremely anti-AI people and complained about AI submissions to various publications. Eventually, someone reposted a tweet calling for the death of anyone who used an LLM, with not a single disagreement (and lots of encouragement).
I just bailed. I had only ever engaged positively, answered questions for the curious, and tried to help people out. I posted one AI-assisted story, and that was to demonstrate how my contributions were automatically tracked against the AI's contributions in the editor, to satisfy someone's curiosity, with the bits I had written clearly highlighted. It was just a technical demo; no one was asked to enjoy it, or to engage with it positively as if it were human-written.
A while later, most of their submission rules were updated with a new clause: if AI-written content was judged to have been discovered, they would blacklist that person from all submissions across their entire community. Considering I had demo'd LLMs, and considering the uselessness of AI detectors, it was clear to me that these people would be able to justify blacklisting me if I poked my head up at all. I had been developing my own story for submission (written myself, no LLM content), but I just dropped it. I didn't feel like sticking my neck out during a witch hunt.
I also used to be quite engaged with blockchain, and it went through a similar process: most people ignored it until that paper about the power usage (claiming it would spike to some level it never reached), and then suddenly being associated with it was an outrageous moral crime. After a while, when it turned out that the power-use claims were largely a nothingburger, people gave up on the hate parade.
I don't think you will "lose the battle" (at least in terms of keeping AI users out), and it's always OK for small communities to be selective about their membership. I just don't think it's possible to maintain such artificial rage for more than a few years. The AI datacenter water/power claims are a clear London Horse Manure problem that looks set to resolve itself, and the copyright issues will get sorted to some degree. Eventually I think you just won't care enough to ban anyone except low-effort spammers (of whom there are a huge number, granted).
Have you considered the possibility that most non-programmer people mostly experienced the negative effects?
Blockchain turned out to be an absolutely awful payment method, so most people only know it as 1) a way to do crimes like ransomware, 2) a get-rich-quick scam, 3) some buzzword companies threw in everything, 4) the thing that made GPUs unaffordable.
AI is now the thing that 1) is drowning the internet in slop, 2) companies throw into everything - to the point of making apps unusable, 3) makes most computer parts unaffordable. And what they get in return is... a kinda okay-ish Google? A homework plagiarism machine?
Their opinion about AI or blockchain most likely has absolutely nothing to do with you. They are just seeing the world noticeably get worse, and are desperately trying to protect their communities from it in any way they can.
>Their opinion about AI or blockchain most likely has absolutely nothing to do with you.
Which is why I left before I was banned. I no longer felt comfortable, and they probably felt likewise. They wanted a safe space to hate on people involved in AI art, and my leaving contributed to that. That said, I doubt I could have posted content calling for the death of authors, or honestly of any other group, in that space without being ostracised.
It's a bit like saying "a witch might have burned down their house, so their reaction against witches is understandable". Maybe, in the abstract. But that doesn't mean the subsequent actions are acceptable.
> Have you considered the possibility that most non-programmer people mostly experienced the negative effects?
Yeah, absolutely. These people in particular, at the time, really only experienced it through two factors:
1. They (like many people) posted a lot of their midjourney creations for a few months. (21/22 was like that)
2. They saw an increase in low quality submissions.
So gripes about AI art and low quality submissions seem perfectly valid.
>Blockchain turned out to be an absolutely awful payment method
>AI is now the thing that 1) is drowning the internet in slop, 2) companies throw into everything - to the point of making apps unusable, 3) makes most computer parts unaffordable. And what they get in return is... a kinda okay-ish Google? A homework plagiarism machine?
Yeah, so I am not complaining about people having negative opinions. I was talking about the overall meme, the zeitgeist switch where the entire conversation suddenly goes from pros/cons to a standard negative message that everyone absorbed in a short time, used basically as a thought-terminating cliché. I have problems with crypto, and I like things about crypto. I can have a great conversation with most people, but for 12 months or so you couldn't have a conversation without people loudly shouting about how the power use was going to destroy the environment and that it would consume X% of the power by Y date. They didn't want to talk about it; they had been given evidence that the discussion was over and everything was settled in favor of their beliefs. The AI debate has now roughly arrived in the same place. There's no longer really a discussion; the zeitgeist has this one single mode that gets constantly repeated, to the point where you could be running a local LLM trained only on data from the 1800s and still be considered responsible for some data centre single-handedly draining a lake.
My point is that, like with crypto, this fixed idea will eventually erode and the hate train will move on. People with well-thought-out negative opinions will still exist past that point; they just won't have people screaming about it at fever pitch constantly.
You didn't like the broader consensus views towards LLM usage, but that doesn't mean your leaving wasn't ultimately a positive for their community. It sounds as though there was a mismatch between what you and the broader group wanted, so perhaps a non-confrontational split is the best that could be hoped for in this situation?
> They wanted a safe space to hate on people involved in AI art and my leaving contributed to that.
Once again, I have to ask, why do you think that that is what they want? Maybe they want human generated content?
> the zeitgeist switch where suddenly the entire conversation goes from pros/cons to what appears to be a standard, negative message that everyone absorbed in a short time.
Understandable, though. Why discuss the pros and cons of $FOO when you're drowning in it? All you want is for the drowning to stop.
Yeah, I've seen similar anti-AI-gen witch hunts crop up over the past few years, in artist and even non-creative communities, and it's such a disappointment every time. My respect for people who engage in moral panic absolutely plummets. It's cringe, and I wish there were something I could do about it, but it takes so much work to try to deprogram someone who's been demagogued.
As you say, unfortunately, sometimes the best thing to do is to just wait it out, and hope that shouting into the echo chamber is all they end up actually doing about the ‘problem.’
I don't know why not wanting AI-generated content to swamp communities is a moral panic that people need to be deprogrammed from. Seems like a legitimate concern.
I saw your genuine post and upvoted since it seemed unfair.
I am mostly against all usage of LLMs because the treadmill is moving too fast, but if it's thoughtful usage with a lot of tinkering, I might change course. That goes for things other than LLMs too, obviously.
I'm not angry, you just seem to be taking a very self-centered view on the general vibe in this specific forum you mentioned, and are interpreting general anti-AI/blockchain sentiment as personal attacks.
It's more like: here are the decisions I made while being on the outside of the sentiment, and the timeline of that changing sentiment.
The only thing I really took personally was the call for death, and that was me making a decision to leave in favor of my mental health.
This is entirely vibes, based on reading research on similar campaigns, so I can't pull a paper with hard evidence about this specifically. But I believe Chinese/North Korean infowar campaigns are behind these seeded talking points. They seed them in far-left activist communities, and once one sticks, the real people in those communities start carrying the message out to other communities, and then the CN/NK botnets amplify the messages and suppress the responses. They don't just do this on the left; I'm just highlighting the left for this specific point.
The data doesn't purport to cover any more than the one website. It's not like any generalisations about other websites are derived from the data. It's just "these are where my hits come from".