
Any large communication platform has a choice: either accept some form of content neutrality or become a petty and chaotic tyrant constantly reeling from one public backlash after another. YouTube made its choice. Now random videos and channels get demonetized, content gets deleted for no reason, and people covering basic news speak in code to avoid the wrath of the idiot AI. In the background, Google publishes batshit-crazy research papers that call automated propaganda "AI fairness" and relies on a horde of underpaid serfs bordering on mental breakdown to make final decisions on content moderation. Welcome to the predictable future of your bad decisions.



> either accept some form of content neutrality or become a petty and chaotic tyrant constantly reeling from one public backlash after the other

No, there is a middle ground that Google, with its army of engineers, could implement in a weekend (a rough sketch in code follows the list):

1. You stop trying to assume you know what advertisers want their ads to be displayed on.

2. You implement a basic, fixed (but can be expanded) ACL-type system based on categories such as "hacking content", "politically sketchy content", "sexual content", etc.

3. YOU LET THE GOD DAMN ADVERTISERS DECIDE FOR THEIR OWN GOD DAMNED SELVES WHAT KIND OF CONTENT THEY'RE OK WITH ADVERTISING ON.

4. You end up spending LESS on content moderator salaries, and end up with FEWER unhappy advertisers because THEY can align their principles with the content. Hak5/Sparkfun would be fine advertising on Linux Experiments. I'm sure MyPillow would be happy to advertise on a Q Conspiracy channel. The demand for this feature is unquestionably there.

5. You stop playing God and pretending that the concept of global "community standards" means anything at all in a world with 7 billion people and hundreds of thousands of disparate interests-based communities, each with their own disparate community standards.
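To make the shape of (2) and (3) concrete, here's a minimal sketch in Python. The category names, the AdvertiserPolicy shape, and the example advertisers are all invented for illustration; a real system would obviously be bigger, but the core matching rule is just a subset test:

    # Hypothetical sketch of a category ACL where advertisers opt in
    # themselves instead of the platform guessing on their behalf.
    from dataclasses import dataclass, field

    CATEGORIES = {"hacking", "politically_sketchy", "sexual", "gambling"}

    @dataclass
    class AdvertiserPolicy:
        name: str
        allowed: set = field(default_factory=set)  # categories the advertiser opted into

    @dataclass
    class Video:
        title: str
        labels: set  # categories applied to this video

    def eligible_advertisers(video: Video, policies: list) -> list:
        """An ad is eligible only if the advertiser opted into every label on the video."""
        return [p for p in policies if video.labels <= p.allowed]

    policies = [
        AdvertiserPolicy("Hak5", {"hacking"}),
        AdvertiserPolicy("MyPillow", {"politically_sketchy"}),
    ]
    video = Video("WiFi Pineapple teardown", {"hacking"})
    print([p.name for p in eligible_advertisers(video, policies)])  # -> ['Hak5']

The point of the subset rule is that the platform never has to judge whether content is "brand safe" in the abstract; it only has to label it, and each advertiser draws their own line.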


> 2. You implement a basic, fixed (but can be expanded) ACL-type system based on categories such as "hacking content", "politically sketchy content", "sexual content", etc.

Who or what ensures that the Q conspiracy channel is properly categorized as "politically sketchy content" and not "hacking content"?

The reality is no amount of computer code will fix a human problem.


Your plan fails. Let's say they implement what you propose, and company X advertises, but only on the safe subjects. Then my immediate attack will be:

"Company X advertises on a website showing jailbait sexual content" or "Company X advertising on a site promoting Q Conspiracy".

You are going to fight an uphill battle explaining the nuances of the system to people, and it's a losing one.


Your comment doesn't make sense. YouTube still shows those videos in the current system; it just doesn't run ads on them. So if this were an issue, it would already have happened; the fact that it hasn't means it isn't an issue.


Again, you're dealing in absolutes here.

Yes, there are still jailbait videos on YouTube, but Google is already heavily moderating and deleting videos. The more "extreme" ones are already being removed; what is left is probably the tamer stuff. The number of jailbait or hate videos you see now is probably a tenth or less of what would be there if it were a free-for-all.

What is being proposed is no moderation of content at all: the advertisers choose what to show ads on, but that's it. In that case there would be a flood of this content, and then it's much easier to attack them.


> You stop playing God and pretending that the concept of global "community standards" means anything at all in a world with 7 billion people and hundreds of thousands of disparate interests-based communities, each with their own disparate community standards.

I think you're confusing the Internet with YouTube. The Internet has no global content standard, but this is not the world that YouTube lives in. It lives in the world of ad-supported services which has been repeatedly very clear about its minimum expectations regarding community standards. See: https://www.google.com/search?q=adpocalypse


> YOU LET THE GOD DAMN ADVERTISERS DECIDE FOR THEIR OWN GOD DAMNED SELVES WHAT KIND OF CONTENT THEY'RE OK WITH ADVERTISING

Have you talked with advertisers? They're really twitchy about this stuff. They even have brand-safety vendors they'll want to include in their ad buys, or have you integrate with if you're a platform like YouTube, to rule out ads on anything that could show their brand in a bad light.

I don't use YouTube, but I thought this was what demonetisation was: the creators were getting a trickle of revenue, but most of it was gone. That sounds to me like the impact of being considered not "brand safe", with most advertisers enabling such controls reflexively.


> 2. You implement a basic, fixed (but can be expanded) ACL-type system based on categories such as "hacking content", "politically sketchy content", "sexual content", etc.

The problem: for some of these, the definitions, the legal status, and the liabilities (especially around "politically sketchy" stuff) may differ wildly between jurisdictions. And you will always have trolls mislabeling their content on purpose, or content that is classified as "gambling" in the US but not in Germany... the list of issues is endless.
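To make the jurisdiction problem concrete, here's a toy sketch; the country rules below are invented purely to show the structural issue and don't reflect actual law anywhere:

    # Invented per-jurisdiction restrictions on the same labels. The point:
    # there is no single global yes/no per category, so one global ACL row
    # can't describe a video's status for every viewer.
    RESTRICTED_BY_COUNTRY = {
        "US": {"gambling"},   # say a loot-box video counts as gambling here...
        "DE": set(),          # ...but not here
        "RU": {"lgbt"},       # and a different label is restricted elsewhere
    }

    def ad_allowed(video_labels: set, country: str) -> bool:
        """Whether ads may run depends on the viewer's jurisdiction, not just the video."""
        return not (video_labels & RESTRICTED_BY_COUNTRY.get(country, set()))

    print(ad_allowed({"gambling"}, "US"))  # False
    print(ad_allowed({"gambling"}, "DE"))  # True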

> 3. YOU LET THE GOD DAMN ADVERTISERS DECIDE FOR THEIR OWN GOD DAMNED SELVES WHAT KIND OF CONTENT THEY'RE OK WITH ADVERTISING ON.

And then they will still have headlines "Youtube allowing Nazis, antivaxxers, incels and other threats to the general public". Not to mention the legal issues (e.g. Nazi content is banned in Germany/Austria, LGBT content in Russia, a whole boatload of stuff illegal in India with jail threats for local staff)...

> 5. You stop playing God and pretending that the concept of global "community standards" means anything at all in a world with 7 billion people and hundreds of thousands of disparate interests-based communities, each with their own disparate community standards.

You will always need some sort of "global minimum standards" that ideally is at least somewhat of a common ground in Western-allied nations. And that means: no Nazis/white supremacists, no Qanon, no antivaxxers, no incels, no adult content/gore, no drugs (tobacco/alcohol/illegalized drugs), no gambling, no glorification of violence.


Why Western allied nations in particular? I'm also not sure that list is as universal as you think. For instance, the no drugs thing would likely not apply to the Netherlands.


> Why Western allied nations in particular?

Simple: Western nations are a somewhat coherent cultural sphere.

Adding India (with its current war against Twitter and anything that dares criticize Modi), the Arab states and other majority-Muslim countries (women's rights, LGBT, democracy), or Russia/China (which are essentially dictatorships) into consideration would add way too much illiberality to be acceptable.


How do you know this could be done in a weekend, let alone a year, in a way that will make YouTube’s stakeholders happier than they are today? You seem to know an awful lot about this.

“There is always a well-known solution to every human problem—neat, plausible, and wrong.” —H. L. Mencken


The choice of "content neutrality" is pretty much shorthand for:

1. constantly reeling from one public backlash after the other

2. advertising dropping you due to controversial content

3. government coming after you for controversial content

If I were a business, I'd go content moderation all the way. Less blowback and more steady income.


Most of these platforms start off trying to be content neutral and end up adding more moderation because of how badly it hurts them. There's only so many times advertisers are willing to have their brand shown next to child pornography (i.e., Reddit's former /r/jailbait) or hate speech.

If complete neutrality were the answer, these companies would be doing it, since it's the cheapest option.


It seems to me that content neutral (neutrality? neutralness?) and copyright violations are there in order to build eyeballs and brand. You use them for growth.

Once the size is there, you curate. Profit becomes the issue and the tendency is to simply become cable TV with thousands of channels.

I wonder how you could architect video access for content that gets people's knickers in a twist but is still legal. There are still loads of single points of failure: TV set-top box access, smart TV/Roku access, the difficulty and expense of storing and serving up video, etc.


> There's only so many times advertisers are willing to have their brand shown next to child pornography (ie, reddit's former /r/jailbait) or hate speech.

I’m not sure advertisers care about this as much as people claim. YouTube’s censorship really ramped up in 2017, after Trump was elected, and was fairly limited before then. I don’t think they had trouble with advertisers in all the years before that. I could be wrong - have any sources that could help?


Many times advertisers honestly just don’t know. You’re spending a lot of money across a lot of different channels, and then all of a sudden somebody says, “uh-oh, we’re getting dragged on Twitter for advertising on $bad_page.” You definitely don’t like child porn or covid disinformation or anything like that, and the tweets make you look like an idiot, so you email the owner of $bad_page (some sort of advertising network, or maybe a site like Reddit) and say “if my ads are ever on this page again, I will pull my budget from your entire network.”


I greatly dislike the conflation of things like jailbait (which is mostly provocative clothed images taken voluntarily by teens) with classic child pornography, where a 6-year-old is brutally raped.

One is harmless the other involves lifelong trauma.


It's not necessarily harmless. JB includes nasty creepshots. And even if the subject took the photo themselves, it's unlikely they wanted a bunch of weirdos on the internet to lust over it. And if they somehow did want that lust, it's even more unlikely that they fully understood the consequences.

All of that can end up being very harmful to one's mental health. And that's not even accounting for the people who try to physically go after those children after seeing them and convincing themselves that they "love" the child.


That subreddit was exploiting children for sexual gratification. It wasn't just teens, and it wasn't people posting purposefully provocative images. It was children living their normal lives and having their pictures exploited for the sexual gratification of perverts.

Calling this harmless is absolutely disgusting.


It's Kafkaesque, really. Google has always thought it is smarter than everyone else, and that has led, inexorably, to it establishing itself as the final arbiter of truth. Except Google is a dysfunctional, distracted, neurotic, schizophrenic entity, like all organizations, with an ever-changing set of in-fighting fiefdoms and warring executives.

Worse, growthism forced its "Organize the world's information" mission into "Swallow and monetize the world's information" with an added helping of "know exactly what every person wants, even if they don't know they want it yet."


> growthism

any day now, the moral, upstanding, self-restrained capitalists will buck the profit motive and save us from the unsavory sorts who rule our world today.


I feel we pretty badly need to revisit laws surrounding social media platforms in the US. I'm not sure what the silver bullet is here, but the way platforms have almost limitless latitude to filter and shape discourse on their platform and also virtually zero liability for that same discourse seems like an obvious problem.

Unfortunately the average senator is over 60 and the companies that own these platforms have deep pockets.

"It is difficult to get a man to understand something, when his salary depends on his not understanding it." -Upton Sinclair


In this case, it looks like someone hacked their way into the channel and deleted it. This is probably not "channel taken down for violating policy" but rather "channel deleted by 'owner.'"

The wording in the email from Google suggests they don't yet have enough info to trust that the inquiring email comes from the channel owner (source: my own experience proving email account A and email account B were the same person).


If we must live in a cyberpunk dystopia, I wish we could at least get sci-fi style "holographic" displays as a consolation prize.


Don't forget the police playing copyrighted music while they abuse you to stop you from filming them. When the police are ahead of the tech, you know the People are f'd.


And at least when they're playing a popular song with easy-to-obtain MP3s, you can use gnuradio to cancel that out of the audio track.
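Very roughly, and purely as a sketch of the idea rather than an actual GNU Radio flowgraph: if you have the reference MP3 decoded, time-aligned, and resampled to match the recording, a normalized LMS adaptive filter can estimate the song's contribution and subtract it:

    import numpy as np

    def nlms_cancel(recording, reference, taps=64, mu=0.5, eps=1e-8):
        """Subtract a known reference signal (the song) from a recording using a
        normalized LMS adaptive filter; returns the residual (speech) audio.
        Assumes both arrays are mono, equal length, time-aligned, and at the
        same sample rate -- the hard part in practice is that alignment."""
        w = np.zeros(taps)
        out = np.empty(len(recording))
        # pad so the earliest samples have a full history window
        ref = np.concatenate([np.zeros(taps - 1), reference])
        for n in range(len(recording)):
            x = ref[n:n + taps][::-1]           # most recent `taps` reference samples
            y = w @ x                           # current estimate of the song component
            e = recording[n] - y                # residual: recording minus estimated song
            w += (mu / (x @ x + eps)) * e * x   # NLMS weight update
            out[n] = e
        return out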

Yeah, they're being shitty piggies, but we have tech we can use too.


no - see above comment re illegal sweepstakes



