I feel like all of this debate is painfully missing the point. The problem is not some content moderation policy; the problem is that social media has changed social conversations from small local interactions into monstrous virtual fight clubs between millions of people simultaneously, where the most extreme opinions are rewarded with the most attention. Boring, level-headed opinions used to at least have a chance of rising above the noise. Not anymore.
The other problem is that the business model of social media is based on generating "engagement" at all costs, so the platforms are built to encourage outrage as it generates lots of engagement, among other addictive behaviors (the infinite "algorithmic" feed for example). Social media was supposed to be a tool that serves people but its current business model encourages it to work against people.
There were plenty of other technologies that could've been used to organize large-scale virtual fight clubs (forums, BBSes, chatrooms, maybe even the telephone) but this didn't happen because nobody actually wanted to foster such toxic behavior.
> nobody actually wanted to foster such toxic behavior
I think no one realized you could make disgusting amounts of money by fostering it. There have been plenty of flame wars on BBSes and forums, but the tension between "engagement" and "quality" always favored the latter in small, private communities (HN is a good example). However, when it comes to Facebook and Twitter, the former is always favored (due to market forces, shareholder interests, etc.).
Observe that flamewars on BBSes and forums tended to drive people away, which tended to confine them through natural mechanisms. Our current systems encourage and fan them, and by profiting from them, create a system that sustains them indefinitely instead of naturally isolating them to just the people who actively want to participate.
As someone who has a low-grade hobby of keeping track of this sort of thing, I think one of the major challenges of trying to structure communities is that the most powerful drivers of what will happen occur in low-level decisions that create persistent second-order effects on the community's nature, and this is one of the somewhat unusual cases where the second-order effects utterly dominate the first-order effects. In this case, you can read "first-order effects" as "the things the developers intended to happen", which I think are dominated by the structural effects of decisions that weren't intentional.
If you create a community that literally profits from conflict and flamewar, you'll never be able to fix it with any number of systems added on top as first-order effects. The underlying structure is too powerful. Facebook can't be fixed with any amount of "moderation"; the entire structure, from the very foundation, is incapable of supporting a friendly community at scale. Until Facebook stops profiting from engagement, it will never be able to fix its problems with toxicity, no matter how many resources it pours into naively fighting it via direct fixups.
(Now, I have questions about whether a friendly community of the scale of Facebook is even possible: https://hackernews.hn/item?id=20146868 . But the fact that it may not even be possible does not change the fact that Facebook is obviously not that solution.)
I'm always reminded of this quote from Larry Wall, the creator of Perl:
"The social dynamics of the net are a direct consequence of the fact that nobody has yet developed a Remote Strangulation Protocol."
It's tough to get all this right. Humans are OK at dealing with one another in person most of the time, but much of that restraint disappears once we're behind a screen.
I have no idea what the solution is... it's a people problem, so probably not a strictly technological solution.
HN works fairly well because of the hard work of the moderators.
FB is kind of maddening to me. You can't just put it in a 'good' or 'bad' bucket.
Some things I get a lot of value out of:
* Being able to keep track of acquaintances from all the places I've lived. There are a lot of friends and family I have in Italy that I can't see often, and I do enjoy hearing what they're up to.
* As a tool for organizing, it's been a very handy, low-friction way to get people involved in some political issues where I live in Oregon.
On the other hand, lately it has also been a source of stress. The sheer amount of anti-science, poorly thought out political comments and plain hatred is really depressing at a time when a lot of things are not going well.
I dislike this trend (especially in Silicon Valley) of blaming problems on the users - e.g., creating a startup and becoming frustrated when people use it "incorrectly." Technology is supposed to be used by people, not the other way around. When technology is using people for its own interest (in this case, ad revenue), then we have a real problem with the technology, and it is absolutely not a people problem.
Only 80 years ago, plain images on posters could be used to motivate people to die for their country in World War II. 400 years ago, images were so rare that it was enough to paint church walls with them to fill people with belief in God and afterlife. And as of the last ten years, we're suddenly expecting people to drop their belief in images and use their "rational" logic to see through fallacies, saying it's a "people problem" when they can't? It's just too fast for evolution, and the onus is on the ones who create the technology that disseminates images to be careful, lest they create the perfect conditions for a society to fall apart because they were too busy looking out for their bottom line.
>The sheer amount of anti-science, poorly thought out political comments and plain hatred is really depressing at a time when a lot of things are not going well.
On this front, I prune my contacts when my feed starts stressing me out. It used to be you had to totally unfriend someone, but Facebook wised up and now you have a variety of options. You can put them on a 30-day timeout so their posts won't show up on your feed while they get their rant on, or you can unfollow entirely while remaining friends (so you can still actively check on them but won't get passively bombarded with dumb stuff). You can also opt out of seeing content from specific sources they share if the only problem is they're sharing dumb links.
It's still not perfect, but keeping in touch with people who post dumb stuff is always gonna be a balancing act and Facebook's come a long way in facilitating that act even though most of the options are not obvious (most of the above are found in the ellipsis icon in the upper right of every post).
"HN works fairly well because of the hard work of the moderators."
Moderators and size limits, the latter keeping the amount of work small enough that a couple of moderators can handle it without being subjected to the sort of stuff Facebook moderators deal with. Obviously, not having images or video also helps. (Though I recall some times when Slashdot trolls were taking some good swings at Can't Unsee even with those limits.)
HN is on the upper end of what a community structured in the way it is can handle, I think, and it has taken some tweaks such as hiding karma counts on comments. I'm not deeply in love with reddit-style unlimited upvote/downvote systems... in their defense, they do seem to scale to a larger system than a lot of alternatives, but it comes at a price. I do fully agree it tends to create groupthink in a community, as a structural effect, though I think that's both a positive and a negative, rather than a pure negative as some people suggest. Some aspects of "groupthink" become "community cohesion" when looked at from another point of view.
Never thought about it that way, but maybe that's why a reddit-style karma system does tend to hold relatively large communities together.
But even as one of the more scalable known systems, it still breaks down long before you hit Facebook scales, or "default Reddit subreddit" scales.
> There are a lot of friends and family I have in Italy that I can't see often, and I do enjoy hearing what they're up to.
In the old days we had to actively do that using letters, or emails, or phone calls. I think it was a better system because it forced you to choose who you cared enough about to stay up-to-date with. Minimalism isn't just about things, it's also about relationships.
I only have so much time, and FB makes it easier to keep in touch with more people. Sure, I'll find the time for really good friends, but it's a benefit to be able to keep in touch with more people who I enjoy having in my life.
If I were king of Facebook (or of a social media company that had its network) and I could change things for users without worrying about the company's revenue, I'd do two things.
1. No links to outside content.
2. Mandatory deletion of all historical data, with a maximum retention option of one year and a default of 30 days. (Let users pick the 'fuse' length within that window; a toy sketch of this follows below.)
I think that would double down on the things I like about it (keeping track of acquaintances like you mentioned, handling events, etc.) - while also removing a lot of the things I don't (arguing about news, targeted ads based on historical data).
That said I'd just like to make it easier for people to control their own nodes (https://zalberico.com/essay/2020/07/14/the-serfs-of-facebook...), but I also recognize that getting the social element to work in a federated way is not easy. Maybe Urbit will pull it off eventually.
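As for the 'fuse' in point 2, it's simple enough to sketch. A minimal sketch, assuming invented names and clamp bounds (not anything Facebook actually exposes):

```python
# Toy sketch of the retention "fuse": default 30 days, user-selectable
# up to a hard 1-year cap. All names and bounds are my invention.
from datetime import datetime, timedelta, timezone

DEFAULT_FUSE_DAYS = 30
MAX_FUSE_DAYS = 365

def deletion_date(posted_at, user_fuse_days=None):
    """When a post must be purged under the mandatory-retention rule."""
    days = DEFAULT_FUSE_DAYS if user_fuse_days is None else min(user_fuse_days, MAX_FUSE_DAYS)
    return posted_at + timedelta(days=days)

posted = datetime(2020, 7, 1, tzinfo=timezone.utc)
print(deletion_date(posted))         # 2020-07-31: the 30-day default
print(deletion_date(posted, 9999))   # 2021-07-01: clamped to the 1-year cap
```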
I kinda wonder about just having a no politics policy. Allow user reports... If N users report a post for being political, or it triggers some regex, penalize the post in some way. Facebook doesn't have to be about politics; Instagram wasn't for a long time, but with the onset of the BLM movement, I've seen my timeline filled to the brim with politics, and at one point, people were even saying that no one was even allowed to post non-political content because it shows how privileged you are to be ignoring the movement. I don't use Facebook for the politics... I just want to know what's going on in my friends lives. You can still have engagement without politics.
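To make the mechanics concrete, here's a toy version of that filter. The threshold, regex, and penalty are all invented, and a real system would need far more nuance:

```python
# Toy "no politics" filter: N user reports or a crude regex match
# down-weights a post in the feed. All names and numbers are made up.
import re

POLITICS_RE = re.compile(r"\b(election|senator|congress|ballot)\b", re.I)
REPORT_THRESHOLD = 5   # "N users report a post for being political"
PENALTY = 0.5          # factor applied to the post's feed score

def feed_score(post, base_score):
    """Penalize posts flagged by enough users or matching the regex."""
    reported = post["political_reports"] >= REPORT_THRESHOLD
    matched = bool(POLITICS_RE.search(post["text"]))
    return base_score * PENALTY if (reported or matched) else base_score

post = {"text": "Who are you voting for in the election?", "political_reports": 2}
print(feed_score(post, 100))   # -> 50.0: penalized by the regex match
```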
While that sounds appealing, that just moves the goalposts. Now, who gets to define what's politics? Is mentioning global warming politics? Is advocating for wearing a mask during COVID-19 politics? And the meta-discussion of what content constitutes politics is also inherently political.
As for user reports: I would expect the same kind of dog-piling you see now, with people flagging people/brands/content they don't like politically as "politics". Post a picture of The Origin of Species? Politics! Post a link to Chick-fil-A? Politics! Etc.
Ultimately, politics aren't the issue. The problem is the lack of clear, consistent, enforced rules and of real consequences for breaking them.
People aren't encouraged to think twice before they post because there aren't going to be any significant consequences for breaking the rules.
Even if you somehow manage to get permanently banned from a social network, it's very easy to come back; it doesn't cost anything besides spending some time creating a new account.
From a business perspective it makes sense - why would you ban an abusive user that makes you money? Just give them a slap on the wrist to pretend that you want to discourage bad behavior and keep collecting their money.
Proper enforcement of the rules, with significant consequences when they're broken (losing the account, with new accounts costing $$$ to register), would discourage a lot of bad behavior in the first place.
You could then introduce a karma/reputation system to 1) attach even more value to accounts (you wouldn't want to lose an account it took you years to level up and gain access to exclusive privileges) and 2) allow "trusted" users beyond a certain reputation level to participate in moderation, prioritizing reports from those people and automatically hiding content reported by them pending human review (with appropriate sanctions if the report was made in bad faith) to quickly take down offensive content.
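A rough sketch of how that two-tier report flow might look; the karma threshold and sanction values are placeholders I made up:

```python
# Sketch of reputation-weighted moderation: trusted users' reports are
# prioritized and auto-hide content pending human review. Numbers invented.
TRUSTED_KARMA = 1000   # reputation level at which reports gain extra weight

def handle_report(reporter, content, review_queue):
    """Queue a report; trusted reporters auto-hide content pending review."""
    report = {"content": content, "reporter": reporter}
    if reporter["karma"] >= TRUSTED_KARMA:
        content["hidden"] = True            # hide immediately, pending review
        review_queue.insert(0, report)      # trusted reports jump the queue
    else:
        review_queue.append(report)

def resolve_report(report, bad_faith):
    """Human review outcome: sanction bad-faith reports, restore the content."""
    if bad_faith:
        report["reporter"]["karma"] -= 100  # sanction deters abusive reporting
        report["content"]["hidden"] = False

queue = []
handle_report({"karma": 2500}, {"id": 42, "hidden": False}, queue)
print(queue[0]["content"]["hidden"])        # -> True: hidden until reviewed
```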
you can't solve social problems with technology, only policy. companies like Facebook should be broken up and regulated such that the whole model of profiting from social division is removed. this would be highly beneficial to society and is an appropriate role of government.
> There were plenty of other technologies that could've been used to organize large-scale virtual fight clubs (forums, BBSes, chatrooms, maybe even the telephone) but this didn't happen because nobody actually wanted to foster such toxic behavior.
“Nobody” is leaving out some pretty notorious sources of toxicity (e.g. 4chan). I think a key difference is that these huge platforms dramatically increase the reach of those communities by giving them much better tools, highly-available servers, etc. and in particular mainstreaming them into the same place everyone else is, making it easier to recruit and share outside those communities.
In the forum era, people had to learn about a particular site, learn the community, maybe create an account, etc. to know these existed — now it's just one Facebook share away, and there's an advanced "engagement" system ensuring that anyone who likes something widely shared will continue to see other content from the same source without needing to seek it out. Brigading was most noticeable as a wave of new accounts, and there wasn't an ML system making that activity drive unrelated users to see it.
4chan is only 4 months older than Facebook and is hardly that profitable. Which is what changed: online dumpster fires were not attractive to advertisers, but add a veneer of a social network and some basic location/demographic data and suddenly things change.
Facebook really had two periods, because it was limited in scale prior to mid-2006, when the number of users grew dramatically.
I think the more important point is really the advertising angle: taking a bunch of small communities and giving them high-quality hosting powered by a billion dollar company along with a pre-developed audience.
You could say that some topics discussed on 4chan are "toxic", but is the discussion in general also toxic? There are no cliques, no friends and foes, no flame wars, just a background level of ad hominem (very formalized, short phrases used to express disagreement with the message). There is no point in slinging insults at other users and no "honor" to protect: everyone is anonymous, there is no identity, not even a pseudonym or post history. It's incredible how much of the discussion (online and in real life) is about asserting one's status and preventing others from asserting theirs. Anonymous image boards are free from this.
Not every message is terrible, yes, but it’s not without cause that it has such a bad reputation. The idealized version you describe is far from representative.
They certainly fostered toxic behavior. The social interactions we see on social media—even in HN from time to time—are familiar to anyone who frequented forums in the past. The ad-supported business model industrialized it, though.
My point is that in most cases it wasn't the desired behavior; it was just sometimes tolerated when the offenders were high-profile users with connections to the moderation team and/or otherwise provided valuable contributions.
Yep, generates outrage, feeds on it and turns it into money. The added insult is that advertising money used to be used to fund investigative journalism and real community news coverage at local newspapers in every city and town in the US. Anyone know what that money is spent on now?
> I haven't seen evidence that outrage driven engagement is profitable, though the statement is frequently repeated.
Let's assume that you are able to create a social network at the scale of Facebook and that network effects aren't an issue (imagine a magical solution that interoperates with existing social networks, so that accounts work on either side and content is visible from both sides), with the caveat that you instantly ban accounts that participate in political flamewars, intentionally spread misinformation/fake news, etc. You're going to end up banning a significant chunk of people who would otherwise make you money if you just turned a blind eye to the problem, as current social networks do.
Is it true that failing to ban is the source of the divisiveness? I haven't used Facebook before, but it seems like the mechanism is the feed, which prioritizes divisive content.
Even as a relatively senior employee, you may be unable to discern intent. You might just get an incentive scheme that compensates you for engagement rather than profitable engagement.
The feed brings visibility to offensive content, but ultimately someone has to create that content in the first place. Even if you kept the feed as-is, as long as you had bulletproof moderation that would nuke any offensive content (or other mechanisms that are effective at discouraging people from posting such content), there wouldn't be any bad content for the feed to recommend.
> The other problem is that the business model of social media is based on generating "engagement" at all costs (emphasis my own)
Well, FB and the like have been bringing out the ban hammer a fair bit. Most recently it was the QAnon kiddos.
If they really were keeping engagement up at all costs, then banning a lot of accounts and doing all that hand-wringing would be a strange thing to do time and time again.
Yes, I think some people are becoming uncomfortable with unfettered free speech because the distribution model has changed in ways that are difficult to control and which are elevating what some people view as the "wrong" messages.
Prior to this era in which we're constantly online and connected, for a message to be heard it usually had to go through a professional editor at a newspaper, radio station, publishing company, or TV station. If you had an opinion you wanted to share, you had free speech but you probably didn't have a platform. Now messages are global, immediate, viral, and permanent.
Max Wang's video makes a strong case that Facebook's content policy, and Zuckerberg's concern with adherence to it, amount to a stringent ideology that lacks the flexibility to address clear cases it should cover but doesn't. In other words, it lacks good judgement.
I think it's going to be a long time before an automated system can exercise good judgement. Obviously having a professional editor review every tweet is impossible. So in the meantime, I think we will see a continual elevation of tension between people who value free speech at any cost and people who think that certain messages shouldn't be given a platform.
> I feel like all of this debate is painfully missing the point.
Your point also applies to news -- articles these days miss the point by design. You get more clicks by being provocative and hyperbolic. And yellow journalism isn't really new, but its comeback has been triumphant.
Over the last few weeks, Bing's news feed, the one that appears directly on the main page of bing.com, has turned into a tabloid cesspool (at least in its French version). Gradually but very quickly, it went from something reasonable, rather faithful to the range of important events (a celebrity item here and there is fine, after all, as long as it stays very minor), to pure clickbait. The titles are clickbait ('incredible', 'exceptional', etc.) and the content behind them is crap. It is possible that they simply put first whatever already gets more clicks.
If I translate the first entries titles right now:
* Amazing confidences -> an actress
* Big controversy -> a female singer
* Corruption suspicion -> OK, a politician
* Accused of lying -> an actor's wife
* a few trivialities and filler items
* Finally calm -> a dead singer's wife
* Turkish activities -> OK, international news
* Only from the previous item onward do you get a regular mix of normally interesting/important news items. You had to click the Next arrow once or twice to get there; everything, or almost everything, on what I'd call the front page was clickbait crap.
So, that's a feed, but the content comes from actual newspapers, and if I go to my local newspaper, half of the top content is also often national/international celebrity crap.
That's hardly surprising. Both social media and regular media are now part of the same competition to get more clicks. The whole model of online ad-driven content is rotten and creates the worst incentives.
It's not so simple: social media is how regular media get most of their clicks. Not that I'm a huge fan of social media companies, but the constant push by regular media companies for social media companies to promote "authoritative sources" (i.e. regular media) and the accompanying hit pieces make a lot more sense in this context. To me, these articles are being honest in their criticism when they call for less social media in general, not just more control of the existing social media. This is one of the latter.
I have a theory, that many of the high-profile "advertiser boycotts" recently aren't really about content moderation. I suspect that companies simply don't want to promote their body wash in the middle of a constant screaming match about wearing masks.
Being on Facebook bums me out. It makes me dislike my extended family, and old colleagues or people I knew from school. I've been unable to completely wean myself off it (the "engagement" hooks are strong), but I genuinely feel dirty and unhappy with myself after browsing the site. It certainly doesn't foster a positive view toward brands that I see advertising there.
I wonder if: (1) that perspective is common, (2) large advertisers have figured it out, and (3) their "boycott over hate speech" is really just a clever way to get social-trend-extra-credit for a move they'd like to make anyway?
One of the canned responses we have observed Zuckerberg make to the media in response has been something along the lines of "Facebook gives people who would not otherwise have a voice a chance to be heard" (words are mine, not theirs).
The web already did that. By advocating that the world use only a single website to "speak", we allowed a single website owner to create a business by charging its users for the opportunity to publish their "speech" to segments of the website's audience.
The reason most of those millions of people "joined" Facebook was not to voice their opinions to an audience. It was to communicate with family and friends. A new mode of communication.
If those people who wanted to voice their opinions to audiences had their own websites, who would visit them? Would they have an audience of millions?
(The term "millions" is used here as an expression for "many", not as an accurate estimate of the numbers in question)
I have heard that there is talk in the US media of doing away with the US Postal Service, a.k.a. Ye Olde Mode of Communication. It is unsettling how far things have come in eroding Americans' right to be free from commercial or ideological pandering. There used to be a legally recognised principle that an American could say "No more", and then the pandering had to stop. Try that with Facebook or any other "tech" company.
You are right; that is the central point. We have not been able to scale the small-scale interactions our species evolved with to include thousands, let alone millions, of people. Perhaps it's impossible to do this with brains such as ours. Nevertheless, the toxic result we get when we try is incredibly engaging, which is all that is required for the brokers to make money. This is not very different from a drug addiction. In both cases the victims are compelled, despite themselves, to engage in something that is harmful, while the drug dealers have absolutely no incentive to reduce or eliminate the damage.
I don't think it's painfully missing the point, it's just studiously ignoring that they've let the cat out of the bag and focusing on ways to mitigate the problem.
Social media is a natural consequence of increased communication of larger groups, and as an emergent phenomenon based on everything else you aren't going to get rid of it. So instead we focus on how to mitigate the problems and live with it, because even if you shut down every single social network today, there would be new popular ones used by almost everyone a year from now.
I am developing a site where the landing page only shows news about topics from sources you have chosen to see. There will be some limited interaction with those news stories. On a shared page, everything is brought together and you'll see what news sources other people are reading.
I am hoping this allows people to get the news they want, interact with it, and see what news other people are interacting with, without the flame wars and hate brigades that Twitter and FB prompt.
Google Reader had a limited kind of social networking where you could comment on the articles and your friends saw the comments if they were subscribed to the same feed, but there were no replies to comments and they never got outside your contacts. No popularity contests with Like buttons or similar.
Yeah, not too different. No user submitted links though and no comment system (at least when I first make it live). To be honest, I'll probably be the only user!
The Satanic ritual abuse panic and several other moral panics say hello. Loud and extreme opinions and bullshit have always been rewarded with more attention, because their adherents are more passionate.
Personally, I think the retweet model is slightly more problematic than the upvote model. For one, upvotes have an equivalent downvote (usually), which a retweet doesn't. If a famous account upvotes a post and I downvote it, they basically cancel each other out. If a famous account retweets a post, millions more will see it thanks to that one button click, and nothing I can do balances that.
Facebook also thrives on a similar "re-share" model, which leads to the most toxic content spreading extremely quickly and bouncing to new audiences, whereas no matter how many times I upvote this thread, only a few % of the world browses HN.
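The asymmetry is easy to see in a toy model (all numbers invented):

```python
# Toy contrast between the two models; every number here is made up.
def upvote_model(score, upvotes, downvotes):
    return score + upvotes - downvotes     # each downvote cancels an upvote

def retweet_model(reach, retweeter_followers):
    return reach + retweeter_followers     # one click adds a whole audience;
                                           # there is no "un-retweet" counterweight

print(upvote_model(0, 1, 1))               # -> 0: famous upvote, my downvote
print(retweet_model(1_000, 2_000_000))     # -> 2001000: nothing balances this
```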
It seems to me HN rewards views that are "interesting" but the most highly rated comments seem to be pretty mainstream nerd stuff. Do you see evidence to the contrary?
This is exactly the point I've been trying to explain to my friends who are outraged that Facebook didn't censor Trump's post. I think Facebook is one of the worst companies out there, but for other reasons.
The issue with Facebook is not about censoring posts we don't like; it's about removing the polarization that comes with a platform that wants to get you hooked so it can sell you more ads.
Facebook and other social media are making us HATE each other because that's what they make money on. The debate should be around how can you build an ethical social platform that values your time and opinions.
I do agree that the ad-based business model is designed to have us hate each other because that is what drives engagement. So I'd like to get your thoughts on this thought experiment. What if it was illegal for social media platforms to base their revenue stream on advertising? Instead, it is mandated by law that their primary revenue is based on a subscription model? "Want to use Facebook? You'll need to pay $4 a month to use it." What do you suppose the nature of Facebook would be like then and, to an extent, its impact to society at large? Perhaps better? It has been my impression so far that services that require paid subscriptions over free usage tend to respect privacy better, have higher quality content, and are less addicting in usage.
I would argue that the problem started with the introduction of “youth marketing”. Marketers saw “getting them early” as a way to have a customer for life and therefore make more money overall, and they dove in.
This resulted in prioritizing those things that younger people are drawn to, such as novelty and attention.
Imo, this resulted in a huge cultural shift that has now resulted in “boring level headed opinions [based on years of experience]” being lost in the sea of people fighting for more attention through more novelty.
The internet just threw rocket fuel on the problem.
Unfortunately this doesn't work unless a massive group of people does it, but: stop sharing things that make you angry. They make you angry so that you share them.
So much toxicity is rooted in exploiting this dynamic.
> Boring level headed opinions used to at least have a chance of rising above the noise.
... Because they were heavily promoted by the people who owned the media platforms. It wasn't because of any of their particular virtues.
Consider your bias when making this claim. What makes an opinion boring and level-headed? That it is squarely in the middle of the Overton window? Who sets the Overton window, in the pre-social-network world where the overwhelming majority of media is state/corporate-sponsored?
It's not some natural evolutionary process that caused boring, level-headed opinions to rise to the top. It was self-serving opinions rising to the top, that, because of their ubiquity, are believed to be boring and level-headed.
When viewed from a different cultural lens, many of those boring and level-headed opinions appear completely batshit insane. For example, the current American model for healthcare seems, for the most part (Everyone has a few minor tweaks they want to make - but setting that aside), boring and level-headed and reasonable to ~half the country. To the rest of the world, it appears completely insane, and anyone proposing switching to it would be a complete lunatic. In the US, however, the converse holds - switching away from it is considered by many to be absolutely crazy.
Which of those opinions is the reasonable, level-headed one? It depends on your current social setting, not on the virtues or drawbacks of the expressed idea.
The policy and the design of the interface is inextricably linked to that change in interactions. A hands-off policy is still a policy for a site that rewarded more inflammatory content in pursuit of certain metrics.
Amen. The funny thing is that it seems relatively straightforward to measure the positive and negative interactions and to promote content appropriately and transparently.
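A minimal sketch of what "measure and promote" could mean, assuming invented weights and field names:

```python
# Toy ranking by net interaction quality; weights and fields are invented.
def promotion_score(post):
    positive = post["likes"] + 2 * post["thoughtful_replies"]
    negative = post["reports"] + post["angry_reactions"]
    return positive - negative   # promote only when positives outweigh negatives

posts = [
    {"likes": 10, "thoughtful_replies": 4, "reports": 0, "angry_reactions": 1},   # score 17
    {"likes": 50, "thoughtful_replies": 0, "reports": 9, "angry_reactions": 60},  # score -19
]
print(sorted(posts, key=promotion_score, reverse=True)[0]["likes"])  # -> 10: calmer post wins
```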
It's more than just the toxicity of the conversations. These vapid online brawls have engaged millions more people to vote who never would have voted in the past.
Hence, introducing an easily swayed group of uneducated voters into the pool, and debasing politics to their level.
If democracy represents the intelligence of the average voter, then engaging literally tens of millions of uneducated/emotional/immature people to vote significantly diminishes the ability of our government to make decisions and/or speak about problems honestly.
> These vapid online brawls have engaged millions more people to vote who never would have voted in the past.
Voter turnout in presidential elections is the same as it has been for 50 years at least. Voter turnout in the most recent midterm spiked a crazy amount, but I'm not willing to credit social media so much as a natural reaction to the extremism of the current administration. Which sources are you using that I missed?
I don't really see anything very different there. Voters have always been easily manipulated: by newspapers, TV, whatever. Though I suppose one key difference is that the source of the manipulation is now hidden: you used to know which newspaper pushed what; now actors hide behind sockpuppet accounts and promoted ads.
I don't see us as having created a representative democracy to protect democracy from the masses. I like to think we are a representative democracy because the masses are supposed to have better things to do than keep the lights of government on.
It's pretty well documented in the Federalist Papers. I recommend reading them in their entirety, because it's IMO one of the most profound writings of political exposition in written history, even if you disagree with the philosophies being promoted.
Federalist No. 10[1] is probably one of the most highly regarded of the papers, and explicitly lays out these concerns. Selected quotes:
"AMONG the numerous advantages promised by a well-constructed Union, none deserves to be more accurately developed than its tendency to break and control the violence of faction. The friend of popular governments never finds himself so much alarmed for their character and fate, as when he contemplates their propensity to this dangerous vice."
"Complaints are everywhere heard from our most considerate and virtuous citizens, equally the friends of public and private faith, and of public and personal liberty, that our governments are too unstable, that the public good is disregarded in the conflicts of rival parties, and that measures are too often decided, not according to the rules of justice and the rights of the minor party, but by the superior force of an interested and overbearing majority."
"By a faction, I understand a number of citizens, whether amounting to a majority or a minority of the whole, who are united and actuated by some common impulse of passion, or of interest, adversed to the rights of other citizens, or to the permanent and aggregate interests of the community."
"If a faction consists of less than a majority, relief is supplied by the republican principle, which enables the majority to defeat its sinister views by regular vote. It may clog the administration, it may convulse the society; but it will be unable to execute and mask its violence under the forms of the Constitution. When a majority is included in a faction, the form of popular government, on the other hand, enables it to sacrifice to its ruling passion or interest both the public good and the rights of other citizens."
"From this view of the subject it may be concluded that a pure democracy, by which I mean a society consisting of a small number of citizens, who assemble and administer the government in person, can admit of no cure for the mischiefs of faction. A common passion or interest will, in almost every case, be felt by a majority of the whole; a communication and concert result from the form of government itself; and there is nothing to check the inducements to sacrifice the weaker party or an obnoxious individual. Hence it is that such democracies have ever been spectacles of turbulence and contention; have ever been found incompatible with personal security or the rights of property; and have in general been as short in their lives as they have been violent in their deaths."
Hamilton also talks a little bit about this phenomenon in Federalist No. 68, and Federalist No. 62 lays out the impetus for having an equally represented Senate (which is inherently undemocratic), part of which was to check the immediate impulses/passions of the people, as can be the case with the House of Representatives.
I loved Max Wang’s point that Facebook has chosen to focus on political engagement over social engagement. Like many people who used FB early, I remember when it was more about friends than allies (and opponents).
Facebook profits massively from the politics of outrage, and for Zuckerberg to claim that Free Speech is a moral imperative is for him to take a Strategy Credit[0]. While I strongly believe that Free Speech is a moral imperative, I don’t believe that Facebook makes decisions on a moral basis.
Facebook makes decisions on a profit basis, and by that metric it is meteorically successful. The Facebook employees who spent years collecting their $500k pay cheques might want to consider this before posting videos criticizing the company's direction.
My argument is that you can't have it both ways. If you want to collect $500k salaries, you're going to have to live with your employer pursuing immense profits.
I think the idea is we need diversity in online social media. I'd like to see human editors come back.
Here's how Benjamin Franklin dealt with his social media platform:
"In the conduct of my newspaper, I carefully excluded all libelling and personal abuse, which is of late years become so disgraceful to our country. Whenever I was solicited to insert anything of that kind, and the writers pleaded, as they generally did, the liberty of the press, and that a newspaper was like a stage-coach, in which any one who would pay had a right to a place... "
On the contrary, I regularly read articles and watch videos edited by "their guys". I don't have much interest in listening to people scream on social media, but I'm genuinely happy to learn the perspectives of people who don't think the same way as me.
A consequence of this is that you may find yourself changing your views-- or at least being more open minded about things your friends don't approve of.
If it's just politics with no direct relevance to your life, are you really better off that way? Does being more right, or just more open-minded, about something you have no influence over do you any good if it alienates you from friends or family?
Being more open-minded allows me to maintain a friend group that's similarly open-minded and wouldn't ostracize me for holding opinions they don't approve of. From what I've seen, this strategy works for most people, and life seems like it'd be really stressful otherwise; if, for example, I got a stalker and needed to start carrying a gun, I wouldn't want to be in a situation where my friends would abandon me if they found out.
I of course don't begrudge anyone who feels they have to keep a closed mind in order to maintain a reputation in their community.
> I think the idea is we need diversity in online social media. I'd like to see human editors come back.
People have tried. Either highly-moderated networks or "middle of the line" media. Other than a few exceptions (Real Clear Politics comes to mind), they've all failed. It's unfortunately clear that people don't want this.
That's the key insight. It's not Facebook's or Twitter's decisions that have led to a divisive discourse.
It's that given an option to express themselves on social media, people like expressing strong tribalism the most. And then, all you need is to add even a hint of recommendation engine boosting more engaging content to the top (a very reasonable idea, heck, HN does that) and you have a vicious cycle.
Yes, but that is a feature of the human race which is exacerbated, when it should be sublimated, by the way that FB works. Long, thoughtful posts are discouraged through lack of proper editing tools, deep narcissism is nurtured via "likes", algorithms target the lizard brain rather than the neocortex. Agency in controlling your feed is minimized. FB encourages, manipulates and monetizes our worst instincts. I hope that the fine people working there can make some changes when they finally save enough money to buy back their conscience.
While it's not a social media platform of any kind, I know Apple News has some of their main user-facing pages actively curated by humans they've hired directly from editorial positions at various traditional media outlets.
I thought it was an interesting idea when it started. I'm not sure how much of a real impact that's had, though.
I mean, isn't that approximately what's occurring that some people ("writers") are up in arms about, except instead of "libel" and "personal abuse" it's things like "hate speech"?
Facebook is not a "social network" in any meaningful sense of that word. It's a largely unmoderated narrowcast platform accessible to anyone willing to put down the money to use it, regardless of motivation, as long as it isn't obviously pornographic.
You see people trying to use it that way, while wading through a morass of paid advertising and promoted memes. I can use a brick as a hammer, too, but that doesn't make the brick a hammer.
Why do so many people forget that they control their feed? Maybe less than some of us would like, but I carefully curate my feed and use mute and so on and my feed has very little junk in it. Mostly family stuff and jokes/memes, with some ads for stuff that actually does interest me. It's not hard. Most of us here solve harder problems every day.
> If employees have an ethical issue with working there, why don't they quit?
Always super easy to say for someone else. Why haven't most of the people in the US government quit by now? In fact, insecurity about getting an equivalent job is a real thing. Not wanting to abandon a project you've worked on and expertise you've developed is a thing. Not wanting to abandon people you've worked with is a thing. Thinking you can do more good from within is a thing.
Even for those who consider quitting in protest, waiting until just after the next bonus/vesting day seems like a reasonable concession to other psychological or financial needs. Then that date rolls around, and all of that inertia is still there. Expecting others to make a hard decision that you've never been in a position to make yourself seems unrealistic and unhelpful.
[ETA: I don't mean to say this particular person has never been in that position. I don't know them. I've even forgotten their name. But I've seen this sentiment expressed many times and it always seems facile.]
agreed with this. for me personally, leaving FB was a confluence of directional concerns which arose for me in 2019 (as described in the leaked internal post), with a concomitant desire to spend my time on new, more human purposes. this was balanced against me enjoying my time working with the remarkable people on my team, a few logistical considerations, a surprise global pandemic, and also years of trust and good faith that FB had earned from me as both employer and societal phenomenon. i spent a little time scoping out stuff within FB as well (it's a big place), but factoring that out it still took me almost a year. the inertia is clearly there.
but i consider myself lucky! there are many people who experience all sorts of work-related insecurity with real gravitas. folks on work visas. folks paying off student loans or other debt. folks who had been working much worse tech jobs before and feel their FB offer was a lucky break. also the many folks at FB not on the tech side who don't earn stereotypical tech comp.
i have no idea how many people there are who satisfy these conditions or similar ones, but my guess is that if you're ever thinking to yourself, "why don't more people just quit?" it's because you're underestimating the scale of folks in that group.
It took him 9 years of collecting a fat paycheck and enabling Facebook to find his moral compass.
Is it wrong to be skeptical of these millionaire developers who ride off into the sunset to retire but leave a final "oh btw, lots of problematic stuff at my former employer" message?
It's not wrong to be skeptical. I do think there's a sort of boiling frog effect, though. Internal company culture never matches public perception, and you're always able to tell yourself a story that the public misunderstands what drives the company. After all, you hang out with your colleagues and you know they aren't bad people. But then some big controversy snaps you out of it and you realize that the company's internal collective fiction is no more accurate than the public perception, and perhaps you are the baddies after all.
hello, i'm the ex-employee in question and can speak authoritatively on the subject of myself :)
i joined FB for the first time as an intern almost a decade ago. i suspect i have a substantively different view of the company and the people who built it—even compared to many other employees, and certainly compared to folks who have not been in the ~room where it happens~ (heh).
even accounting for some amount of insider bias, i think there's still a material discrepancy between how the public viewed certain major FB "scandals"—via the lens of media spin and profitable reporting—and how many folks like myself, who were privy to additional context and private information, viewed them. to some people, the recent discord around hate moderation might seem like just more of the same FB badstuff. not so for me!
even folks at FB who don't work directly on certain products and policies are often immersed in discussion about them (unless they aggressively unsubscribe). these discussions, again, have their bias, but i hope we can concede that it's still a lot of passive brain cycles being spent swimming among these topics. folks develop deeper intuitions for how troubling X or Y publicized issue actually is, relative to all the things (of all flavors) that happen at FB and among its users. there's also a ton more discussion of legitimately positive societal work and how to extend those successes, whereas those rarely make good media narratives.
the good doesn't cancel out the bad—that's never how it works—but you do have to consider both together at every decision point.
in the leaked audio of my internal video post, i say that my long road to the door started in 2019. i still stand by that inflection point, and feel that before then, FB—while clearly on the back foot for some time due to rampant abuses of its product by powerful groups—was on balance moving in a direction i supported.
you may have made a different choice—or you may feel you would make a different choice, though you may not have the full slate of inputs right now to be sure you would, in situ. but, at least for me, i have reasons why the questions around the 2016 election or fake news or data breaches were things which, to me, seemed categorically different from the questions concerning hate moderation.
feel free to DM any earnest questions about why these things don't all congeal into one mass of problem for me (and i suspect other current and former FBers).
I'm still of the opinion that you chose personal gain over doing what you knew to be the right thing. It's something everyone (me included) does many times over. But the scale of wrongdoing by Facebook is exceptional.
Equally possible is that our intuitions about "the right thing" are wildly different. I think that democracy, societal cohesion, and personal privacy are important, and that Facebook has permanently damaged all three.
i think you may be setting up a few false dichotomies here.
> the scale of wrongdoing by Facebook is exceptional
absolutely—but i think you (and certainly, many many people in the world) may be missing that this is in large part due to facebook's scale, period, not any specific wrongdoing-at-scale. facebook is enormous and has enormous impact. enormous quantities of good and bad things happen on facebook and through facebook every day. it's not enough to /just/ point to its "scale of wrongdoing" to say that it's "wrong" to associate with it. every government in the world causes harm at scale. should we embrace true anarchy? are we all culpable for participating in society? i mean, maybe yes to that last question; but it's not a very interesting answer, possibly because it's not a very interesting question.
i think you have to consider things holistically. facebook does harm—that's bad! does it do good at scale, also? how much of the harm is facebook's "responsibility", at least from the perspective of assigning moral culpability? (b/c obviously it's better to treat all fouls as your responsibility, because you control only your own actions.)
consider the 2016 election mess on social media. do we pin the blame squarely on social media here? if so, why? i think social media was caught off-guard. social media started its life as small circles and communities, grew into a media and meme distribution platform, and then somehow became hijacked by powerful entities fueled by state money (including states themselves) as a dezinformatsiya and propaganda side hustle. was it FB's responsibility to predict and prevent that, even when so few others did or could? if so—are those other people and platforms not morally culpable for not making enough noise or action? are the state actors themselves not morally culpable for their atrocious agency?
in late 2016 and early 2017, i read lots of opinions and articles on the NYT about how FB didn't defend against disinformation and propped up negative political content. why did the NYT write so few opinions and articles about how the NYT published story after story of tabloidal but-her-emails drama, or how the NYT gave the trump sitcom team so much free press?
facebook is very big and powerful but it is not beyond exploitation. should we hold it morally culpable for being exploited in these ways? facebook is not only the messenger, but a lot of the times it is, and perhaps we should not be so quick to proverbially shoot it.
> I think that democracy, societal cohesion, and personal privacy are important and that Facebook has permanently damaged all three
dovetailing off the above: are the instruments of the state not causing this same damage? are traditional media conglomerates not causing this same damage? are political entities accepting massive cash injections not causing this same damage? are certain social institutions like megachurches not causing this same damage? are social norms sculpted by late capitalism not causing this same damage?
do you curse your server for bringing you bad food?
this question is rhetorical. the answer is obviously yes, people do that all the time, but like, maybe they shouldn't.
so for me the question is, how much of this damage do i feel that facebook is "immediately" responsible for, versus "secondarily" responsible for. i brought up the 2016 election above. my view is that FB was in large part taken by surprise and exploited in the course of those events.
obviously, i have insider bias here. but much as both-sides-ing every single argument is not actually a "neutral" position, neither is the position of being on the "outside" and not having insider bias. neutrality is a fiction; there's nothing truthier about being on the insider vs. on the outside.
but my position gave me (and other FBers) access to information about motion, decisions, and human actions which informed my evaluation of culpability. many of the "scandals" in FB-the-darling-child-of-the-media's history had a similar feel: facebook had some posture, it largely worked alright and allowed for good or neutral stuff to happen, conditions changed, "small" (but still at-scale) but high-profile harm was incurred due to some bad-faith actors, and facebook responded. not interesting to me, someone who watches this process happen literally every day.
employees didn't have trust in FB because they were given literal kool-aid to drink; it's because they had extra information and sometimes it led to obvious conclusions that are utterly non-obvious without that information.
but i think facebook's response to hate moderation has a different texture. it's had years to adapt to the new reality of constant assault from political forces (though i would never, ever expect or want perfection here). but rather than pushing back—as it did in pretty much all past privacy or data breaches—it seems to be adjusting to explicitly allow for and tolerate some of this behavior which i consider "bad".
why i stayed at FB past the moment at which i developed this concrete concern is for entirely selfish reasons. but don't conflate that inflection point with the whole history of FB's narrative, as told by the media. you may feel the whole story has a uniform texture, but i don't, and i suspect other FBers do not, and i suspect it's because there are good reasons to feel that way.
Maybe, I guess it's hard to know without knowing their retention metrics. As one anecdote, I was in the negotiation phase of an Oculus offer earlier this year and turned it down the same weekend of that infamous post[1] and I know a few people in my network whose employment status has quietly changed away from Facebook.
[1] Less because I was certain they should moderate it, and more because I thought it was weak how Zuckerberg threw Jack Dorsey under the bus, and how cynically the company steers the conversation to free speech to avoid taking responsibility for the adverse effects of product decisions. artfulhippo made the point I'm trying to make up-thread: https://news.ycombinator.com/item?id=23929769
FB employees launched a very strong virtual protest recently. Some even skipped dessert after their meals. I do not know what further moral courage one could show beyond this.
> People working for tobacco companies have always been in the same boat.
That's not a very fair comparison - there aren't exactly any huge positives associated with tobacco use. So it's not like someone could work for a tobacco company and say "sure some people die from consuming our product but on the other hand look at all the good we're doing..."
That's very different than Facebook where yes there may be some negatives but there's also positives (e.g. it's certainly raised countless millions for various charities, helped people keep in touch with others and discover (or rediscover) relationships, and spread the awareness of important social issues).
The positives you listed are features of most social networks, not just Facebook. If Facebook disappeared overnight, another platform(s), hopefully more ethical, would quickly take over its role, and it could be a net positive for the world.
That seems very optimistic to me. Ethically, Facebook is the perfect exemplar of an internet-oriented company. No better, perhaps, but no worse either because it's such a low bar. It seems at least as likely that any replacement would turn out to be even scummier, simply because it would absolutely have to hire from the same talent pool and would have had even less time to figure out what all the ethical dangers are.
But it's simply not going to happen, so this is all moot.
The majority of people stay. It's likely for one of two reasons:
1. They're morally bankrupt and prefer the money.
2. It's a very complex issue, and there's no simple right or wrong solution.
Facebook's famously tight-knit and tight-lipped engineering organization has become increasingly leaky in the past couple of years, as posts and data make it to the press with surprising regularity.
What changed? Is it morale? Or something that happens at scale? Does the rate of leaks say anything about the health of the company?
Talk to a reporter about why people leak. People leak when they feel that the organization doesn't listen to them and that they can't steer it any other way. If your employees feel respected and listened to, and that their internal warnings are being heeded, they won't leak to an outsider. If they feel they are a Cassandra, doomed to be correct but ignored, they will leak.
Now sometimes workers are wrong - the problem they were obsessed with wasn't actually that serious, their boss did take their advice into account and decided to resolve it a different way, etc. - but in general, lack of respect is why people leak.
The same thing that happened at the other tech companies: a newer, younger generation of employees joined. The Gen Z cohort seems much more ready to bring politics and ideological wars into the workplace, and to view any institution they're a part of as something to weaponize. This culture had already taken root on college campuses, and my belief is that they view that as the 'normal' way of operating, including making purposeful leaks. I know this is anecdotal and speculative, but I'm offering it as an explanation nevertheless, and would love to hear other theories on what changed.
I do not believe that the change is simply due to the companies growing bigger. They already were big in prior years and had gone through phases of fast growth in employee count previously. Something is different this time, which is why I am pointing to a generational change.
> "Yaël Eisenstat, Facebook's former election ads integrity lead, said the employees’ concerns reflect her experience at the company, which she believes is on a dangerous path heading into the election.
> “All of these steps are leading up to a situation where, come November, a portion of Facebook users will not trust the outcome of the election because they have been bombarded with messages on Facebook preparing them to not trust it,” she told BuzzFeed News.
> She said the company’s policy team in Washington, DC, led by Joel Kaplan, sought to unduly influence decisions made by her team, and the company’s recent failure to take appropriate action on posts from President Trump shows employees are right to be upset and concerned."
This is pretty damning stuff, coming from named, authoritative sources inside the company. I'm hopeful that the recent advertiser boycott will help shift priorities among the company's leadership, but I'm not holding my breath.
The advertiser boycott is a bit limited in effectiveness, as FB's revenue comes from its long tail, and direct-to-consumer brands are snapping up the inventory.
To be clear, Eisenstat's concern here is that Facebook isn't exercising enough editorial control over the content it allows. She argues that Facebook ought to remove content which upsets its employees, its advertisers, or the civil rights community.
Does the Big Bang Theory encourage violence against creationists?
I don’t ask that question seriously, just to point out that generalising things isn’t always useful. For example, “hate speech” is a defined term and the Big Bang theory ain’t it.
I don't think "encourages violence" is the standard that's being applied here. For example, on the 14th of June 2017, a political activist who'd been radicalized by Facebook made a list of elected members of Congress, went up to them, and asked them about their party affiliation before opening fire on them. There were zero mainstream calls to for Facebook to crack down on the communities or content that radicalized him, and this incident is almost completely absent from the narrative about the dangers of Facebook. Not only that, respectable mainstream media organisations like the New York Times falsely claimed that actually, the party that had been targetted was the one whose rhetoric was causing members of Congress to be shot, erroneously blaming the shooting of Gabby Giffords on them when in reality it seems to have been inspired by anger at her specifically that had nothing to do with national partisan politics at all.
> Does the Big Bang Theory encourage violence against creationists?
Excellent point. There's much broader support for banning advocacy of violence, than for banning statements that offend the audience's religious sensibilities.
Perhaps a better example is speech that advocates abortion rights. Pro-life advocates consider such speech to incite murder; pro-abortion advocates don't.
I think this adds an interesting wrinkle to the hate-speech / censorship debate: It shows that even meta-rules meant to keep a discussion civil (i.e., we won't allow speech that advocates violence) aren't necessarily neutral to the viewpoints being discussed.
In one of the controversies discussed in the article, Facebook banned an ad on the grounds that an upside-down red triangle constitutes hate speech - it's "a triangle symbol used by Nazis to identify political prisoners", you see. So I'm not sure the concept is quite as well-defined as you're suggesting.
Except that it is defined; it's a real thing. The upside-down red triangle specifically was used to identify political prisoners of the Nazi party, like liberals, socialists, and unionized laborers.[1]
I guess I'm not sure of your point, because to me the use of the symbol in a political context, and especially in the context of a polemical, populist political campaign, is problematic regardless of whether the Trump campaign, or whoever backed that advertisement, knew what it was.
The question isn't whether it's problematic in some generic sense, but whether it's hate speech or a call to violence. I'm extraordinarily skeptical that anyone sees a red triangle and thinks "ah, I understand, the triangle is telling me I should go engage in political violence".
This is extremely predictable though, because a sizable segment of Facebook users did not trust the outcome of the 2016 election. Why would it be any different in 2020?
People generally trust the outcome, whether they like it or hate it. What folks don't trust are the inputs.
2020 is not 2016. Polling is suggesting a large shift. That may cause distrust in the outcome itself if the outcome doesn't align with polling, due to voter suppression efforts and other possible issues.
Is the polling really all that different from this point in 2016? As I recall, in 2016, the eventual winner was way down in the polls right up until election day.
By the time of the election, the polls had narrowed considerably. There were a lot of clowns with predictions of 99% chances and whatnot, but people that actually understood statistics and polls had the race a lot closer to a coin flip at the time of the election (IIRC 60-40 odds).
EDIT: I would point out that this means that every four years, about 25% of the population picks the president: turnout hovers around half, and the winner takes roughly half of the votes cast. Ideally, we would be somewhere north of 40%. This bare-majority participation occurs amid one of the biggest extended and hyped media events in the world. Other countries routinely get above 60-70% participation with far less nonsense.
I live in California. My preferred candidates win the vote almost 100% of the time no matter if I vote or not.
That's true in many, many states.
Not voting is pretty rational for most people. (I still vote anyway, mostly out of habit, even though I know it makes no earthly difference whatever to the outcome.)
> Maybe the electoral college isn't really that democratic
Presumably you know this, but it's not supposed to be democratic. That's literally why it exists, to curb majority rule by cities at the expense of rural voters, who occupy the majority of the land and—more importantly—are cannon-fodder for our military and thus need a seat at the table if we city-goers hope to continue sending them off to die.
> Presumably you know this, but it's not supposed to be democratic.
Yes, that's why it's bad.
I don't think occupying a lot of land or joining the army at a higher rate entitles rural voters to more power than anyone else. If we're going down the route of determining whose vote should count for more, those two qualities seem to be near the bottom of the list.
And yet, we wouldn't have formed a nation at all without that compromise. Still, it's hard to know if the founders made the right decision.
We could just go back on our agreement. After all, people in flyover country are just a minority with little economic power and very little representation. I doubt anyone who matters would care if what little power they do have were reduced.
All of these presume low shifts in the Overton window, which may or may not currently hold true.
Further, 1/3 of the population voting may be "rational" (folks figure their individual votes don't sway things, or are uninformed and don't care to take the time), may be due to suppression (gerrymandering, ID hurdles erected against problems for which there's no evidence), or other factors.
You're right that anything can happen in this election, but reduced participation is a long term trend.
If people think their votes don't matter in a supposedly democratic system, that's a huge problem for its legitimacy. If they're uninformed, it's probably because they think their vote doesn't matter or because there's no good information available to them. Both are troubling.
But wouldn't these people still exist and not trust the outcome regardless? I wonder what the media consumption of these individuals looks like beyond Facebook.
It's pretty damning, but not in the direction the article wants it to be. Facebook is a very powerful company, so it's more shocking that it can be rocked by politically motivated hysteria over a red-triangle symbol (https://emojipedia.org/red-triangle-pointed-up/), the actual subject of most of this article, than that it took an extra few days to delete a highly debatable usage of that symbol by the US president.
Just to be clear, by "shifting priorities," you mean that you hope the advertiser boycott causes Facebook employees to editorialize posts from politicians and world leaders?
Wait, you think a social media company should be in the business of regulating what politicians or people can say about politics? The damning thing is that people believe a corporation should be in charge of deciding acceptable parameters for speech.

Some part of this article is spin. You can take a Trump tweet and spin it in a really horrific way if you like. Is Trump calling for laws to be enforced against property destruction and violent rioters? Or is Trump calling for violence against peaceful protesters? It's a matter of perspective, and therein lies the rub. You want censors to wade into these grey areas and declare one perspective to be the one true viewpoint.

The real world does not work this way, and what I'm surprised at is the lack of recognition of this, and the overall immaturity, at tech companies and perhaps modern society in general. Everybody thinks of themselves as the good guys. That is part of the human condition.
Facebook's corporate spin on this has been that all calls for them to improve behavior come down to calls for moderation, and I've been a bit sad to see HN (not you specifically, but in all of these discussions) buying into that spin.
There are plenty of ways Facebook could improve that are not moderation, starting with removing the incentives that make the most divisive/controversial speech have the furthest reach.
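To make that concrete: here's a minimal sketch, in Python, of the structural difference between ranking a feed purely on engagement and ranking it with divisiveness damped. Every name, weight, and signal here is hypothetical (Facebook's real ranking system is vastly more complex and not public); the point is only that this is a change to incentives at the ranking layer, not moderation of any individual post:

    # Hypothetical illustration only; not Facebook's actual ranking logic.
    from dataclasses import dataclass

    @dataclass
    class Post:
        clicks: int
        comments: int
        shares: int
        divisiveness: float  # assumed score: 0.0 (benign) to 1.0 (inflammatory)

    def engagement_score(post: Post) -> float:
        """Rank purely by interaction volume: outrage that drives
        comments and shares rises straight to the top."""
        return post.clicks + 3 * post.comments + 5 * post.shares

    def adjusted_score(post: Post) -> float:
        """Same signals, but divisive content is damped rather than
        amplified: an incentive change, not content removal."""
        return engagement_score(post) * (1.0 - 0.8 * post.divisiveness)

    flamebait = Post(clicks=1000, comments=400, shares=200, divisiveness=0.9)
    level_headed = Post(clicks=900, comments=50, shares=40, divisiveness=0.1)

    for score in (engagement_score, adjusted_score):
        top = max([flamebait, level_headed], key=score)
        print(score.__name__, "ranks flamebait on top:", top is flamebait)

Under the first scorer the flamebait wins (3200 vs. 1250); under the second, the level-headed post does (1150 vs. 896), and no human ever had to review either one.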
Social media, and all of the "talking heads" on network news and talk radio, are basically in an arms race, just like advertisers have been for years. For instance, Coke can't stop advertising because then Pepsi or another brand will take over, i.e., they have to keep it up. We are now at that same point with politics: everyone has to keep it up, and even more so; constantly upping the ante. Facebook is the dominant platform for upping the ante, which gives them money, and more importantly, power.
There seems to be a growing consensus here on HN that Facebook's ad-based revenue model is a root cause of the extremity, polarization, toxicity, and usage addiction occurring on the platform. All of these elements drive up engagement, and the more engagement there is, the more profit the company makes. It's a feedback loop that helps the company and shareholders, but at the cost of the stability of society at large.

So here's one idea to address the problem that I'd like to get others' thoughts on. What if we declared ad-based revenue streams for social media platforms a form of market failure? What if we made it the law that social networks must charge a premium, a subscription model if you will, to use their services: "Want to use Facebook? You'll need to subscribe for $2/month." What do you think the experience of using Facebook, and to an extent society at large, would be like then? It has been my experience that services I must subscribe to do a better job of respecting my data and privacy, are less toxic, have higher-quality content, and are less addicting. What I am suggesting here is that the desired behavior can be directed through proper incentives.
It took Covid for me to really believe how dangerous social media has become to society. It really has made people more divisive, angry, and conspiratorial.

I didn't really buy it when the Russia bot thing happened, but now I do, 100%.
It's possible to be an employee at what you think is an immoral company but be doing something innocuous that's just a part of a bigger whole. Does improving an internal observability tool as your job make you just as culpable as Mark? At what point do your actions become immoral? Just working for the company? Building anything for the company while an employee? Maybe you have to be the last-step implementor of some bad policy before the immorality attaches. I don't really know.
The answers to these questions aren't clear and are up for debate. What's harder to dispute is that the person directing everything, with a bird's-eye view of the company's direction, is culpable for the company's misdeeds. I think this makes it more reasonable to blame Mark than "all the other employees."
Sounds like you should cease participation in capitalism, and also society, then. I know a guy popping out of a well who would like to have a word with you!
You might not like the messy reality, but lots of my former coworkers feel inclined to stick around because of their work visas, to pay off their loans, or because they felt their FB offer was a lucky break and they want to support their kids as best they can.

If, as a citizen, you're not responsible for all the violence or oppression in your nation, then trust me: folks employed by a given employer are likewise not wholly culpable for the actions of their companies.
The article provides several examples in which many employees have flagged certain content but that Zuck, who was the final decision maker, did not agree or follow through on those flags. As long as he's the one making the final call, he deserves more of the blame.
Lol, what circles do you travel in? I was on a call last week where a guy who works for the DoD took issue with me working at Facebook. The irony there is quite delicious.
FB's religion of A/B testing for engagement, combined with its views on free speech (which are mostly good), creates a complicated product that brings out much of the worst in people. They need to reevaluate their engagement-at-all-costs philosophy.
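For what it's worth, the mechanics of that religion fit in a few lines. Assuming a metric like reactions per session (a hypothetical stand-in; I'm not claiming to know FB's actual metrics or tooling), an A/B harness that optimizes only for engagement mechanically ships whichever variant provokes the most reaction, for better or worse:

    # Hypothetical A/B decision rule; illustrative, not Facebook's tooling.
    from statistics import mean

    def pick_winner(results: dict[str, list[float]]) -> str:
        """Given per-variant engagement samples (e.g. reactions per
        session), ship the variant with the highest mean. Nothing in
        this rule asks *why* engagement rose: outrage and delight
        count exactly the same."""
        return max(results, key=lambda variant: mean(results[variant]))

    samples = {
        "control":       [2.1, 1.9, 2.0, 2.2],
        "rage_inducing": [3.4, 3.1, 3.6, 3.3],  # angrier users, more reactions
    }
    print(pick_winner(samples))  # rage_inducing

Reevaluating "engagement at all costs" would mean changing the objective in that decision rule, not abandoning A/B testing itself.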
I used to have a Facebook account and deleted it in 2009-2010. This is just me brainstorming, but would it make sense to require submitting a photocopy of your ID card to create a real-name account on such a huge platform, for the sake of creating something of a safe online forum/community? I would think people would be more responsible about what they post. I had done so for OfferUp, which to me makes sense, because people need a sense of safety and security when interacting with each other. I'd like to hear what others think.
Years ago, Facebook for some reason thought I was a bot and disabled my account. They required that I upload a government-issued ID to verify I am a real person. I was finding Facebook detrimental to my health at that time, so it felt like a perfect opportunity to leave the platform altogether. Plus, I didn't exactly feel comfortable giving them that kind of information.
The Brazilian government actually wanted to do this recently: require companies such as Facebook to make you submit your ID and link a phone number to your account. [1] It sure makes things easier for the authorities, since they are also dealing with fake news and hate speech. However, it goes against the LGPD (roughly a Brazilian version of the GDPR). And it is hard to trust a company such as Facebook with data like your ID. How can you be sure they're keeping it safe? How can you be sure they won't sell it?
The things that annoy the living shit out of me are twofold:
1) The content policy for the public is clear, well thought out, and easy to understand, but it doesn't apply to politicians. The rules that apply to politicians are hidden, opaque, and made up on the fly.
2) The moral compass of the leadership team is fundamentally blinkered and naive. They do not understand, or seem to want to understand, the viewpoints of other people. Unless you are posh, rich, and upper-middle class, you have to battle really hard to get your point across.
I have the impression we invented that Big Machine that drove the Krell to extinction in Forbidden Planet. It's just that ours is far less spectacular than the one in the movie.
> "White men are so fragile," she fired off, sharing William's post with her friends, "and the mere presence of a black person challenges every single thing in them."
> It took just 15 minutes for Facebook to delete her post for violating its community standards for hate speech.
I'm pretty sure that comments like that don't count as "anti-racism".
I'm pretty sure WeChat has many revenue streams that don't depend on engagement, so it has no incentive to encourage toxic behavior. The main problem with toxicity is not just the people; it's that the mainstream platforms are built to amplify the worst in people, because that's how they make their money.
Obviously there's less open dialogue about it, but I think it's fair to say that it must. The CCP censors it very heavily, sometimes explicitly stating they're trying to prevent rumors and misinformation.
Maintaining societal cohesion is the justification for a great many of PRC's actions (I would say 'evils', but that's my judgement), including the Great Firewall, extermination of Uyghurs, etc. It is fundamentally a waste of time to compare that society to America's melting pot on the level we're discussing.
I've found it to be of little use even to bring up the frameworks of other Western countries. I once brought up the German approach, which mostly just tackles hate speech and has independent non-profit fact-checking organisations label and correct posts, which is about as non-invasive as it gets while still trying to deal with the problem, and I got a ton of angry responses about censorship and free speech.
The US is basically free-speech fundamentalist. Expression trumps truth, and with that view in mind you by definition can't deal with the problem.
Those concepts of "truth" and "fact" may feel like absolutes but are a product of the observer. When people with similar beliefs/cultures/backgrounds/goals observe the same thing then it's easy to agree on what's true and what's not true. In general I think it's not a good look to tell somebody that the world they perceive is untrue, and that's why I support free speech as basic a human right :)
We're not talking about some highbrow philosophical disagreement or an argument about cultural values; we're talking about very basic facts, which have the benefit of actually having truth value regardless of what people think.

The pandemic is a pretty good example. There are a lot of protocols and behaviours that have been shown to spare hundreds of thousands of lives regardless of what timezone, nation, religion, or political party you're in, yet, aided by the constant social media barrage, the US is so divided it can't even manage to enforce them.
I have been wondering why Facebook has become the only platform that tolerates right-wing speech. The other platforms took the convenient route and kicked out right-wing voices.
Why exactly is Facebook risking both bad press and a revolt from its overwhelmingly progressive employees?
Facebook can afford to do so because they have higher margins allowing for more experimentation/risk-taking. Businesses in more competitive spaces are unable to take tough stands.
The New York Times cannot be the New York Times of New York Times v Sullivan anymore, because their ad monopoly has been decimated. But Facebook can be the NYT of NYT v Sullivan, because they have 40% margins on their ad business.
One thing I have been wondering is: why is Google so lacking in conviction?
Twitter has a more left leaning audience than many social media platforms, hence its creep away from tolerating anything right wing or centrist.
Facebook's audience is predominantly more balanced in that regard, with a larger percentage of conservatives using it in some way (especially those in older generations).
If they tried to go the same way as Twitter or Reddit, a large percentage of Facebook users would be absolutely pissed, and they'd likely lose a lot of traffic.
I think you're confusing conservative views with hateful ones. Zuckerberg makes the same mistake and it's discussed in the article:
>“He [Zuckerberg] uses ‘diverse perspective’ as essentially a cover for right-wing thinking when the real problem is dangerous ideologies,” Brandi Collins-Dexter, a senior campaign director at Color of Change, told BuzzFeed News after reading excerpts of Zuckerberg’s comments. “If you are conflating conservatives with white nationalists, that seems like a far deeper problem because that’s what we’re talking about. We’re talking about hate groups and really specific dangerous ideologies and behavior.”
Other platforms have plenty of voices from all parts of the spectrum. The difference is that they might actually be making a more sincere attempt at moderating.
> a post from President Donald Trump that seemingly called for violence against people protesting the police killing of George Floyd.
This seems very unlikely, and there's no link or screenshot of the actual post to support the statement. Twitter has previously taken statements that laws will be enforced as 'calls to violence' so it wouldn't be surprising if the same thing is occurring here.
The steel-man argument is that the President's statements practically amount to an indirect call for violence. There's no other way for the state to enforce laws except to use the messy apparatus of the police, which may result in collateral damage/violence.
The truly steel-man assessment of the President's statement of "when the looting starts, the shooting starts" is that he meant it empirically, not normatively -- describing what inevitably occurs in an environment of sustained looting (shootings committed by people guarding property, looters, and law enforcement).
If you want to be extra charitable, he possibly made that statement empirically to describe a tragic situation he wanted to prevent entirely by dispersing protests early. Such a justification and reasoning would be very authoritarian and is arguably wrong in efficacy, but that interpretation does not involve an indirect or direct threat of shooting.
My grandmother (an English teacher) once castigated email and predicted it would condition people at all levels to not be precise in language, and warned of how communications and ideas would degrade as a result. As a young nerd I thought that was absurd and the opposite would be true, but now seeing Twitter I have changed my mind.
My steel-man was an attempt to justify the deletion of Trump's post: by making the best possible case that it's actually a normative call to violence (even if indirect).
> The steel-man argument is that the President's statements practically amount to an indirect call for violence. There's no other way for the state to enforce laws except to use the messy apparatus of the police, which may result in collateral damage/violence.
Which would mean that literally any discussion of government policy which could result in law enforcement is a call for violence. While I don't particularly disagree with this stance, and Libertarians in general would be overjoyed to see this shift in perspective, I don't expect to see social media companies ban all discussion of political policy.
e.g. "Deadbeat fathers should have to pay child support". If you fail to pay child support you will eventually be arrested. If you refuse to comply with arresting officers, they will use force against you. Therefore calling for child support is an indirect call for violence.
I actually don't think there's that much wrong with what the President tweeted/posted, but I do think it's inaccurate to compare what he said to just any old discussion of government policy. He wasn't just discussing government policy, he was talking about enforcing it in a very particular way ("the shooting starts").
There is definitely a qualitative difference between:
"Deadbeat fathers should have to pay child support"
and
"When the looting starts, the shooting starts"
I personally disagree that the difference matters, but you have to accept that it's certainly debatable.
Yeah it's definitely legal, I don't think that's the issue. The issue is whether advocating for the legal use of force constitutes "inciting violence", which is against FB and Twitter's ToS.
The central debate is: should the ToS disallow incitements of violence, even if it's actually legal?
Oh, absolutely. I am just following that argument to its logical conclusion and was intentionally pointing out that it doesn't make a lot of sense.
I think you were pretty close in your summary down-thread:
> The issue is whether advocating for the legal use of force constitutes "inciting violence" ...
> The central debate is: should the ToS disallow incitements of violence, even if it's actually legal?
In reality, most of these arguments are working backwards to justify the conclusions they want. Would anyone argue that it should be against the terms of service to say "A woman should be able to fight to defend herself against a would-be sexual-assailant"? Unlikely, because almost anyone is going to think that that is justified (I hope). That is inciting a legal use of violence.
People arguing against President Trump's statements just don't like what he had to say; whether it's legal or not has nothing to do with their opinions. It's simply that they don't think it's justified, and that's fine. It's just more difficult to get something censored on the grounds of "I disagree with this and don't like it".
It's true that the people arguing against the President's statements aren't necessarily arguing from a principled stance, but that's true of any group that uses the levers of regulation. The Terms of Service, to a company, is sort of like laws to a state. Laws are important because they are instruments that can be used by everyone, and the government in theory ought to consistently apply them.
Take anti-discrimination laws, for example: they were passed in the 60's to combat what was then rampant discrimination against Black Americans by private establishments, however because the rules are generally pretty unambiguous, they can be used today by white people if they are being institutionally discriminated against, for whatever reason. But the group of people that would have utilized the levers of that law to combat discrimination against Black Americans will mostly not be the same group of people that might use the same law to combat any discrimination that might occur against white Americans — Black Americans might even accept discrimination against white Americans and vice versa (we see this happen today). To the law, it doesn't matter — Black Americans can utilize it for their own ends, and so too can white Americans — and as a result we ideally have NO discrimination against ANYONE. It doesn't matter what each group's motivations are.
That's how Terms of Services should (ideally) work, also. Just because the reality today is that one group is using the ToS and working backwards to justify some conclusion, the same levers exist for all other possible groups, and we hopefully reach an equilibrium where the Terms of Services are consistently applied. Unfortunately that appears to be blind idealism because — as we are seeing — companies don't necessarily apply their ToS consistently.
However, in this case, I don't think it's that Facebook doesn't want to apply their ToS consistently. I believe it's that activist employees, and opportunistic companies feigning boycotts, don't want them applied consistently.
If the screenshot you're referring to is "when the looting starts, the shooting starts" (oddly, this isn't placed next to the assertion of threat, probably to avoid the comparison):
a) it is not directed at George Floyd protestors as the article suggests, but at looters;
b) stating that violence will be responded to with force is a factual statement, and that response is entirely within the remit of law enforcement; the post does not encourage an extralegal or illegal response;
c) it is unlikely that BuzzFeed would allege a threat from any left-leaning world leader suggesting that violence would be responded to by law enforcement.
We should be auto-flagging BuzzFeed articles by now.
Regardless of whether you like Donald Trump or do not - and I do not - that doesn't change the facts of this matter.
"If a man is not a Socialist at 20 be has no heart, but if he remains one at 30 he has no head." I don't know who said this, but this is a great example of how our minds mature.
The probably unintentional ageism in tech has a major, quite literally civilization-changing consequence.
We are creating power centers with too many young people who haven't really thought a lot of things through, certainly not the unintended economic consequences, and who glibly dismiss people who disagree with their premises with some kind of "ism".
That quote is just a way for people who have grown more conservative than they were in youth to excuse their cognitive dissonance. It's not some great truth.
That's a crass dismissal of both the people involved here and of socialism. You think the problem really is that everyone at Facebook is too young and thus more inclined toward recklessness (somehow involving socialism)?
PS: Would you be shocked to learn that some become socialists after age 30?
Social media companies have been silencing conservative opinions on their platforms because most of their employees can't stand differing opinions. These people are unmanageable and want to do activism on a platform they don't own, while looking the other way when it comes to progressive comments. If he doesn't like differing opinions, which are intrinsic to democracy, maybe we should ask Mr Wang's parents if they miss China.
Turning this into a nationalistic slur and personal attack is no way to make your case. It also breaks the site guidelines and will get you banned here, so please don't go that route.