> nobody actually wanted to foster such toxic behavior
I think no one realized you could make disgusting amounts of money by fostering it. There have been plenty of flame wars on BBSes and forums, but the tension between "engagement" and "quality" always favored the latter in small, private communities (HN is a good example). However, when it comes to Facebook and Twitter, the former is always favored (due to market forces, shareholder interests, etc.).
Observe that flamewars on BBSes and forums tended to drive people away, which naturally confined them. Our current systems encourage and fan them, and by profiting from them, sustain them indefinitely instead of isolating them to the people who actively want to participate.
As someone with a low-grade hobby of keeping track of this sort of thing, I think one of the major challenges of structuring communities is that the most powerful drivers of what happens are the low-level decisions that create persistent second-order effects on the community's nature. This is one of the somewhat unusual cases where the second-order effects utterly dominate the first-order effects. Here you can read "first-order effects" as "the things the developers intended to happen", which I think are dominated by the structural effects of decisions that weren't intentional.
If you create a community that literally profits from conflict and flamewar, no number of systems bolted on top to fix that via first-order effects will ever work. The underlying structure is too powerful. Facebook can't be fixed with any amount of "moderation"; the entire structure, from the foundation up, is incapable of supporting a friendly community at scale. Until Facebook stops profiting from engagement, it will never fix its problems with toxicity, no matter how many resources it pours into naively fighting them with direct fixups.
(Now, I have questions about whether a friendly community at Facebook's scale is even possible: https://hackernews.hn/item?id=20146868 . But even if no solution is possible, that doesn't change the fact that Facebook is obviously not that solution.)
I'm always reminded of this quote from Larry Wall, the creator of Perl:
"The social dynamics of the net are a direct consequence of the fact that nobody has yet developed a Remote Strangulation Protocol."
It's tough to get all this right. Humans are OK at dealing with one another in person most of the time, but that doesn't carry over to being behind a screen in many cases.
I have no idea what the solution is... it's a people problem, so probably not a strictly technological solution.
HN works fairly well because of the hard work of the moderators.
FB is kind of maddening to me. You can't just put it in a 'good' or 'bad' bucket.
Some things I get a lot of value out of:
* Being able to keep track of acquaintances from all the places I've lived. There are a lot of friends and family I have in Italy that I can't see often, and I do enjoy hearing what they're up to.
* As an organizing tool, it's been a very handy, low-friction way to get people involved in some political issues where I live in Oregon.
On the other hand, lately it has also been a source of stress. The sheer amount of anti-science, poorly thought out political comments and plain hatred is really depressing at a time when a lot of things are not going well.
I dislike this trend (especially in Silicon Valley) to blame problems on the users - e.g. creating a startup, and becoming frustrated with users when people use it "incorrectly." Technology is supposed to be used by people, not the other way around. When technology is using people for its own interest (in this case, ad revenue), then we have a real problem with the technology, and it is absolutely not a people problem.
Only 80 years ago, plain images on posters could be used to motivate people to die for their country in World War II. 400 years ago, images were so rare that it was enough to paint church walls with them to fill people with belief in God and afterlife. And as of the last ten years, we're suddenly expecting people to drop their belief in images and use their "rational" logic to see through fallacies, saying it's a "people problem" when they can't? It's just too fast for evolution, and the onus is on the ones who create the technology that disseminates images to be careful, lest they create the perfect conditions for a society to fall apart because they were too busy looking out for their bottom line.
>The sheer amount of anti-science, poorly thought out political comments and plain hatred is really depressing at a time when a lot of things are not going well.
On this front, I prune my contacts when my feed starts stressing me out. It used to be you had to totally unfriend someone, but Facebook wised up and now you have a variety of options. You can put them on a 30-day timeout so their posts won't show up on your feed while they get their rant on, or you can unfollow entirely while remaining friends (so you can still actively check on them but won't get passively bombarded with dumb stuff). You can also opt out of seeing content from specific sources they share if the only problem is they're sharing dumb links.
It's still not perfect, but keeping in touch with people who post dumb stuff is always gonna be a balancing act and Facebook's come a long way in facilitating that act even though most of the options are not obvious (most of the above are found in the ellipsis icon in the upper right of every post).
"HN works fairly well because of the hard work of the moderators."
Moderators and size limits, the latter keeping the amount of work small enough that a couple of moderators can handle it, and aren't getting subjected to the sort of stuff Facebook moderators deal with. Obviously, not having images or video also helps that. (Though I recall some times when Slashdot trolls were taking some good swings at Can't Unsee even with those limits.)
HN is on the upper end of what a community structured in the way it is can handle, I think, and it has taken some tweaks such as hiding karma counts on comments. I'm not deeply in love with reddit-style unlimited upvote/downvote systems... in their defense, they do seem to scale to a larger system than a lot of alternatives, but it comes at a price. I do fully agree it tends to create groupthink in a community, as a structural effect, though I think that's both a positive and a negative, rather than a pure negative as some people suggest. Some aspects of "groupthink" become "community cohesion" when looked at from another point of view.
Never thought about it that way, but maybe that's why a reddit-style karma system does tend to hold relatively large communities together.
But even as one of the more scalable known systems, it still breaks down long before you hit Facebook scales, or "default Reddit subreddit" scales.
> There are a lot of friends and family I have in Italy that I can't see often, and I do enjoy hearing what they're up to.
In the old days we had to actively do that using letters, or emails, or phone calls. I think it was a better system because it forced you to choose who you cared enough about to stay up-to-date with. Minimalism isn't just about things, it's also about relationships.
I only have so much time, and FB makes it easier to keep in touch with more people. Sure, I'll find the time for really good friends, but it's a benefit to be able to keep in touch with more people who I enjoy having in my life.
If I was king of Facebook (or a social media company that had its network) and I could change things for users without worrying about the company's revenue I'd do two things.
1. No links to outside content.
2. Mandatory deletion of all historical data with a max retention option no longer than 1 year with a default of 30 days. (Let users pick the 'fuse' length within this time frame).
I think that would double down on the things I like about it (keeping track of acquaintances like you mentioned, handling events, etc.) while also removing a lot of the things I don't (arguing about news, targeted ads based on historical data).
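Rule 2 above amounts to a retention "fuse" per post. A minimal sketch of that check, assuming the one-year cap and 30-day default from the proposal (the names and signature are mine, purely illustrative):

```python
from datetime import datetime, timedelta

# Hypothetical limits taken from the proposal above.
MAX_FUSE = timedelta(days=365)      # hard cap: no retention longer than a year
DEFAULT_FUSE = timedelta(days=30)   # default if the user never picks a fuse


def should_delete(posted_at: datetime, now: datetime,
                  user_fuse: timedelta = DEFAULT_FUSE) -> bool:
    """True if a post has outlived the user's retention window.

    Whatever fuse the user picks is clamped to the one-year cap.
    """
    fuse = min(user_fuse, MAX_FUSE)
    return now - posted_at > fuse
```

The clamp is the important part: users get to choose, but only within the window the platform enforces.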
That said I'd just like to make it easier for people to control their own nodes (https://zalberico.com/essay/2020/07/14/the-serfs-of-facebook...), but I also recognize that getting the social element to work in a federated way is not easy. Maybe Urbit will pull it off eventually.
I kinda wonder about just having a no-politics policy. Allow user reports... If N users report a post for being political, or it triggers some regex, penalize the post in some way. Facebook doesn't have to be about politics; Instagram wasn't for a long time, but with the onset of the BLM movement, I've seen my timeline filled to the brim with politics, and at one point people were even saying that no one was allowed to post non-political content, because doing so shows how privileged you are to be ignoring the movement. I don't use Facebook for the politics... I just want to know what's going on in my friends' lives. You can still have engagement without politics.
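The "N reports or a regex hit" idea could be sketched as a simple demotion score. Everything here is a made-up illustration, not anything Facebook actually does: the threshold, the patterns, and the 0.5 demotion factors are all arbitrary placeholders.

```python
import re

# Hypothetical parameters -- the comment above doesn't specify values.
REPORT_THRESHOLD = 5
POLITICAL_PATTERNS = [re.compile(p, re.IGNORECASE)
                      for p in (r"\belection\b", r"\bsenat(e|or)\b")]


def penalty_score(text: str, report_count: int) -> float:
    """Return a ranking multiplier in (0, 1]; 1.0 means no penalty."""
    score = 1.0
    if report_count >= REPORT_THRESHOLD:
        score *= 0.5                      # demote posts many users flagged
    if any(p.search(text) for p in POLITICAL_PATTERNS):
        score *= 0.5                      # demote posts matching keyword patterns
    return score


print(penalty_score("Vote in the election!", report_count=6))   # -> 0.25
print(penalty_score("Here is my cat", report_count=0))          # -> 1.0
```

Of course, as the reply below the original comment points out, the hard part isn't the mechanism; it's deciding what counts as "political" in the first place.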
While that sounds appealing, that just moves the goalposts. Now, who gets to define what's politics? Is mentioning global warming politics? Is advocating for wearing a mask during COVID-19 politics? And the meta-discussion of what content constitutes politics is also inherently political.
As for user reports. I would expect the same kind of dog piling you see now with people flagging people/brands/content they don't like politically as "politics". Post a picture of a The Origin of Species? Politics! Post a link to Chick-fil-a? Politics! Etc.
Ultimately, politics isn't the issue. The real problem is the lack of clear, consistent, enforced rules, and the lack of consequences for breaking them.
People aren't encouraged to think twice before they post because there aren't going to be any significant consequences for breaking the rules.
Even if you somehow manage to get permanently banned from a social network, it's very easy to come back; it doesn't cost anything besides spending some time creating a new account.
From a business perspective it makes sense - why would you ban an abusive user that makes you money? Just give them a slap on the wrist to pretend that you want to discourage bad behavior and keep collecting their money.
Proper enforcement of the rules with significant consequences when broken (losing the account, and new accounts cost $$$ to register) would discourage a lot of bad behavior to begin with.
You could then introduce a karma/reputation system to 1) attach even more value to accounts (you wouldn't want to lose an account it took you years to level up and gain access to exclusive privileges) and 2) allow "trusted" users beyond a certain reputation level to participate in moderation, prioritizing reports from those people and automatically hiding content reported by them pending human review (with appropriate sanctions if the report was made in bad faith) to quickly take down offensive content.
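The trusted-reporter mechanism described above can be sketched in a few lines. The reputation threshold and the data model are invented for illustration; the one behavior taken from the comment is that a report from a sufficiently trusted user auto-hides the content pending human review:

```python
from dataclasses import dataclass, field

TRUSTED_REPUTATION = 1000   # hypothetical threshold, not specified in the comment


@dataclass
class Post:
    text: str
    hidden: bool = False
    reports: list = field(default_factory=list)   # reputations of reporters


def report(post: Post, reporter_reputation: int) -> None:
    """Record a report; trusted users' reports hide the post pending review."""
    post.reports.append(reporter_reputation)
    if reporter_reputation >= TRUSTED_REPUTATION:
        post.hidden = True   # auto-hidden until a human moderator reviews it
```

Reports from low-reputation users just accumulate for a moderator queue; only the trusted tier gets the auto-hide privilege, which is what makes bad-faith reports from that tier worth sanctioning.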
You can't solve social problems with technology, only policy. Companies like Facebook should be broken up and regulated so that the whole model of profiting from social division is removed. This would be highly beneficial to society and is an appropriate role of government.