
Well, you can't completely understand the calls for breakup until you understand that the politicians don't really care about the traditional monopoly issues. They're upset that Facebook, Google, and Twitter have as much control over the political discourse as they do (note the missing Amazon in that list, modulo the fact that Amazon's CEO happens to own a major newspaper). If society is like a big collective brain as it thinks about the big issues, those companies have installed filters on a significant portion of the neural links and they aggressively use them.

I personally think this is a bad thing as well, and fully support breaking them up for this reason alone. In a perfect world, we'd carefully and thoughtfully craft new laws to address this matter; in reality, we're probably going to "creatively" read some existing laws in new and unpredictable ways to reach some unprincipled compromise of what several groups want.

So I'm likely to be unhappy with both the theory and the practice of how they are actually broken up. Isn't government fun? But I do agree they need to be broken up.

This analysis is useful, then, as a conventional reading of anti-trust laws, but I fully expect that if a conventional reading of the laws won't get it done, then either new laws are going to be made or unconventional readings will be adopted.

I also think that these platforms are basically boned in terms of voters and public opinion. Consider two sets of forces on these platforms: First, the set of forces that want to insist that they carry any given piece of speech, and second, the forces that want to insist that they censor any given piece of speech. (Observe I'm not breaking this up on conventional political lines; all sides have their lists of both types of things they want.) I submit that there simply is no solution for a single universal platform that combines all of them. Almost all even remotely controversial speech is going to be on the mandatory list from one party and the banned list from another, which means that in the public opinion war, every political side is going to have an endless list of grievances against these platforms, literally no matter what they do. There is no solution for platforms at this scale. This is ultimately what is going to kill them; even if the government doesn't break them apart, something will.

In fact, in my first paragraph, when I said they "aggressively use" the filters, I'm not saying they even necessarily have a choice, or particularly accusing them of "incorrect" usage. I think at this point, they're simply at a size where they're boned no matter what they do with those filters. I do have my own list of particular localized grievances, but if you don't have one now, you probably will soon. The latest YouTube demonetization purge is hitting a lot of weird channels, including... a channel dedicated to making completely apolitical cartoon parodies of the Final Fantasy games? Everyone's going to be pissed at something. I don't think there's ultimately a solution to "the one big site that everyone uses". Applications to the stock price of the relevant entities left as an exercise for the reader.



> They're upset that Facebook, Google, and Twitter have as much control over the political discourse as they do

This is the part that resonates with me the most. It's about power. The tech giants have amassed power in a way not seen before. It's different from Standard Oil and the Bell phone monopoly. On balance, consumers aren't being harmed as they would be in a typical antitrust scenario.


> On balance, consumers aren't being harmed as they would be in a typical antitrust scenario.

I would argue that they are. If private info is the new currency we're paying for access to these services, the price has been constantly increasing as more and more information is collected through tracking and "partnerships". This is one reason why it would be nice to put a dollar value on private data: to make this price increase obvious.

Real competition might look like delivering similar services without requiring as much tracking data to do so.


One of the problems with claiming that customers are being harmed by the privacy invasion is that the companies can demonstrate that they are only making a few bucks per year per customer on these "privacy invasions", which makes the most naturally legally defensible definition of the "damage" being done only a few bucks per year, per customer. At that point, the argument that the customer receives far more value than that is very, very easy.

Showing that they do much, much more damage to privacy to make those several bucks is going to be legally challenging, and trying to convince a judge that the damage is yet higher because of my aforementioned nebulous concerns about "the social fabric being censored at the metaphorical-neural level" is going to fail unless you get a particularly activist judge, because judges aren't supposed to be ruling on the basis of abstract concepts like that. (That's supposed to be what Congress makes laws based on.)

One of the challenges to my mind here is precisely that on the whole, individual customers aren't being harmed. On the whole, individual customers come out ahead. For most people, this is an acceptable bargain and they are substantially on the winning side of it. It is only once a single entity aggregates substantially the entire market that society and democracy start to be substantially harmed, even though at no moment was there a transition point where individual customers switched over to being harmed.


I think the issue of pricing is harder than that - damages and the amount earned don't inherently line up.

If I sit at an intersection recording the cars passing and sell the data to the municipal government for, say, $500, and I note 250 cars, the drivers didn't lose $2 each. Just because you can make money off of someone else *doesn't* mean it is exploitation.

Conversely, if someone broke an MRI machine to steal the copper wire and sell it for $5, it doesn't mean the hospital suffered only that much in damages.


"I think the issue of pricing is harder than that - damages and the ammount earned don't inherently line up."

If you read again, you'll see I acknowledged that already. It's just that the easiest number to defend as the "damages" is the amount earned. But that number is so low that we're going to be trying to claim numbers literally 3 or 4 orders of magnitude higher. That's going to be much harder. Not necessarily impossible, but much harder.
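To make the scale of that gap concrete, here is a back-of-the-envelope sketch. The $5/year figure is an assumption standing in for "a few bucks"; none of these numbers come from any actual filing.

```python
# Hypothetical illustration: if the easiest-to-defend "damages" figure is
# what the company earns per user, a claim 3 or 4 orders of magnitude
# higher looks like this.

revenue_per_user_per_year = 5.00  # assumed "few bucks" per customer

claimed_damages = {
    magnitude: revenue_per_user_per_year * 10 ** magnitude
    for magnitude in (3, 4)
}

for magnitude, claimed in claimed_damages.items():
    print(f"10^{magnitude}x revenue -> ${claimed:,.0f} per customer per year")
```

That is $5,000 to $50,000 per customer per year that would have to be argued as the real damage, against a demonstrable $5 of revenue, which is why defending those numbers is so much harder.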


> If private info is the new currency we're paying for access to these services

I would argue that it is not info that's the important issue here, it's share of attention, and the harm suffered by consumers is the unknown effects that this is having on their personal wellbeing, as well as on society overall.

Of course, there's nothing illegal about this, and forming any sort of proof of harm would be incredibly difficult. I'm not sure what the proper path is from a legislative angle, but I believe that if critical thinking were more valued and promoted, people would be much less vulnerable to harm.


More than that... they are effectively filtering what news and information you see in general terms. More so than even the mainstream media ever could. It's been pretty well established that the people in charge of these organizations have a political bias and are not only able to, but willing to actively shift the messages that people see.


The mainstream media decides what gets published before that content ever even reaches Facebook or Google. In terms of censorship power, the mainstream media is actually further upstream than social media and it can be argued it has more power to decide ultimately what gets published and what doesn’t, especially considering it purely uses human editorial judgment rather than algorithmic.


> I would argue that they are

I don't see the argument. You characterize trackable information as currency, which is a different argument. Information is traded, but is not currency. The market puts a value on data, which degrades IMMEDIATELY, until it's worthless to anyone (e.g. MySpace data cannot be monetized anymore).


> If society is like a big collective brain as it thinks about the big issues, those companies have installed filters on a significant portion of the neural links

This statement seems like something most people would agree with until you dig deeper.

Do you think there should be zero filters? What about threats of violence? Targeted harassment? Terrorist recruiting? Child abuse? State-sponsored propaganda?

Living animal brains continuously prune malfunctioning cells and connections to stay healthy. Is it possible that some filtering of the "big collective brain" is similarly essential to healthy functioning?

Do you have specific examples of content that is being filtered that, if it weren't filtered, would likely result in better collective decisions?

If you have specific examples, are they pervasive enough to warrant shutting down beneficial filtering?


I wasn't making normative claims; I was being descriptive.

The politicians object to the fact that so many filters are under the control of one entity, which isn't them.

I'd also point out that there was an Internet prior to Facebook, and if anything, I'd actually call it incrementally less of a cesspool. Without everybody piled into one big room it was way easier to carve out a view of the Internet that wasn't so full of concentrated crap. Lots of people keeping their own bits of the net clean to their standards worked for me.

Now I will be normative. I see the Internet as a reflection and a consequence of society. To say that you're going to, say, eliminate all child porn from the Internet is saying that you're going to eliminate all child porn from the world. I mean that very literally; no metaphor. You can't do the former with anything less than what it would take to do the latter, and there's no reason to believe that's possible.

You can't solve this problem with filters. It doesn't matter whether that truth is a good thing or not, it simply is the truth. Your text casually assumes the filters are effective, and thus offers a seemingly unresolvable dilemma about "why do you want so much child porn on the internet, jerf?", but the filters are not effective. So, given the filters can't do those things, but can do other things, it's unsurprising that the politicians take an interest in this.

This is really just the old question "why can't the government solve all the problems with more regulation and more people looking over more shoulders?" translated into digital terms, and the answer is: by the time the government is large enough to do that, the people it has to be made out of have brought the problems in with them, only now they're somewhere they can hide themselves easily.


> eliminate all child porn from the world

Somewhat reducing CP is good, even if you don't eliminate it completely.

> the filters are not effective

Using your chosen example, there is very little CP on Facebook, Instagram, YouTube, Twitter and Amazon. That's prima facie evidence that filters can be effective, even if they're less than 100% effective.

> why can't the government solve all the problems with more regulation

I'm starting to see the theme of your argument now. If X achieves less than 100% perfection, then we should eliminate X, whether X is content filtering or government regulation.

Some drivers fail to stop at red lights and stop signs, causing accidents and injuries. But few people would agree to eliminate red lights and stop signs. Even though they're flawed, they are better than nothing at busy intersections.



