The article points to PR firms in Nigeria, Egypt and the UAE.
> Increasingly, we're seeing more PR firms or strategic communication firms offer computational propaganda as a service for all sorts of clients, including governments.
Now consider this: if this happens at this scale for the small cost of $150,000, how many PR firms are engaging in propaganda against some commercial target right now without your ever noticing?
Reminds me of a recent New Yorker piece where a local oversight group in a small California town tried to push out corrupt healthcare managers through elections, only to be subjected to massive online influence operations. Fascinating and terrifying[0]
Notably absent from the heap of recent articles about social media manipulation are Western governments. I have a hard time believing that everyone is doing it but us, and that our guys don't want or don't have that capability. This is hardly something that can be brought up to speed anywhere but in the field, so I'd expect there to be traces.
Now, there could be many reasons for that absence of news. The media could be constrained by the government. More likely, the social networks are being forced to keep silent. Soft power over / direct access to those US companies might be good enough for US intelligence not to warrant further actions that risk trust in the platforms. Or the actions might take place on battlefields far away and go unobserved by Western media (like, how would you even start getting around e.g. ID requirements in China... seems kind of hard).
Anyway, I just found it noteworthy how well this one fits US geopolitical interests.
There are a lot of fake posts promoting one party or another. I doubt it's all foreign interference. I'm pretty sure most of the political advertising/consulting firms provide some kind of "organic" boost, that being fake account/posts instead of just ads management. This is really the new SEO.
Happens on Reddit all the time once you know how to notice it, e.g. posts about the same topic on multiple subreddits getting over 10,000 upvotes in less than 3 hours, sometimes more than there are users of that subreddit!
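To make that smell test concrete, here is a minimal sketch, assuming you can pull a post's score, age and the subreddit's subscriber count (all publicly visible on Reddit); the thresholds are purely illustrative, not a real detector:

```python
# Rough heuristic for the pattern described above: vote velocity that is
# implausible for the size of the community. Thresholds are made up.
def looks_inflated(upvotes: int, hours_since_post: float, subscribers: int) -> bool:
    if hours_since_post <= 0:
        return False
    votes_per_hour = upvotes / hours_since_post
    # Thousands of votes per hour, or more votes than the subreddit has
    # subscribers, deserve a closer look.
    return votes_per_hour > 3000 or upvotes > subscribers

# e.g. 10,000 upvotes in 2.5 hours on an 8,000-subscriber subreddit:
print(looks_inflated(10_000, 2.5, 8_000))  # True
```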
> Now consider this: if this happens at this scale for the small cost of $150,000, how many PR firms are engaging in propaganda against some commercial target right now without your ever noticing?
You mean like the way certain East Coast (and probably West Coast) billionaires spend millions to influence elections in the flyover states?
I think everyone has taken notice. The propaganda right now is so all-encompassing that we can't help but take notice.
Even on HN the past few years the number of propaganda posts and comments has gone up and up and up. Continues to increase right now.
What can people do though? Other people have the right to free speech. You just have to try to tune out all of the Anti-Tech, Anti-China, Anti-US, Anti-Amazon, Anti-Iran, Anti-Trump, Anti-Immigrant, Anti-Obama, Anti-minority, Anti-etc etc etc blather.
> Even on HN the past few years the number of propaganda posts and comments has gone up and up and up. Continues to increase right now.
How do you know this? What do you use to decide? As far as I can tell, the way most people decide this is: if I don't like it, it's propaganda. Nobody thinks that consciously, of course, but it matches what people say surprisingly closely. That has no evidentiary value, except as an indication of what someone dislikes. You learn this very quickly if you're responsible for the community as a whole, because people have such different likes and dislikes that basically everything contentious gets dismissed as 'propaganda' or some such abuse.
Two things seem clear: (1) propaganda exists; (2) "I don't like it" is no way to identify it. There needs to be something more objective. When people are worried about abuse on HN and we look into it, our bar is that we have to see at least some evidence, somewhere, of something amiss. That's as low as the bar can get, yet even that is enough to reveal that way over 90%, probably over 99%, of perceptions of propaganda have no basis beyond the feelings of the perceiver.
Throwing some ideas out here, since this is a bit of a meta-discussion but still on topic.
I think other social media sites have realized that there's more out there than just "legitimate users" as well. Identifying propaganda is tricky, and it certainly can't be done just "because I don't like it", but there are some factors that can be checked (a rough sketch follows the list):
- Accounts focused on one or a few (thorny) themes (they might talk about other things for some "reputation laundering", but they're mostly focused on specific subjects).
- Activity times not corresponding to the purported user location (and centered around "working hours").
- Linguistic patterns also not corresponding to the purported user location (or suggesting usage of automated translation).
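To illustrate, here is a minimal sketch of how the first two checks could be scored, assuming a hypothetical `Post` record and arbitrary thresholds; the linguistic check would need NLP tooling and is left out:

```python
# Toy scoring of the first two signals: topic concentration and activity
# hours that don't match the claimed location. Thresholds are illustrative.
from collections import Counter
from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    topic: str      # e.g. a subreddit, tag or recurring theme
    hour_utc: int   # hour the post was made, 0-23 (UTC)

def topic_concentration(posts: List[Post]) -> float:
    """Share of posts devoted to the account's single most common topic."""
    counts = Counter(p.topic for p in posts)
    return max(counts.values()) / len(posts)

def off_hours_ratio(posts: List[Post], claimed_utc_offset: int) -> float:
    """Fraction of posts made between midnight and 08:00 local time for the
    location the account claims to be in."""
    local_hours = [(p.hour_utc + claimed_utc_offset) % 24 for p in posts]
    return sum(1 for h in local_hours if h < 8) / len(posts)

def looks_suspicious(posts: List[Post], claimed_utc_offset: int) -> bool:
    # Mostly one theme, posted mostly while the claimed location is asleep.
    return (topic_concentration(posts) > 0.8
            and off_hours_ratio(posts, claimed_utc_offset) > 0.5)
```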
I can't speak for the person you are replying to, but I already decided, so I don't need to show evidence for anything. I feel so much less guilty about all the time I waste here having total confidence that tier 1 state actors are in fact using their very best operatives and technology (not the low end crap they use to fool the rubes on Facebook, Youtube, and editors of newspapers of record) to wage a highly sophisticated influence operation against us. Surely if they could only succeed in brainwashing us, the users of HN, to go along with their dastardly plans the entire free world will collapse and world domination will be theirs.
So yeah whatever, say it's not happening, based on all the evidence that having complete access to the site gives you. I don't believe you. I just know, deep down, based on my limited unprivileged access and my casual analysis of a small part of the content, that anyone who disagrees with my fundamental beliefs on here is a total shill.
I'm reading that as satire (edit: though your last line was so uncannily realistic I really had to triple-check). To reply seriously to one bit: I'm not saying it isn't happening. I'm just asking the people who are sure it's happening, how they know. Either they have a way of deciding this that actually works, in which case we need to know what it is, so we can use it—or they don't, in which case the answers are interesting for other reasons.
I don't believe it's happening here unless we want to count the Rust Evangelism Strike Force. If it were then you would have noticed or people doing analysis on the public data would have posted reports of accounts involved and how their activity is coordinated. Look at all the posts over the years breaking down what gets to the front page and who posts it. It's not as if there aren't users who are actually paying attention. Actual campaigns have been active on Facebook, Twitter and Reddit, and indeed there are reports making specific and detailed accusations beyond "I don't like your comment history".
Joking aside, I believe that the minimum standard for an "influence operation" is one where "operators" use a large number of sock puppet accounts in a centrally coordinated manner to dominate the discussion forum. There has to be deception: pretending to be multiple people.
If I ever see activity that appears to meet that standard I'll report my evidence of it to the admins, trust they will take appropriate action if necessary, and continue to assume everyone who doesn't agree with me is a reasonable person acting in good faith.
Could you quantify some of your assertions, so I can better understand your perspective? I'm not looking for sources and serious analysis, but just looking for more precise numbers. What % of posts do you think are manipulated? How much worse has that number gotten over time?
I think state actors would be best served by making bots to upvote comments that serve their needs rather than directly commenting their position.
Ballpark, I'd estimate 1% of comments on sensitive topics, but 20% of votes on those topics, are made with the purpose of intentionally misleading the public.
It was satire, I was only kidding. As of this time I believe about 0% of posts here are manipulated. It's not to say we're not all exposed to propaganda and spin all the time, just that HN is not a forum that makes sense to target - it's too small.
Don't you know? Everyone on Hacker News is a bot. Even me. Even you.
\end{jest}
More seriously...
The thing about propaganda is that once someone starts believing it, that someone can potentially (and often does) spread it organically. Hacker News can inadvertently be a hotbed for propaganda spreading of all sorts without even a single bot or paid poster/commenter. I know I've fallen victim to that before, parroting something that ended up being misleading at best, because I had been convinced through disinformation that it was the truth and therefore needed to be spread. I therefore try to be cognizant of when others appear to be doing the same, seeing in them the same patterns I exhibited.
In other words: just because it ain't violating the HN community guidelines doesn't mean it ain't propaganda.
Good question. I don't know if there are any objective criteria, given that the difference between a propaganda campaign and a more benign information campaign is often pretty blurry and subjective.
I think the best bet is to call out (or at least note) the common patterns as they pop up. For example, I know you happen to hold a dim view of accusing others of "whataboutism", but that does happen to be a very common form of modern-day, organically-spreading propaganda ("Who are you to say my country does evil things when your country does evil things, too?", says the accidental propagandist, forgetting that two wrongs don't make a right and that one country doing evil things does not excuse another country from doing them), and it's important to be able to identify it for what it is. Granted, that line of thinking can be valuable in reverse (i.e. "I don't think it's right that your country is doing this evil thing, so I shouldn't be okay with my country doing a similar evil thing."), but those sorts of comparisons don't typically seem to be made with that kind of self-awareness or good-faith desire to understand the other side of the debate.
"Accidental propaganda" is an oxymoron. What's the line between that and just being wrong or disagreeing? It isn't bad arguments; people are even more likely to call 'propaganda' (or astroturfing, shilling, etc.) against good arguments, because good arguments one dislikes are even more activating than bad arguments one dislikes.
The problem is that there are two different phenomena surrounding those words. The first is real abuse; the second is people labeling comments they dislike as abuse because they can't imagine anyone could hold them in good faith. As far as we can tell, the second problem is much more widespread than the first.
> As far as I can tell, the way most people decide this is: if I don't like it, it's propaganda
I don't think that "most" is accurate.
It is not hard to recognize propaganda, even when it coincides with one's established views.
In elementary school we had many quizzes and drills on this sort of stuff as part of our civics courses. The problem is, I doubt schools teach this stuff anymore, even though this kind of very basic critical thinking is very important for a functioning society.
I'm talking about the things people say on internet forums like HN about propaganda and related terms (astroturfing, shilling, brigading, foreign agents). At least on HN I can tell you that "most" is an understatement. That's plain from the comments and even more plain in the voting data. (I should make that more precise, though. Obviously I'm not reading people's minds to know why they say 'propaganda', etc.; it's rather that what people say about this is overwhelmingly correlated with whatever their view on the underlying issue happens to be.)
Elementary school is a pretty different context. Schoolchildren may be more reliable on this, actually. Although they tend to imbibe the views of their parents, they haven't yet had time to build up the same fixed emotional habits around them.
Those civics classes only prepared people for the 20th-century world of propaganda, made up of overly enthusiastic pamphlets and loudspeakers, and to their credit they did a good job of it. The problem is that those people are still unequipped to decide whether WeebFoxMaster13 is a real person or some marketer being paid $10 an hour to sell them on the idea of political activism.
I don't think so. Propaganda works by convincing you without making you suspicious. If you can recognize it, it has already failed. Of course, not all propaganda is actually concerned with convincing you; most of it is probably only concerned with convincing a majority, and your minority opinion might not be worth the effort.
I think this hinges on one's definition of propaganda. I believe most propaganda starts as being spread by state actors - but the majority of the spread is probably done by normal citizens.
If an intelligence agency was directly posting false or misleading information on HN for the purposes of influencing readers, presumably if HN mods found evidence of this they would try to restrict it to the extent possible.
The much more challenging issue is that once a propaganda meme has taken hold in a normal citizen, they will also spread these ideas. This creates a nearly-impossible to moderate situation where a percentage of the population sincerely holds a belief and chooses to spread it.
For example: in recent threads about the release of Snowden's book, there were some comments that presented ideas as fact which are contradicted by publicly available evidence. I think it is fair to call this 'propaganda', in the sense that many of these ideas were likely deliberately crafted by intelligence orgs at inception, but there is no actionable moderation that can prevent it. It's impossible to tell whether someone spreading an incorrect fact is doing so out of sincere belief or an ulterior motive.
I agree that it seems like more propaganda has been appearing on HN in the past couple years, but sadly I think it is impossible to do anything to prevent it.
Any message board participant with even the most casual desire to score points and dunk on people could connect virtually any argument to "state-generated propaganda" using this logic.
Just to add on a bit to the other replies: banning people from a private service, socially ostracizing them, vigorously criticizing them, attempting legal forms of economic retaliation (like boycotts), etc. are all part of Free Speech too. Free Speech is about not having physical force used to shut down the market of ideas, because of the millennia of proven destructive effects that has. That doesn't mean all speech and ideas are of equal value, though, far from it. The point is to push the judging of that into softer spheres, where minds can potentially change (sometimes over generations) and room can remain on the margins for new things to put down roots, and where hopefully, eventually, higher-value ideas can grow. But the constant judging is still there.
Constant non-violent but vigorous efforts to resist propaganda and the like through revealing it and acting to counter and restrain it are real actions, and completely within the bounds of Free Speech.
Yes, the biggest problem with these programs is that they're deliberately divisive and increase fragmentation into binary pro/anti factions. At the end of the day this always generates violence, hatred and distrust.
As has been pointed out, you don't have a right to free speech on other people's servers. Y Combinator and Facebook can ban them at their discretion, for any reason or no reason.
Things seem to be getting looser, not tighter.
Facebook have just changed their policies to make it much harder to challenge false advertising by politicians.
This appears to have been done to avoid trouble with Trump.
https://popular.info/p/facebook-says-trump-can-lie-in-his
It would be simpler and more honest to stamp every single political ad with something like "Contains: lies, untruths, omissions and/or misinformation."
Then at least something in a displayed political ad would be true.
Facebook owns a lot of the virtual "space" where this discussion takes place. If they can willy-nilly make changes that favor some and disadvantage others, they can shape the debate pretty heavily.
They're ok with lies that are paid advertisements.
Even though the Sinclair and Murdoch TV/radio webs have vastly greater influence and audiences, explicitly cultivate and advertise their bias, and run stories uncorroborated anywhere else in the free world, they are not mentioned. This is because the complaint isn't about objectivity; it's about silencing information sources not in step with the ruling party.
So propaganda is not allowed, but political ads that contain lies are allowed. Aren't lies in political ads by the ruling party or candidate ... propaganda?
It's about as consistent as people complaining about the Kremlin lying on Facebook, but not some billionaire lying on Facebook.
Bonus points when you don't even know who he is, because his money has been laundered through a PAC. Oh, and we get to subsidize his speech, because it's tax-deductible for him.
Double bonus points if, in a slightly different situation, that billionaire has been borrowing money for his various business ventures from friends of the Kremlin, and happens to look a wee bit orange...
You also missed the other subsidy. TV ad spots for political ads have their prices frozen at the beginning of the election season. When the season heats up, other advertisers get pushed out as their prices increase, but politicians keep the frozen price.
You are correct. The reason this happens made sense at the time it was done, but doesn't anymore.
At the time these rules were enacted, TV stations didn't want to carry political ads, and hardly any would accept them. The enforced rate structure was devised not only to encourage TV stations to carry political ads, but to ensure that a party with deeper pockets didn't drown out less-well-funded opinions.
When I worked in TV, the sales department hated political season because their bonuses got slashed.
A couple of economic crises later, and TV stations look forward to political ads because, it turns out, they are more stable, more reliable, and often pay in advance. The old standby for TV ads, car dealers, turned out to be not-so-reliable after all, especially with the automaker bankruptcies and people keeping cars longer.
With a bird in the hand worth two in the bush, we now have political ads out the wazoo.
Political ads have reporting and disclaimers, or minimally an attempt at transparency. Bots/propaganda online pretend to be someone else and do not disclose the source.
The problems on Twitter are worse. There are networks of verified accounts pumping out orchestrated propaganda on this topic, and others. A blue check mark seems to be a license to generate money with impunity. There are networks operating out of first world countries with verified accounts, that have third-world minions that amplify their messages through retweets and likes. It is surprising that corporations continue to associate with the open sewer of propaganda, hate and harassment that Twitter is. Has the time come for a clean clone of Twitter for business to interact with their customers?
"The censor was a magistrate in ancient Rome who was responsible for maintaining the census, supervising public morality, and overseeing certain aspects of the government's finances.
The power of the censors was absolute: no magistrate could oppose their decisions, only another censor who succeeded them could cancel it.
The censors' regulation of public morality is the origin of the modern meaning of the words censor and censorship."
I'm not talking about this specific case.
As to what you bring up: how platforms determine which accounts are real is a black box; we don't know how biased their machine learning algorithms and human reviewers are.
It's odd that in the entire article they don't mention anything about the actual content of this propaganda against Iran. I wonder, for example: was the recent news about Iran attacking oil tankers part of this campaign?
Aren't Russia and Iran more allies than enemies? I mean, yes, I'd agree, but rather because Putin would welcome any type of disinformation, not just the kind targeted at Iran.
These were propaganda pages against Iran and Qatar. The article says they originate from UAE and Egypt, but not necessarily from their governments. It also says some of the messages are supportive of Saudi Arabia. Currently, the US, Israel and Saudi Arabia are subjecting Iran to an economic blockade and/or pushing for a military attack. Russia instead is at least partially on Iran's side and, therefore, opposed to US and Saudi Arabia.
And your first reaction is "this could be/ must be Russia"? Do you realize Russia running a propaganda campaign for Saudi Arabia doesn't make any sense? How could you even think of something so counter-intuitive and inconsequential?
Does Facebook take down the propaganda accounts linked to the US? Or we are all still supposed to believe that the "good guys" stopped using propaganda after WWII?
Taking down anti-Iran propaganda would be something that does not align with US goals. I find it quite surprising that Facebook's doing this, to be honest.
FB is certainly free to clean up once they get inquiries from the media. Inaction would make attribution to the US easier, so any NSL forcing FB to cooperate is likely limited in scope to internal measures like scanning for fake accounts.
"The Israel Project has developed a cutting edge production shop to inform the 21st century media and public conversation. Using videos, memes, infographics and other original productions – TIP is ensuring social media users know the truth about Israel and the Middle East."
"the Facebook page Cup of Jane. Serving the young women’s community, it has built up over 450,000 followers in that short time. ... Since then, we have launched multiple communities including HistoryBites for history fans, This Explains That for news junkies and We Only Have One Earth for environmentally-minded folks"
I did, no particular topic, just generally a lot (think "my comment history minus biting my tongue"), and a few years ago, after 9 years of using it, with 100+ RL friends and plenty of RL photos, I could no longer log in and was asked to send a copy of my ID, because I might not be a real person. I didn't, not because I think anything in that scan would be new information to them, but as a matter of principle, out of sheer stubbornness and a sense of injustice -- they didn't tell me who reported me as not being a person, and I bet you none of the accusers had to show their ID.
But being "banned" from FB is much better than quitting... I have zero temptation to go back, it requires no discipline at all, so I'm not complaining.
So then you just created this account to tell us that Facebook banned you for no reason even though you posted nothing? Interesting, why did you do that?