If Russia can create fake BLM accounts, who will next? – The Royal Gazette | Bermuda News, Business, Sports, Events, & Community


If Russia can create fake BLM accounts, who will next?




Send a spy to spread rumours on the other side of the front line. Drop leaflets into enemy territory. Debilitate the enemy using its own people, in their own language — Lord Haw-Haw, Tokyo Rose — over their own radios. The tactics of demoralisation are as old as politics, as old as war, and now we know what the "second decade of the 21st century" version looks like, too.

Pushed by a congressional investigation, Facebook has finally turned over some 3,000 advertisements and links to pages created and paid for by Russian trolls. Among them was “Secured Borders”, a fake, Kremlin-backed “organisation” that appeared to be based in Idaho. It pumped out messages about immigrant “scum” and attracted 133,000 followers before it was shut down. In August 2016, its Russian backers actually promoted a rally in Twin Falls to protest an alleged “upsurge of violence against American citizens”.

At the same time, a different set of Russian operatives sponsored and advertised two black rappers who bashed “racist b***h” Hillary Clinton. They also borrowed the identity of a Muslim group that claimed Clinton “created, funded and armed” al-Qaeda and the Islamic State. Meanwhile, thousands of computerised bots pushed repetitive pro-Trump messages on Twitter, persuading many actual humans to respond.

All these games are familiar: Russians have used similar tactics for years in Europe, where pro-Russian social-media users on Facebook, Twitter and many other platforms have long sought to amplify support for parties of the far Left and the far Right. During Germany's recent elections, official Russian media and networks of Russian bots tweeted and posted messages warning of immigration's dire threat to Germany and pushing the cause of Alternative for Germany, an anti-immigrant party.

As in the past, the Russian advertisements did not create ethnic strife or political divisions, either in the United States or in Europe. Instead, they used divisive language and emotive messages to exacerbate existing divisions. As in the past, it is enormously misleading to name “Russia” as the source of the problem.

The old KGB had whole departments devoted to the invention of rumours and the creation of fake extremists; the KGB's institutional descendants simply realised, sooner than most, that social-media campaigns are a cheap way for an impoverished former superpower to meddle in other countries' politics. But in 2016, they were one of many groups — among them the Trump campaign and a whole network of conspiracy-minded and alt-right trolls — who built targeted Facebook groups and bought divisive advertisements aimed at carefully sliced and segmented bits of the population.

The real problem is far broader than Russia: who will use these methods next — and how? If Russians worked out how to create fake “Black Lives Matter” Twitter accounts, why can't others? I can imagine multiple groups, many of them proudly American, who might well want to manipulate a range of fake accounts during a riot or disaster to increase anxiety or fear. I can imagine a lot of people who might want to take control of Defence Department accounts, as Russian hackers also tried to do, to send false information during a military conflict.

There is no big barrier to entry in this game: it doesn't cost much, it doesn't take much time, it isn't particularly high-tech, and it requires no special equipment. Facebook, Google and Twitter, not Russia, have provided the technology to create fake accounts and false advertisements, as well as the technology to direct them at particular parts of the population. Many other countries and political groups — on the Left, the Right, you name it — will quickly figure out how to use them.

In part, this malicious world grew so quickly out of ignorance — people didn't know, simply, how this all worked — but that is not an excuse any longer. There is no reason existing laws on transparency in political advertising, on truth in advertising or indeed on libel should not apply to social media as well as traditional media.

There is a better case than ever against anonymity, at least against anonymity in the public forums of social media and comment sections, as well as for the elimination of social-media bots. Facebook's own experiments have shown that conversations are more civilised when people use their own names. The right to free speech is something that is granted to humans, not bits of computer code.

There is no chance that the Trump White House will show any leadership on this issue, given that it has been the main beneficiary of these damaging and divisive techniques. But other political leaders — in Congress, in the states — have an obligation to think about it. So do citizens, so do schools; and so do tech companies. The alternative is a dystopia in which election-year dirty tricks become a way of life for everyone.

•Anne Applebaum writes a biweekly foreign affairs column for The Washington Post

Propagandist: Iva Toguri, known as Tokyo Rose, was an American who made English-language broadcasts transmitted by Japan to Allied soldiers during the Second World War





Published October 17, 2017 at 8:00 am (Updated October 17, 2017 at 1:18 pm)

