All posts in “Propaganda”

Tumblr confirms 84 accounts linked to Kremlin trolls

Tumblr has confirmed that Kremlin trolls were active on its platform during the 2016 US presidential elections.

In a blog post today the social platform writes that it is “taking steps to protect against future interference in our political conversation by state-sponsored propaganda campaigns”.

The company has also started emailing users who interacted with 84 accounts it now says it has linked to the Russian trollfarm, the Internet Research Agency (IRA).

In the blog post it says it identified the accounts last fall — and “notified law enforcement, terminated the accounts, and deleted their original posts”.

“Behind the scenes, we worked with the Department of Justice, and the information we provided helped indict 13 people who worked for the IRA,” it adds.

In an email sent to a user, which was passed to TechCrunch to review, the company informs the individual they “either followed one of [11] accounts linked to the IRA, or liked or reblogged one of their posts”.

“As part of our commitment to transparency, we want you to know that we uncovered and terminated 84 accounts linked to Internet Research Agency or IRA (a group closely tied to the Russian government) posing as members of the Tumblr community,” the email begins.

“The IRA engages in electronic disinformation and propaganda campaigns around the world using phony social media accounts. When we uncovered these accounts, we notified law enforcement, terminated the accounts, and deleted their original posts.”

Last month BuzzFeed News — working with researcher Jonathan Albright from the Tow Center for Digital Journalism at Columbia University — claimed to have unearthed substantial Kremlin troll activity on Tumblr’s meme-laden platform, identifying what they dubbed “a powerful, largely unrevealed network of Russian trolls focused on black issues and activism” which they said dated back to early 2015.

The trolls were reported to be using Tumblr to push anti-Clinton messages, including by actively promoting her Democratic primary rival, Bernie Sanders.

Decrying racial injustice and police violence in the US was another theme of the Russian-linked content.

Since then The Daily Beast has reported on leaked data from the IRA implying that agents at the trollfarm also used Tumblr — and Reddit — to spread political propaganda targeting the 2016 US election.

Those leaks suggested the IRA had created at least 21 Tumblr accounts, with names replete with slang terms — including some of the accounts listed in the user email we’ve reviewed.

Tumblr, which is owned by TechCrunch’s parent company Oath, did not respond to an email we sent to its press office last month asking about possible Kremlin activity on its platform.

In today’s public post, the company writes: “As far as we can tell, the IRA-linked accounts were only focused on spreading disinformation in the U.S., and they only posted organic content. We didn’t find any indication that they ran ads.”

As well as emailing affected users, Tumblr says it will be keeping a public record of usernames linked to the IRA or “other state-sponsored disinformation campaigns”.

The full list of 84 Kremlin-linked account names is published on that public page.

It also suggests users step in and “correct the record” when they see others spreading misinformation, regardless of whether they believe it’s being done intentionally or not.

Concluding its email to the user who had unwittingly engaged with 11 of the identified IRA accounts, Tumblr adds: “We deleted the accounts but decided to leave up any reblog chains so that you can curate your own Tumblr to reflect your own personal views and perspectives.

“Democracy requires transparency and an informed electorate and we take our disclosure responsibility very seriously. We’ll be aggressively watching for disinformation campaigns in the future, take the appropriate action, and make sure you know about it.”

Asked how he feels to learn he had unknowingly engaged with Kremlin trolls in his Tumblr feeds, the user told us: “It’s unsettling, although maybe not surprising, that we legitimize and signal boost bad actors on social platforms by ‘liking’ or reposting content that doesn’t appear to have any political agenda at first glance.”

Fake news is an existential crisis for social media 


The funny thing about fake news is how mind-numbingly boring it can be. Not the fakes themselves — they’re constructed to be catnip clickbait to stoke the fires of rage of their intended targets. Be they gun owners. People of color. Racists. Republican voters. And so on.

The really tedious stuff is all the equally incomplete, equally self-serving pronouncements that surround ‘fake news’. Some very visible, a lot less so.

Such as Russia painting the election interference narrative as a “fantasy” or a “fairytale” — even now, when presented with a 37-page indictment detailing what Kremlin agents got up to (including on US soil). Or Trump continuing to bluster that Russian-generated fake news is itself “fake news”.

And, indeed, the social media firms themselves, whose platforms have been the unwitting conduits for lots of this stuff, shaping the data they release about it — in what can look suspiciously like an attempt to downplay the significance and impact of malicious digital propaganda, because, well, that spin serves their interests.

Claims and counter claims spread out around ‘fake news’ like an amorphous cloud of meta-fakery, as reams of additional ‘information’ — some of it equally polarizing but a lot of it more subtle in its attempts to mislead (e.g. the publicly unseen ‘on background’ info routinely sent to reporters to try to invisibly shape coverage in a tech firm’s favor) — are applied in equal and opposite directions in the interests of obfuscation; using speech and/or misinformation as a form of censorship to fog the lens of public opinion.

This bottomless follow-up fodder generates yet more FUD in the fake news debate. Which is ironic, as well as boring, of course. But it’s also clearly deliberate.

As Zeynep Tufekci has eloquently argued: “The most effective forms of censorship today involve meddling with trust and attention, not muzzling speech itself.”

So we also get subjected to all this intentional padding, applied selectively, to defuse debate and derail clear lines of argument; to encourage confusion and apathy; to shift blame and buy time. Bored people are less likely to call their political representatives to complain.

Truly fake news is the inception layer cake that never stops being baked. Because pouring FUD onto an already polarized debate — and seeking to shift what are by nature shifty sands (after all information, misinformation and disinformation can be relative concepts, depending on your personal perspective/prejudices) — makes it hard for any outsider to nail this gelatinous fakery to the wall.

Why would social media platforms want to participate in this FUDing? Because it’s in their business interests not to be identified as the primary conduit for democracy-damaging disinformation.

And because they’re terrified of being regulated on account of the content they serve. They absolutely do not want to be treated as the digital equivalents to traditional media outlets.

But the stakes are high indeed when democracy and the rule of law are on the line. And by failing to be pro-active about the existential threat posed by digitally accelerated disinformation, social media platforms have unwittingly made the case for external regulation of their global information-shaping and distribution platforms louder and more compelling than ever.

*

Every gun outrage in America is now routinely followed by a flood of Russian-linked Twitter bot activity. Exacerbating social division is the name of this game. And it’s playing out all over social media continually, not just around elections.

In the case of Russian digital meddling connected to the UK’s 2016 Brexit referendum — which we now know for sure existed, though still without all of the data needed to quantify the actual impact — the chairman of a UK parliamentary committee that’s running an enquiry into fake news has accused both Twitter and Facebook of essentially ignoring requests for data and help, and doing none of the work the committee asked of them.

Facebook has since said it will take a more thorough look through its archives. And Twitter has drip-fed some tidbits of additional information. But more than a year and a half after the vote itself, many, many questions remain.

And just this week another third party study suggested that the impact of Russian Brexit trolling was far larger than has been so far conceded by the two social media firms.

The PR company that carried out this research included in its report a long list of outstanding questions for Facebook and Twitter.

Here they are:

  • How much did [Russian-backed media outlets] RT, Sputnik and Ruptly spend on advertising on your platforms in the six months before the referendum in 2016?
  • How much have these media platforms spent to build their social followings?
  • Sputnik has no active Facebook page, but has a significant number of Facebook shares for anti-EU content, does Sputnik have an active Facebook advertising account?
  • Will Facebook and Twitter check the dissemination of content from these sites to check they are not using bots to push their content?
  • Did either RT, Sputnik or Ruptly use ‘dark posts’ on either Facebook or Twitter to push their content during the EU referendum, or have they used ‘dark posts’ to build their extensive social media following?
  • What processes do Facebook or Twitter have in place when accepting advertising from media outlets or state owned corporations from autocratic or authoritarian countries? Noting that Twitter no longer takes advertising from either RT or Sputnik.
  • Did any representatives of Facebook or Twitter pro-actively engage with RT or Sputnik to sell inventory, products or services on the two platforms in the period before 23 June 2016?

We put these questions to Facebook and Twitter.

In response, a Twitter spokeswoman pointed us to some “key points” from a previous letter it sent to the DCMS committee (emphasis hers):

In response to the Commission’s request for information concerning Russian-funded campaign activity conducted during the regulated period for the June 2016 EU Referendum (15 April to 23 June 2016), Twitter reviewed referendum-related advertising on our platform during the relevant time period. 

Among the accounts that we have previously identified as likely funded from Russian sources, we have thus far identified one account—@RT_com— which promoted referendum-related content during the regulated period. $1,031.99 was spent on six referendum-related ads during the regulated period 

With regard to future activity by Russian-funded accounts, on 26 October 2017, Twitter announced that it would no longer accept advertisements from RT and Sputnik and will donate the $1.9 million that RT had spent globally on advertising on Twitter to academic research into elections and civil engagement. That decision was based on a retrospective review that we initiated in the aftermath of the 2016 U.S. Presidential Elections and following the U.S. intelligence community’s conclusion that both RT and Sputnik have attempted to interfere with the election on behalf of the Russian government. Accordingly, @RT_com will not be eligible to use Twitter’s promoted products in the future.

The Twitter spokeswoman declined to provide any new on-the-record information in response to the specific questions.

A Facebook representative first asked to see the full study, which we sent, then failed to provide a response to the questions at all.

The PR firm behind the research, 89up, makes this particular study fairly easy for the social media firms to ignore. It’s a pro-Remain organization. The research was not undertaken by a group of impartial university academics. The study isn’t peer reviewed. And so on.

But, in an illustrative twist, if you Google “89up Brexit”, Google News injects fresh Kremlin-backed opinions into the search results it delivers — see the top and third result here…


Clearly, there’s no such thing as ‘bad propaganda’ if you’re a Kremlin disinformation node.

Even a study decrying Russian election meddling presents an opportunity for respinning and generating yet more FUD — in this instance by calling 89up biased because it supported the UK staying in the EU. Making it easy for Russian state organs to slur the research as worthless.

The social media firms aren’t making that point in public. They don’t have to. That argument is being made for them by an entity whose former brand name was literally ‘Russia Today’. Fake news thrives on shamelessness, clearly.

It also very clearly thrives in the limbo of fuzzy accountability where politicians and journalists essentially have to scream at social media firms until blue in the face to get even partial answers to perfectly reasonable questions.

Frankly, this situation is looking increasingly unsustainable.

Not least because governments are cottoning on — some are setting up departments to monitor malicious disinformation and even drafting anti-fake news election laws.

And while the social media firms have been a bit more alacritous to respond to domestic lawmakers’ requests for action and investigation into political disinformation, that just makes their wider inaction, when viable and reasonable concerns are brought to them by non-US politicians and other concerned individuals, all the more inexcusable.

The user-bases of Facebook, Twitter and YouTube are global. Their businesses generate revenue globally. And the societal impacts from maliciously minded content distributed on their platforms can be very keenly felt outside the US too.

But if tech giants have treated requests for information and help about political disinformation from the UK — a close US ally — so poorly, you can imagine how unresponsive and/or unreachable these companies are to further flung nations, with fewer or zero ties to the homeland.

Earlier this month, in what looked very much like an act of exasperation, the chair of the UK’s fake news enquiry, Damian Collins, flew his committee over the Atlantic to question Facebook, Twitter and Google policy staffers in an evidence session in Washington.

None of the companies sent their CEOs to face the committee’s questions. None provided a substantial amount of new information. The full impact of Russia’s meddling in the Brexit vote remains unquantified.

One problem is fake news. The other problem is the lack of incentive for social media companies to robustly investigate fake news.

*

The partial data about Russia’s Brexit dis-ops, which Facebook and Twitter have trickled out so far, like blood from the proverbial stone, is unhelpful exactly because it cannot clear the matter up either way. It just introduces more FUD, more fuzz, more opportunities for purveyors of fake news to churn out more maliciously minded content, as RT and Sputnik demonstrably have.

In all probability, it also pours more fuel on Brexit-based societal division. The UK, like the US, has become a very visibly divided society since the narrow 52:48 vote to leave the EU. What role did social media and Kremlin agents play in exacerbating those divisions? Without hard data it’s very difficult to say.

But, at the end of the day, it doesn’t matter whether 89up’s study is accurate or overblown; what really matters is that no one except the Kremlin and the social media firms themselves is in a position to judge.

And no one in their right mind would now suggest we swallow Russia’s line that so called fake news is a fiction sicked up by over-imaginative Russophobes.

But social media firms also cannot be trusted to tell the truth on this topic, because their business interests have demonstrably guided their actions towards equivocation and obfuscation.

Self interest also compellingly explains how poorly they have handled this problem to date; and why they continue — even now — to impede investigations by not disclosing enough data and/or failing to interrogate deeply enough their own systems when asked to respond to reasonable data requests.

A game of ‘uncertain claim vs self-interested counter claim’, as competing interests duke it out to try to land a knock-out blow in the game of ‘fake news and/or total fiction’, serves no useful purpose in a civilized society. It’s just more FUD for the fake news mill.

Especially as this stuff really isn’t rocket science. Human nature is human nature. And disinformation has been shown to have a more potent influencing impact than truthful information when the two are presented side by side. (As they frequently are by and on social media platforms.) So you could do robust math on fake news — if only you had access to the underlying data.

But only the social media platforms have that. And they’re not falling over themselves to share it. Instead, Twitter routinely rubbishes third party studies exactly because external researchers don’t have full visibility into how its systems shape and distribute content.

Yet external researchers don’t have that visibility because Twitter prevents them from seeing how it shapes tweet flow. Therein lies the rub.

Yes, some of the platforms in the disinformation firing line have taken some preventative actions since this issue blew up so spectacularly, back in 2016. Often by shifting the burden of identification to unpaid third parties (fact checkers).

Facebook has also built some anti-fake news tools to try to tweak what its algorithms favor, though nothing it’s done on that front to date looks very successful (even as a more major change to its News Feed, to make it less of a news feed, has had a unilateral and damaging impact on the visibility of genuine news organizations’ content — so is arguably going to be unhelpful in reducing Facebook-fueled disinformation).

In another instance, Facebook’s mass closing of what it described as “fake accounts” ahead of, for example, the UK and French elections can also look problematic, in democratic terms, because we don’t fully know how it identified the particular “tens of thousands” of accounts to close. Nor what content they had been sharing prior to this. Nor why it hadn’t closed them before if they were indeed Kremlin disinformation-spreading bots.

More recently, Facebook has said it will implement a disclosure system for political ads, including posting a snail mail postcard to entities wishing to pay for political advertising on its platform — to try to verify they are indeed located in the territory they say they are.

Yet its own VP of ads has admitted that Russian efforts to spread propaganda are ongoing and persistent, and do not solely target elections or politicians…

The wider point is that social division is itself a tool for impacting democracy and elections — so if you want to achieve ongoing political meddling that’s the game you play.

You don’t just fire up your disinformation guns ahead of a particular election. You work to worry away at society’s weak points continuously to fray tempers and raise tensions.

Elections don’t take place in a vacuum. And if people are angry and divided in their daily lives then that will naturally be reflected in the choices made at the ballot box, whenever there’s an election.

Russia knows this. And that’s why the Kremlin has been playing such a long propaganda game. Why it’s not just targeting elections. Its targets are fault lines in the fabric of society — be it gun control vs gun owners or conservatives vs liberals or people of color vs white supremacists — whatever issues it can seize on to stir up trouble and rip away at the social fabric.

That’s what makes digitally amplified disinformation an existential threat to democracy and to civilized societies. Nothing on this scale has been possible before.

And it’s thanks, in great part, to the reach and power of social media platforms that this game is being played so effectively — because these platforms have historically preferred to champion free speech rather than root out and eradicate hate speech and abuse; inviting trolls and malicious actors to exploit the freedom afforded by their free speech ideology and to turn powerful broadcast and information-targeting platforms into cyberweapons that blast the free societies that created them.

Social media’s filtering and sorting algorithms also crucially failed to make any distinction between information and disinformation. Which was their great existential error of judgement, as they sought to eschew editorial responsibility while simultaneously working to dominate and crush traditional media outlets which do operate within a more tightly regulated environment (and, at least in some instances, have a civic mission to truthfully inform).

Publishers have their own biases too, of course, but those biases tend to be writ large — vs social media platforms’ faux claims of neutrality when in fact their profit-seeking algorithms have been repeatedly caught preferring (and thus amplifying) dis- and misinformation over and above truthful but less clickable content.

But if your platform treats everything and almost anything indiscriminately as ‘content’, then don’t be surprised if fake news becomes indistinguishable from the genuine article because you’ve built a system that allows sewage and potable water to flow through the same distribution pipe.

So it’s interesting to see the suggested answer to social media’s existential fake news problem from Goldman, Facebook’s ads VP, attempting, even now, to deflect blame — by arguing that the US education system should take on the burden of arming citizens to deconstruct all the dubious nonsense that social media platforms are piping into people’s eyeballs.

Lessons in critical thinking are certainly a good idea. But fakes are compelling for a reason. Look at the tenacity with which conspiracy theories take hold in the US. In short, it would take a very long time and a very large investment in critical thinking education programs to create any kind of shielding intellectual capacity able to protect the population at large from being fooled by maliciously crafted fakes.

Indeed, human nature actively works against critical thinking. Fakes are more compelling, more clickable than the real thing. And thanks to technology’s increasing potency, fakes are getting more sophisticated, which means they will be increasingly plausible — and get even more difficult to distinguish from the truth. Left unchecked, this problem is going to get existentially worse too.

So, no, education can’t fix this on its own. And for Facebook to try to imply it can is yet more misdirection and blame shifting.

*

If you’re the target of malicious propaganda you’ll very likely find the content compelling because the message is crafted with your specific likes and dislikes in mind. Imagine, for example, your trigger reaction to being sent a deepfake of your wife in bed with your best friend.

That’s what makes this incarnation of propaganda so potent and insidious vs other forms of malicious disinformation (of course propaganda has a very long history — but never in human history have we had such powerful media distribution platforms that are simultaneously global in reach and capable of delivering individually targeted propaganda campaigns. That’s the crux of the shift here).

Fake news is also insidious because of the lack of civic restraints on disinformation agents, which makes maliciously minded fake news so much more potent and problematic than plain old digital advertising.

I mean, even people who’ve searched for ‘slippers’ online an awful lot of times, because they really love buying slippers, are probably only in the market for buying one or two pairs a year — no matter how many adverts for slippers Facebook serves them. They’re also probably unlikely to actively evangelize their slipper preferences to their friends, family and wider society — by, for example, posting about their slipper-based views on their social media feeds and/or engaging in slipper-based discussions around the dinner table or even attending pro-slipper rallies.

And even if they did, they’d have to be a very charismatic individual indeed to generate much interest and influence. Because, well, slippers are boring. They’re not a polarizing product. There aren’t tribes of slipper owners as there are smartphone buyers. Because slippers are a non-complex, functional comfort item with minimal fashion impact. So an individual’s slipper preferences, even if very liberally put about on social media, are unlikely to generate strong opinions or reactions either way.

Political opinions and political positions are another matter. They are frequently what define us as individuals. They are also what can divide us as a society, sadly.

To put it another way, political opinions are not slippers. People rarely try a new one on for size. Yet social media firms spent a very long time indeed trying to sell the ludicrous fallacy that content about slippers and maliciously crafted political propaganda, mass-targeted tracelessly and inexpensively via their digital ad platforms, was essentially the same stuff. See: Zuckerberg’s infamous “pretty crazy idea” comment, for example.

Indeed, look back over the last few years’ news about fake news, and social media platforms have demonstrably sought to play down the idea that the content distributed via their platforms might have had any sort of quantifiable impact on the democratic process at all.

Yet these are the same firms that make money — very large amounts of money, in some cases — by selling their capability to influentially target advertising.

So they have essentially tried to claim that it’s only when foreign entities engage with their digital advertising platforms, and use their digital advertising tools — not to sell slippers or a Netflix subscription but to press people’s biases and prejudices in order to sow social division and impact democratic outcomes — that, all of a sudden, these powerful tech tools cease to function.

And we’re supposed to take it on trust from the same self-interested companies that the unknown quantity of malicious ads being fenced on their platforms is but a teeny tiny drop in the overall content ocean they’re serving up so hey why can’t you just stop overreacting?

That’s also pure misdirection of course. The wider problem with malicious disinformation is it pervades all content on these platforms. Malicious paid-for ads are just the tip of the iceberg.

So sure, the Kremlin didn’t spend very much money paying Twitter and Facebook for Brexit ads — because it didn’t need to. It could (and did) freely set up ranks of bot accounts on their platforms to tweet and share content created by RT, for example — frequently skewed towards promoting the Leave campaign, according to multiple third party studies — amplifying the reach and impact of its digital propaganda without having to send the tech firms any more checks.

And indeed, Russia is still operating ranks of bots on social media which are actively working to divide public opinion, as Facebook freely admits.

Maliciously minded content has also been shown to be preferred by (for example) Facebook’s or Google’s algorithms vs truthful content, because their systems have been tuned to what’s most clickable and shareable and can also be all too easily gamed.

And, despite their ongoing techie efforts to fix what they view as some kind of content-sorting problem, their algorithms continue to get caught and called out for promoting dubious stuff.

Thing is, this kind of dynamic, contextual judgement is very hard for AI — as Zuckerberg himself has conceded. But human review is unthinkable. Tech giants simply do not want to employ the numbers of humans that would be necessary to always be making the right editorial call on each and every piece of digital content.

If they did, they’d instantly become the largest media organizations in the world — needing at least hundreds of thousands (if not millions) of trained journalists to serve every market and local region they cover.

They would also instantly invite regulation as publishers — ergo, back to the regulatory nightmare they’re so desperate to avoid.

All of this is why fake news is an existential problem for social media.

And why Zuckerberg’s 2018 yearly challenge will be his toughest ever.

Little wonder, then, that these firms are now so fixed on trying to narrow the debate and concern to focus specifically on political advertising. Rather than malicious content in general.

Because if you sit and think about the full scope of malicious disinformation, coupled with the automated global distribution platforms that social media has become, it soon becomes clear this problem scales as big and wide as the platforms themselves.

And at that point only two solutions look viable:

A) bespoke regulation, including regulatory access to proprietary algorithmic content-sorting engines.

B) breaking up big tech so none of these platforms have the reach and power to enable mass-manipulation.

The threat posed by info-cyberwarfare on tech platforms that straddle entire societies and have become attention-sapping powerhouses — swapping out editorially structured news distribution for machine-powered content hierarchies that lack any kind of civic mission — is really only just beginning to become clear, as the detail of abuses and misuses slowly emerges. And as certain damages are felt.

Facebook’s user base is a staggering two billion+ at this point — way bigger than the population of the world’s most populous country, China. Google’s YouTube has over a billion users. Which the company points out amounts to more than a third of the entire user-base of the Internet.

What does this seismic shift in media distribution and consumption mean for societies and democracies? We can hazard guesses but we’re not in a position to know without much better access to tightly guarded, commercially controlled information streams.

Really, the case for social media regulation is starting to look unstoppable.

But even with unfettered access to internal data and the potential to control content-sifting engines, how do you fix a problem that scales so very big and broad?

Regulating such massive, global platforms would clearly not be easy. In some countries Facebook is so dominant it essentially is the Internet.

So, again, this problem looks existential. And Zuck’s 2018 challenge is more Sisyphean than Herculean.

And it might well be that competition concerns are not the only trigger for calls for big tech to get broken up this year.

Featured Image: Quinn Dombrowski/Flickr under a CC BY-SA 2.0 license

Tumblr also lousy with Russia-backed US election trolls: Report


The meme-laden Tumblr platform is the latest social media and blogging outlet to be unmasked as a distribution channel for Russian agents to rip at America’s societal fault lines and seek to influence citizens’ voting habits, according to a report by BuzzFeed News.

Facebook and Twitter have been firmly in the spotlight on this issue since the shock result of the 2016 US presidential election. Google has also self-reported on Russian disinformation on its platforms. But the role of other social platforms in spreading Kremlin propaganda has faced less scrutiny thus far.

BuzzFeed worked with researcher Jonathan Albright from the Tow Center for Digital Journalism at Columbia University to identify Russian-backed account activity on Tumblr. It says the analysis reveals “a powerful, largely unrevealed network of Russian trolls focused on black issues and activism” which dates back to early 2015.

Some of the Russian-linked blogging activity on Tumblr was apparently aimed at boosting support for Bernie Sanders at the expense of eventual Democratic nominee Hillary Clinton. The Democratic nomination process concluded in July 2016, while the US presidential election itself was held on November 8, 2016.

“The evidence we’ve collected shows a highly engaged and far-reaching Tumblr propaganda-op targeting mostly teenage and twenty-something African Americans,” Albright is quoted as saying.

“This appears to have been part of an ongoing campaign since early 2015,” he added.

We’ve reached out to Tumblr owner Oath’s press office with questions about the research — at the time of writing the company has not replied. (For the record, Oath is also TechCrunch’s parent company.)

Oath did not respond to BuzzFeed’s requests for comment on its research.

The methodology used for unmasking Russian agents on Tumblr appears to be a pretty simple one: the researchers cross-referenced Tumblr accounts that used “the same, or very similar” usernames against a list of known Internet Research Agency (IRA) accounts previously submitted by Twitter to congressional investigators. (The IRA being one of the confirmed Russian trollfarms; others are also known to exist.)
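The report doesn’t spell out exactly what counts as “very similar”, but mechanically this kind of username cross-reference is easy to sketch. Below is a rough, purely illustrative Python snippet — the handle names and the 0.85 similarity threshold are hypothetical, and difflib’s ratio is just one way such fuzzy matching could be approximated:

```python
# Purely illustrative sketch of username cross-referencing — not the
# researchers' actual method. Handle names and the 0.85 threshold are
# hypothetical; difflib's ratio is one way to approximate "very similar".
from difflib import SequenceMatcher

known_ira_handles = {"sometrollhandle", "another-troll-blog"}   # hypothetical stand-ins
tumblr_usernames = ["some_troll_handle", "unrelated-fan-blog", "another-troll-blog"]

def similarity(a: str, b: str) -> float:
    """Similarity in [0, 1] after stripping separators and case differences."""
    normalize = lambda s: s.lower().replace("-", "").replace("_", "")
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

matches = [
    (username, handle, round(similarity(username, handle), 2))
    for username in tumblr_usernames
    for handle in known_ira_handles
    if similarity(username, handle) >= 0.85   # "the same, or very similar"
]
print(matches)
# e.g. [('some_troll_handle', 'sometrollhandle', 1.0), ('another-troll-blog', 'another-troll-blog', 1.0)]
```

The real analysis would obviously also have to handle scale and weed out false positives, but the core idea — flagging name overlap with an already-attributed IRA list — really is that simple.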

Incidentally, last month Twitter updated this Russian bot list — saying it had now identified an additional 13k Russian-linked bots that had made election-related tweets, pushing the total number to more than 50,000. (Of those it said about 3,800 were linked to the IRA.)

In January Twitter also said it now thought that 1.4M people had engaged with Russian trolls during the US election.

It’s not yet clear how impactful Kremlin agents’ Tumblr dis-ops were. But the most successful of the Russian-linked Tumblr accounts identified by BuzzFeed’s analysis had apparently created multiple posts generating hundreds of thousands of “notes” on Tumblr (i.e. combined likes, reblogs, replies, etc.).

The research also found Russian-linked Tumblr accounts cross-posting content from other social platforms — including Twitter and Instagram.

BuzzFeed says most of the accounts it linked to the IRA are no longer active on Tumblr, although it specifies that two are still sharing content on the platform (though it describes the content as “completely unrelated”, and speculates it’s possible that account ownership has since changed).

In terms of the types of socially divisive content being shared via these Russian-linked Tumblrs, BuzzFeed cites examples that sought to link Clinton to a former KKK leader; complained about unfair media coverage of a Sanders rally; and decried racial injustice and police violence in the US.

After Clinton won the Democratic nomination, some of the Russian-linked Tumblrs that had been backing Sanders apparently started pushing pro-Trump content.

The research also unearthed a network of links out from Tumblr to “thousands of still-remaining Twitter posts, black culture blogs, at least several hundred still-remaining Facebook posts, sign-ups for online petitions, and a number of Reddit threads related to pro-Bernie news, Hillary conspiracies, and in-classroom racial matters”, according to Albright.

Given the cross-referencing method that was used to ID Russian activity, it’s entirely possible other Kremlin-backed Tumblr accounts existed on the platform (and/or still exist) that have yet to be identified.

UK to set up security unit to combat state disinformation campaigns


The UK government has announced plans to set up a dedicated national security unit to combat state-led disinformation campaigns — raising questions about how broad its ‘fake news’ bullseye will be.

Last November UK prime minister Theresa May publicly accused Russia of seeking to meddle in elections by weaponizing information and spreading fake news online.

“The UK will do what is necessary to protect ourselves, and work with our allies to do likewise,” she said in her speech at the time.

The new unit is intended to tackle what the PM’s spokesperson described in comments yesterday as the “interconnected complex challenges” of “fake news and competing narratives”.

The decision to set it up was taken after a meeting this week of the National Security Council — a Cabinet committee tasked with overseeing issues related to national security, intelligence and defense.

“We will build on existing capabilities by creating a dedicated national security communications unit. This will be tasked with combating disinformation by state actors and others. It will more systematically deter our adversaries and help us deliver on national security priorities,” the prime minister’s spokesperson told reporters (via Reuters).

According to the Press Gazette, the new unit will be named the National Security Communications Unit and will be based in the Cabinet Office.

“The government is committed to tackling false information and the Government Communications Service (GCS) plays a crucial role in this,” a Cabinet Office spokesperson told the publication. “Digital communications is constantly evolving and we are looking at ways to meet the challenging media landscape by harnessing the power of new technology for good.”

Monitoring social media platforms is expected to form a key part of the unit’s work as it seeks to deter adversaries by flagging up their fakes. But operational details are thin on the ground at this point. UK defense secretary, Gavin Williamson, is expected to give a statement to parliament later this week with more details about the unit.

Writing last week (in PR Week) about the challenges GCS faces this year, Alex Aiken, executive director of the service, named “build[ing] a rapid response social media capability to deal quickly with disinformation and reclaim[ing] a fact-based public debate with a new team to lead this work in the Cabinet Office” as the second item on his eight-strong list.

A key phrase there is “rapid response” — given the highly dynamic and bi-directional nature of some of the disinformation campaigns that have, to date, been revealed spreading via social media. Though a report in the Times suggests insiders are doubtful that Whitehall civil servants will have the capacity to respond rapidly enough to online disinformation.

Another key phrase in Aiken’s list is “fact-based” — because governments and power-wielding politicians denouncing ‘fake news’ is a situation replete with irony and littered with pitfalls. So a crucial factor regarding the unit will be how narrowly (or otherwise) its ‘fake news’ efforts are targeted.

If its work is largely focused on identifying and unmasking state-level disinformation campaigns — such as the Russian-backed bots which sought to interfere in the UK’s 2016 Brexit referendum — it’s hard to dispute that’s necessary and sensible.

Although there are still lots of follow-on considerations, including diplomatic ones — such as whether the government will expend resources to monitor all states for potential disinformation campaigns, even political allies.

And whether it will make public every disinformation effort it identifies, or only selectively disclose activity from certain states.

But the PM’s spokesperson’s use of the phrase ‘fake news’ risks implying the unit will have a rather broader intent, which is concerning — from a freedom of the press and freedom of speech perspective.

Certainly it’s a very broad concept to be deploying in this context, especially when government ministers stand accused of being less than honest in how they present information. (For one extant example, just Google the phrase: “brexit bus”.)

Indeed, even the UK PM herself has been accused domestically on that front.

So there’s a pretty clear risk of ‘fake news’ being interpreted by some as equating to any heavy political spin.

But presumably the government is not intending the new unit to police its own communications for falsities. (Though, if it’s going to ignore its own fakes, well it opens itself up to easy accusations of double standards — aka: ‘domestic political lies, good; foreign political lies bad’… )

Earlier this month the French president, Emmanuel Macron — who in recent months has also expressed public concern about Russian disinformation — announced plans to introduce an anti-fake news election law to place restrictions on social media during election periods.

And while that looks like a tighter angle to approach the problem of malicious and politically divisive disinformation campaigns, it’s also clear that a state like Russia has not stopped spreading fake news just because a particular target country’s election is over.

Indeed, the Kremlin has consistently demonstrated very long term thinking in its propaganda efforts, coupled with considerable staying power around its online activity — aimed at building plausibility for its disinformation cyber agents.

Sometimes these agents are seeded multiple years ahead of actively deploying them as ‘fake news’ conduits for a particular election or political event.

So just focusing on election ‘fake news’ risks being too narrow to effectively combat state-level disinformation, unless combined with other measures. Even as generally going after ‘fake news’ opens the UK government to criticism that it’s trying to shut down political debate and criticism.

Disinformation is clearly a very hard problem for governments to tackle, with no easy answers — even as the risks to democracy are clear enough for even Facebook to admit them.

Yet it’s also a problem that’s not being helped by the general intransigence and lack of transparency from the social media companies that control the infrastructure being used to spread disinformation.

These are also the only entities that have full access to the data that could be used to build patterns and help spot malicious bot-spreading agents of disinformation.

Last week, in the face of withering criticism from a UK committee that’s looking into the issue of fake news, Facebook committed to taking a deeper look into its own data around the Brexit referendum.

At this point it’s not clear whether Twitter — which has been firmly in the committee’s crosshairs — will also agree to conduct a thorough investigation of Brexit bot activity or not.

A spokeswoman for the committee told us it received a letter from Twitter on Friday and will be publishing that, along with its response, later this week. She declined to share any details ahead of that.

The committee is running an evidence session in the US, scheduled for February 8, when it will be putting questions to representatives from Facebook and Twitter, according to the spokeswoman. Its full report on the topic is not likely due for some months still, she added.

At the same time, the UK’s Electoral Commission has been investigating social media to consider whether campaign spending rules might have been broken at the time of the EU referendum vote — and whether to recommend the government drafts any new legislation. That effort is also ongoing.

Featured Image: Thomas Faull/Getty Images

Twitter to notify users who got played by Russian propaganda accounts

Adding clarity.

Image: NurPhoto/Getty Images

Over a half million Twitter users are about to be on the receiving end of an inbox surprise. 

No, not the news of an unexpected verification. Nor something more prosaic, such as their unwitting participation in a new feature test group. Rather, Twitter will be dropping a little email truth bomb: You got played by a Russian troll army. 

In a Friday blog post, the social media giant said it plans to inform 677,775 people who, over the course of the 2016 presidential “election period,” followed, liked, or retweeted accounts “potentially” connected to the now infamous Internet Research Agency.

“In total, during the time period we investigated, the 3,814 identified IRA-linked accounts posted 175,993 Tweets, approximately 8.4% of which were election-related,” Twitter explained in its blog post. 

An example of IRA content.

Image: Twitter

This is all part of Twitter’s continued efforts to notify both the public and elected officials of just how far Russian-connected groups went to influence the 2016 presidential election via social media. 

“[We] have identified 13,512 additional accounts, for a total of 50,258 automated accounts that we identified as Russian-linked and Tweeting election-related content during the election period,” the company added, “representing approximately two one-hundredths of a percent (0.016%) of the total accounts on Twitter at the time.”

All the suspicious accounts in question have been suspended, noted Twitter’s blog post. 

What those notification emails will say, and whether they will detail the exact troll content users engaged with, remains a mystery. We reached out to the company for a sample email, but that request went unacknowledged as of press time. However, 677,775 Twitter users should be finding out soon enough.

Whether or not those receiving the notification emails will take it as a lesson to be a tad bit more skeptical in their future social media dealings remains to be seen, but here’s hoping. 
