All posts in “Propaganda”

Twitter to notify users who got played by Russian propaganda accounts

Adding clarity.

Image: NurPhoto/Getty Images

Over half a million Twitter users are about to be on the receiving end of an inbox surprise. 

No, not the news of an unexpected verification. Nor something more prosaic, such as their unwitting participation in a new feature test group. Rather, Twitter will be dropping a little email truth bomb: You got played by a Russian troll army. 

In a Friday blog post, the social media giant said it plans to inform 677,775 people who, over the course of the 2016 presidential “election period,” followed, liked, or retweeted accounts “potentially” connected to the now infamous Internet Research Agency.

“In total, during the time period we investigated, the 3,814 identified IRA-linked accounts posted 175,993 Tweets, approximately 8.4% of which were election-related,” Twitter explained in its blog post. 

An example of IRA content.

Image: Twitter

This is all part of Twitter’s continued efforts to notify both the public and elected officials of just how far Russian-connected groups went to influence the 2016 presidential election via social media. 

“[We] have identified 13,512 additional accounts, for a total of 50,258 automated accounts that we identified as Russian-linked and Tweeting election-related content during the election period,” the company added, “representing approximately two one-hundredths of a percent (0.016%) of the total accounts on Twitter at the time.”

All the suspicious accounts in question have been suspended, noted Twitter’s blog post. 

What those notification emails will say, and whether they will detail the exact troll content users engaged with, remains a mystery. We reached out to the company for a sample email, but that request went unacknowledged as of press time. However, 677,775 Twitter users should be finding out soon enough. 

Whether those receiving the notification emails will take it as a lesson to be a tad more skeptical in their future social media dealings remains to be seen, but here’s hoping. 


Facebook agrees to take a deeper look into Russian Brexit meddling


Facebook has said it will conduct a wider investigation into whether there was Russian meddling on its platform relating to the 2016 Brexit referendum vote in the UK.

Yesterday its UK policy director, Simon Milner, wrote to a parliamentary committee that’s been conducting a wide-ranging enquiry into fake news — and whose chair has been witheringly critical of Facebook and Twitter for failing to co-operate with requests for information and assistance on the topic of Brexit and Russia — saying the company will widen its investigation, per the committee’s request.

Though he gave no firm deadline for delivering a fresh report — beyond estimating “a number of weeks”.

It’s not clear whether Twitter will also bow to pressure to conduct a more thorough investigation of Brexit-related disinformation. At the time of writing the company had not responded to our questions either.

At the end of last year committee chair Damian Collins warned both companies they could face sanctions for failing to co-operate with the committee’s enquiry — slamming Twitter’s investigations to date as “completely inadequate”, and expressing disbelief that both companies had essentially ignored the committee’s requests.

“You expressed a view that there may be other similar coordinated activity from Russia that we had not yet identified through our investigation and asked for us to continue our investigatory work. We have considered your request and can confirm that our investigatory team is now looking to see if we can identify other similar clusters engaged in coordinated activity around the Brexit referendum that was not identified previously,” writes Milner in the letter to Collins.

“This work requires detailed analysis of historic data by our security experts, who are also engaged in preventing live threats to our service. We are committed to making all reasonable efforts to establish whether or not there was coordinated activity similar to that which we found in the US and will report back to you as soon as the work has been completed.”

Last year Facebook reported finding just three Russian-bought “immigration” ads relating to the Brexit vote — with a spend of less than $1. Twitter, meanwhile, claimed Russian broadcasters had spent around $1,000 to run six Brexit-related ads on its platform.

The companies provided that information in response to the UK’s Electoral Commission, which has been running its own investigation into whether there was any digital misspending relating to the referendum — handing the exact same information to the committee, despite its request for a more wide-ranging probe of Russian meddling.

In its Brexit report, Facebook also looked only at pages and account profiles belonging to the known Russian troll farm, the Internet Research Agency, which it had previously identified in its US election disinformation probe.

Twitter, for its part, apparently made no effort to quantify the volume and influence of Russian-backed bots generating free tweet content around Brexit — so its focus on ads really looks like pure misdirection.

Independent academic studies have suggested there was in fact significant tweet-based activity generated around Brexit by Russian bots.

Last month a report by the US Senate — entitled Putin’s Asymmetric Assault on Democracy in Russia and Europe: Implications for US National Security — also criticized the adequacy of the investigations conducted thus far by Facebook and Twitter into allegations of Russian social media interference vis-a-vis Brexit.

“[I]n limiting their investigation to just the Internet Research Agency, Facebook missed that it is only one troll farm which ‘has existed within a larger disinformation ecosystem in St. Petersburg,’ including Glavset, an alleged successor of the Internet Research Agency, and the Federal News Agency, a reported propaganda ‘media farm,’ according to Russian investigative journalists,” the report authors write.

They also chronicle Collins’ criticism of Twitter’s “completely inadequate” response to the issue.

Featured Image: Bryce Durbin/TechCrunch/Getty Images

Study: Russia-linked fake Twitter accounts sought to spread terrorist-related social division in the UK


A study by UK academics looking at how fake social media accounts were used to spread socially divisive messages in the wake of a spate of domestic terrorist attacks this year has warned that the problem of hostile interference in public debate is greater than previously thought.

The researchers, who are from Cardiff University’s Crime and Security Research Institute, go on to assert that the weaponizing of social media to exacerbate societal division requires “a more sophisticated ‘post-event prevent’ stream to counter-terrorism policy”.

“Terrorist attacks are designed as forms of communicative violence that send a message to ‘terrorise, polarise and mobilise’ different segments of the public audience. These kinds of public impacts are increasingly shaped by social media communications, reflecting the speed and scale with which such platforms can make information ‘travel’,” they write.

“Importantly, what happens in the aftermath of such events has been relatively neglected by research and policy-development.”

The researchers say they collected a dataset of ~30 million datapoints from various social media platforms. But in their report they zero in on Twitter, flagging systematic use of Russian-linked sock-puppet accounts which amplified the public impacts of four terrorist attacks that took place in the UK this year — by spreading ‘framing and blaming’ messaging around the attacks at Westminster Bridge, Manchester Arena, London Bridge and Finsbury Park.

They highlight eight accounts — out of at least 47 they say they identified as used to influence and interfere with public debate following the attacks — that were “especially active”, and which posted at least 427 tweets across the four attacks that were retweeted in excess of 153,000 times. Though they only directly name three of them: @TEN_GOP (a right-wing, anti-Islam account); @Crystal1Johnson (a pro-civil rights account); and @SouthLoneStar (an anti-immigration account) — all of which have previously been shuttered by Twitter. (TechCrunch understands the full list of accounts the researchers identified as Russia-linked has not currently been shared with Twitter.)

Their analysis found that the controllers of the sock puppets were successful at getting information to ‘travel’ by building false accounts around personal identities, clear ideological standpoints and highly opinionated views. They targeted their messaging at sympathetic ‘thought communities’ aligned with the views they were espousing, and also at celebrities and political figures with large follower bases, in order to “‘boost’ their ‘signal’” — “The purpose being to try and stir and amplify the emotions of these groups and those who follow them, who are already ideologically ‘primed’ for such messages to resonate.”

The researchers say they derived the identities of the 47 Russian accounts from several open source information datasets — including releases via the US Congress investigations pertaining to the spread of disinformation around the 2016 US presidential election; and the Russian magazine РБК — although there’s no detailed explanation of their research methodology in their four-page policy brief.

They claim to have also identified around 20 additional accounts which they say possess “similar ‘signature profiles’” to the known sock puppets — but which have not been publicly identified as linked to the Russian troll farm, the Internet Research Agency, or similar Russian-linked units.

While they say a number of the accounts they linked to Russia were established “relatively recently”, others had been in existence for a longer period — with the first appearing to have been set up in 2011, and another cluster in the latter part of 2014/early 2015.

The “quality of mimicry” being used by those behind the false accounts makes them “sometimes very convincing and hard to differentiate from the ‘real’ thing”, they go on to assert, further noting: “This is an important aspect of the information dynamics overall, inasmuch as it is not just the spoof accounts pumping out divisive and ideologically freighted communications, they are also engaged in seeking to nudge the impacts and amplify the effects of more genuine messengers.”

‘Genuine messengers’ such as Nigel Farage — one of the UK politicians directly cited in the report as having had messages addressed to him by the fake accounts, in the hope he would then apply Twitter’s retweet function to amplify the divisive messaging. (Farage was leader of UKIP, one of the political parties that campaigned for Brexit and against immigration.)

Far-right groups have also used the same technique to spread their own anti-immigration messaging via the medium of President Trump’s tweets — in one recent instance earning the president a rebuke from the UK’s Prime Minister, Theresa May.

Last month May also publicly accused Russia of using social media to “weaponize information” and spread socially divisive fake news, underscoring how the issue has shot to the top of the political agenda this year.

“The involvement of overseas agents in shaping the public impacts of terrorist attacks is more complex and troubling than the journalistic coverage of this story has implied,” the researchers write in their assessment of the topic.

They go on to claim there’s evidence of “interventions” involving a greater volume of fake accounts than has been documented thus far, spanning four of the UK terror attacks that took place earlier this year; that measures were targeted to influence opinions and actions simultaneously across multiple positions on the ideological spectrum; and that these activities were not being carried out by Russian units alone — European and North American right-wing groups were also involved.

They note, for example, having found “multiple examples” of spoof accounts trying to “propagate and project very different interpretations of the same events” which were “consistent with their particular assumed identities” — citing how a photo of a Muslim woman walking past the scene of the Westminster Bridge attack was appropriated by the fake accounts and used to drive views on either side of the political spectrum:

The use of these accounts as ‘sock puppets’ was perhaps one of the most intriguing aspects of the techniques of influence on display. This involved two of the spoof accounts commenting on the same elements of the terrorist attacks, during roughly the same points in time, adopting opposing standpoints. For example, there was an infamous image of a Muslim woman on Westminster Bridge walking past a victim being treated, apparently ignoring them. This became an internet meme propagated by multiple far-right groups and individuals, with about 7,000 variations of it according to our dataset. In response to which the far right aligned @Ten_GOP tweeted: She is being judged for her own actions & lack of sympathy. Would you just walk by? Or offer help? Whereas, @Crystal1Johnson’s narrative was: so this is how a world with glasses of hate look like – poor woman, being judged only by her clothes.

The study authors do caveat that, as independent researchers, it is difficult for them to guarantee ‘beyond reasonable doubt’ that the accounts they identified were Russian-linked fakes — not least because the accounts have since been deleted (and the study is based on analysis of the digital traces left by online interactions).

But they also assert that given the difficulties of identifying such sophisticated fakes, there are likely more of them than they were able to spot. For this study, for example, they note that the fake accounts were more likely to have been concerned with American affairs, rather than British or European issues — suggesting more fakes could have flown under the radar because more attention has been directed at trying to identify fake accounts targeting US issues.

A Twitter spokesman declined to comment directly on the research, but the company has previously sought to challenge external researchers’ attempts to quantify how information is diffused and amplified on its platform, arguing they do not have the full picture of how Twitter users are exposed to tweets and thus aren’t well positioned to quantify the impact of propaganda-spreading bots.

Specifically, it says that Safe Search and Quality Filter features can erode the discoverability of automated content — and claims these filters are enabled for the vast majority of its users.

Last month, for example, Twitter sought to play down another study that claimed to have found Russian-linked accounts sent 45,000 Brexit-related tweets in the 48 hours around the UK’s EU in/out referendum vote last year.

The UK’s Electoral Commission is currently looking at whether existing campaign spending rules were broken via activity on digital platforms during the Brexit vote, while a UK parliamentary committee is running a wider enquiry aiming to articulate the impact of fake news.

Twitter has since provided UK authorities with information on Russian-linked accounts that bought paid ads related to Brexit — though not, apparently, a fuller analysis of all tweets sent by Russian-linked accounts. Paid ads are clearly just the tip of the iceberg when there’s no financial barrier to setting up as many fake accounts as you like to tweet out propaganda.

As regards this study, Twitter also argues that researchers with access only to public data are not well positioned to definitively identify sophisticated state-run intelligence agency activity that’s trying to blend in with everyday social networking.

Though the study authors’ view on the challenge of unmasking such skillful sock puppets is that they are likely underestimating the presence of hostile foreign agents, rather than overblowing it.

Twitter also provided us with some data on the total number of tweets about three of the attacks in the 24 hours afterwards: more than 600k tweets for the Westminster attack; more than 3.7M for Manchester; and more than 2.6M for the London Bridge attack. It asserted that the intentionally divisive tweets identified in the research represent a tiny fraction (less than 0.01%) of the total tweets sent in the 24-hour period following each attack.

Although the key issue here is influence, not quantity of propaganda per se — and quantifying how opinions might have been skewed by fake accounts is a lot trickier.

But growing awareness of hostile foreign information manipulation taking place on mainstream tech platforms is not likely to be a topic most politicians would be prepared to ignore.

In related news, Twitter today said it will begin enforcing new rules around how it handles hateful conduct and abusive behavior on its platform — as it seeks to grapple with a growing backlash from users angry at its response to harassment and hate speech.

Featured Image: Bryce Durbin/TechCrunch/Getty Images

Twitter says Russians spent ~$1k on six Brexit-related ads


Twitter has disclosed that Russian-backed accounts spent $1,031.99 to buy six Brexit-related ads on its platform during last year’s European Union referendum vote.

The ads in question were purchased during the regulated period for political campaigning in the June 2016 EU Referendum — specifically from 15 April to 23 June 2016.

This nugget of intel into Kremlin political disinformation ops that were centered on the UK’s Brexit vote has been released as part of an ongoing internal investigation by Twitter into possible Russian Brexit meddling — initiated by a request for information from a UK parliamentary committee that’s investigating fake news.

The UK’s Electoral Commission, which oversees domestic election procedure and regulates campaign financing, has also written to social media companies asking them to investigate potential Russian Brexit meddling as part of an ongoing enquiry it’s running into whether the use of digital ads and bots on social media might have broken existing political campaigning rules.

Earlier today Facebook said it had identified three “immigration” ads bought by Russian-backed accounts that ran ahead of the Brexit vote — which it says garnered 200 views.

However, Facebook’s probe has so far only looked at paid content from Russian accounts. So it’s still not clear how much Brexit-related propaganda was spread by Russian accounts on the platform, given that content can also be freely shared with followers on Facebook.

In the US Kremlin agents were even revealed to have used Facebook’s Events tools to list and orchestrate real-world meet-ups. And in October, Facebook admitted as many as 126 million US Facebook users could have viewed Russian-backed content on its platform.

With Brexit, both Facebook and Twitter have yet to release this sort of ‘full reach’ analysis — so it’s still not possible to quantify the potential impact of Kremlin propaganda on the EU referendum vote.

A Twitter spokesman declined to answer additional questions we put to the company, including a request for its analysis of the reach of the six ads — and whether or not it’s also investigating non-paid Russian-backed content (i.e. tweets and bots) around Brexit, not just paid ads.

An academic study last month suggested substantial activity on that front — tracking more than 150,000 Russian accounts that mentioned Brexit and some 45,000 tweets posted in the 48 hours around the vote.

Twitter’s spokesman also declined to share the Russian-bought Brexit ads the company has identified.

He did provide the following “key points” from Twitter’s letter to Damian Collins MP, chair of the Digital, Culture, Media and Sport Select Committee, which note an earlier decision by the company to ban ads from Russian media firms RT and Sputnik:

In response to the Commission’s request for information concerning Russian-funded campaign activity conducted during the regulated period for the June 2016 EU Referendum (15 April to 23 June 2016), Twitter reviewed referendum-related advertising on our platform during the relevant time period.

Among the accounts that we have previously identified as likely funded from Russian sources, we have thus far identified one account — @RT_com — which promoted referendum-related content during the regulated period. $1,031.99 was spent on six referendum-related ads during the regulated period.

With regard to future activity by Russian-funded accounts, on 26 October 2017, Twitter announced that it would no longer accept advertisements from RT and Sputnik and will donate the $1.9 million that RT had spent globally on advertising on Twitter to academic research into elections and civil engagement. That decision was based on a retrospective review that we initiated in the aftermath of the 2016 U.S. Presidential Elections and following the U.S. intelligence community’s conclusion that both RT and Sputnik have attempted to interfere with the election on behalf of the Russian government. Accordingly, @RT_com will not be eligible to use Twitter’s promoted products in the future.

Featured Image: Bryce Durbin/TechCrunch/Getty Images

Facebook and Twitter to provide Brexit disinformation reports soon


A UK parliamentary committee that’s investigating fake news has been told by Facebook and Twitter they will provide information relating to Russian interference during the UK’s 2016 Brexit referendum vote in the coming weeks.

With election disinformation being publicly interrogated in the US, questions have increasingly been asked in the UK about whether foreign government agents also sought to use social channels to drive Brexit propaganda and sway voters.

Last month Damian Collins, the chair of the digital, culture, media and sport committee, wrote to Facebook and Twitter asking them to look into whether Russian-backed accounts had been used to try to influence voters in the June 2016 in/out EU referendum.

The Guardian reports that Collins has also asked senior representatives from the two companies to give evidence on the reach of fake news at the British embassy in Washington in February.

Earlier this month, the UK prime minister cranked up the political pressure by publicly accusing the Russian government of seeking to “weaponize information” by planting fake stories and photoshopped images to try to meddle in elections and sow discord in the West.

In a letter sent to Collins on Friday, Twitter confirmed it would be divulging its own findings soon, writing: “We are currently undertaking investigations into these questions and intend to share our findings in the coming weeks.”

Also responding to the committee last week, Facebook noted it had been contacted by the UK’s Electoral Commission about the issue of possible Russian interference in the referendum, as part of enquiries it’s making into whether the use of digital ads and bots on social media broke existing political campaigning rules.

“We are now considering how we can best respond to the Electoral Commission’s request for information and expect to respond to them by the second week of December. Given that your letter is about the same issue, we will share our response to the Electoral Commission with you,” Facebook writes.

We understand that Google has also been asked by the Electoral Commission to provide it with information pertaining to this probe.

Meanwhile, the UK’s data protection watchdog is conducting a parallel investigation into what it describes as “the data-protection risks arising from the use of data analytics, including for political purposes”.

Where Brexit is concerned, it’s not yet clear how significant an impact political disinformation amplified via social media had on the outcome of the vote. But there clearly was a disinformation campaign of sorts.

And one that prefigured what appears to have been an even bigger effort by Kremlin agents to sway voters in the US presidential election, just a few months later.

After downplaying the impact of ‘fake news’ on the election for months, Facebook recently admitted that Russian-backed content could have reached as many as 126 million US users over the key political period.

Earlier this month it also finally admitted to finding some evidence of Brexit disinformation being spread via its platform. Though it claimed it had not found what it dubbed “significant coordination of ad buys or political misinformation targeting the Brexit vote”.

Meanwhile, research conducted by a group of academics using Twitter’s API to examine how political information diffused on the platform around the Brexit vote — including how bots and human users interacted — has suggested that more than 156,000 Russian accounts mentioned #Brexit.

The researchers also found that Russian accounts posted almost 45,000 messages related to the EU referendum in the 48 hours around the vote (i.e. just before and just after).
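
For a sense of what this kind of API-based measurement involves, here is a minimal, illustrative sketch in Python using the Tweepy client of that era (v3.x, wrapping Twitter’s v1.1 search endpoint). It simply samples tweets mentioning #Brexit and tallies activity per account; the credentials, sample size and thresholds are placeholder assumptions for illustration, not details taken from the studies themselves.

    import tweepy
    from collections import Counter

    # Placeholder credentials -- a real study would use its own API keys.
    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
    api = tweepy.API(auth, wait_on_rate_limit=True)

    # Sample recent tweets mentioning #Brexit and count tweets per account.
    per_account = Counter()
    for tweet in tweepy.Cursor(api.search, q="#Brexit", count=100).items(5000):
        per_account[tweet.user.screen_name] += 1

    # High-volume accounts are only candidates for closer (human) inspection;
    # volume alone doesn't establish automation or state backing.
    for screen_name, n_tweets in per_account.most_common(20):
        print(screen_name, n_tweets)

Note that the standard search endpoint only reaches back about a week, so reconstructing activity around the June 2016 vote would require access to the historical tweet archive — one reason such studies are hard to reproduce or verify independently.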

Another academic study, meanwhile, claimed to have identified 400 fake Twitter accounts being run by Kremlin trolls.

Twitter has claimed that external studies based on tweet data pulled via its API cannot represent the full picture of how information is diffused on its platform, because the data stream does not take account of any quality filters it might be applying, nor of any controls individual users can apply to shape the tweets they see.

It reiterates this point in its letter to Collins, writing:

… we have found studies of the impact of bots and automation on Twitter necessarily and systematically underrepresent our enforcement actions because these defensive actions are not visible via our APIs, and because they take place shortly after content is created and delivered via our streaming API.

Furthermore, researchers using an API often overlook the substantial in-product features that prioritize the most relevant content. Based on user interests and choices, we limit the visibility of low-quality content using tools such as Quality Filter and Safe Search — both of which are on by default for all of Twitter’s users and active for more than 97% of users.

It also notes that researchers have not always correctly identified bots — flagging media reports which it claims have “recently highlighted how users named as bots in research were real people, reinforcing the risks of limited data being used to attribute activity, particularly in the absence of peer review”.

Although there have also been media reports of the reverse phenomenon: i.e. Twitter users who were passing themselves off as ‘real people’ (frequently Americans), and accruing lots of retweets, yet who have since been unmasked as Kremlin-controlled disinformation accounts. Such as @SouthLoneStar.

Twitter’s letter ends by seeking to play down the political influence of botnets — quoting the conclusion of a City University report that states “we have not found evidence supporting the notion that bots can substantively alter campaign communication”.

But again, that study would presumably have been based on the same partial view of information diffusion that Twitter has elsewhere complained does not represent the full picture (i.e. when seeking to downplay other studies suggesting bots were successfully spreading Brexit-related political disinformation).

So really, it can’t have it both ways. (See also: Facebook selling ads on its platform while trying to simultaneously claim the notion that fake news can influence voters is “crazy”.)

In its letter to Collins, Twitter does also say it’s “engaged in dialogue with academics and think tanks around the world, including those in the UK, to discuss potential collaboration and to explore where our own efforts can be better shared without jeopardizing their effectiveness or user privacy”.

And at least now we don’t have too much longer to wait for its official assessment of the role Russian agents using its platform played in Brexit.

Albeit, if Twitter provided full and free access to researchers so that the opinion-influencing impact of its platform could be more robustly studied, the company probably still wouldn’t like all the conclusions being drawn. But nor would it so easily be able to downplay them.

Featured Image: Erik Tham/Getty Images