All posts in “Propaganda”

Facebook and Twitter remove accounts spreading fake news ahead of Bangladesh’s elections

Twitter and Facebook announced this morning they’ve removed a combined total of 30 accounts that were working to spread misinformation in Bangladesh, ten days before the country’s general elections. According to Facebook, the company removed nine Facebook Pages and six Facebook accounts that were engaging in “coordinated inauthentic behavior.” Twitter said it removed 15 accounts that were doing the same. Both companies said the accounts had government ties.

“Working with our industry peers we identified and suspended a very small number of accounts originating from Bangladesh for engaging in coordinated platform manipulation,” Twitter explained in a tweet. “Based on our initial analysis, it appears that some of these accounts may have ties to state-sponsored actors,” it said.

Facebook, in a blog post, said it was first alerted to the fake news posts, in part, based on a tip from Graphika, a threat intelligence company it works with. The Facebook Pages in question were designed to look like news outlets, and had posted pro-government and anti-opposition content.

The company also confirmed that the activity was linked to individuals associated with the Bangladesh government.

In some example images Facebook shared, you can see the Pages had been designed to look like BBC’s Bangla news service and the online news site bdnews24.com, among others.

In its own reporting, bdnews24 noted the fake news Page had used a near-identical logo and URL, each distinguished only by an extra letter.

Facebook didn’t say how many total followers these Pages and accounts had, but claimed one of the Pages had around 11,900 people tracking its updates. The network of accounts and Pages had spent around $800 USD on Facebook ads, beginning in July 2017 and continuing through last month.

“We are continuously working to uncover this kind of abuse,” wrote Facebook’s Head of Cybersecurity Policy, Nathaniel Gleicher. “Today’s announcement of the removal of these Pages is just one of the many steps we have taken to prevent bad actors from misrepresenting themselves to manipulate civic discourse. We will continue to invest heavily in safety and security in order to keep bad actors off of our platform and provide a place for people to connect meaningfully about the things that matter to them,” he said.

Twitter, meanwhile, said its investigations are still ongoing and its enforcement actions may expand later on.

For now, however, it has taken action on a total of 15 accounts, all of which had a very small number of followers. Most of the accounts had under 50 followers, it noted. Twitter said it will release more information about the accounts once its investigation is complete, as it has before.

Tool up for the midterms with this Facebook junk news aggregator

With the US midterms fast approaching, purveyors of online disinformation are very busy indeed spreading their hyper-partisan junk on Facebook.

Their goal: Skewing democratic outcomes by putting out misleading, deceptive or incorrect information that’s packaged as real news about politics, economics or culture — yet presented in a way that panders to prejudices and is more likely to get virally spread on mainstream social media platforms where it has the chance to influence people’s views.

This has happened before; is still happening; and will keep on happening unless or until social media platforms get properly regulated.

In the meanwhile, what’s to be done? Arming yourselves and your friends with smart digital and news literacy tools to help shine a light on the kind of ridiculously over-inflated political nonsense that’s being passed around on all sides (albeit, not necessarily equally) seems like a good place to start.

Step forward, Oxford University’s Oxford Internet Institute (OII), which has just launched an aggregator tool that tracks what it terms “junk” political news being shared on Facebook — doing so in near real-time and offering various ways to visualize and explore the junk heap.

What’s “junk news” in this context? The OII says this type of political content can include “ideologically extreme, hyper-partisan, or conspiratorial news and information, as well as various forms of propaganda”.

This sort of stuff might elsewhere get badged ‘fake news’, although that label is problematic — and has itself been hijacked by known muck spreaders. (So ‘online disinformation’ tends to be the label of choice in academic and policy circles these days.)

The OII is here using its own political propaganda content categorization — i.e. this term “junk news” — which is based on what it describes as “a grounded typology” derived through analyzing a large amount of political communications shared by US social media users.

Specifically, it’s based on an analysis of 21.8 million tweets sent in the US between the 2016 presidential campaign period and the 2018 State of the Union Address — applying what the Institute dubs “rigorous coding and content analysis techniques to define the new phenomenon”.

This involved labelling the source websites of shared links based on “a grounded typology that has been tested over several elections around the world in 2016-2018”, with a content source getting coded as a purveyor of junk news if it failed on three out of five criteria of the typology.

(Examples of sources that are being judged junk via this method include the likes of Breitbart, Dailycaller and Dailywire to name just a few.)
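The majority-rule coding described above can be sketched in a few lines. This is a minimal illustration of the "fails 3 of 5 criteria" threshold, not the OII's actual implementation; the criterion names here are placeholders.

```python
# Illustrative sketch of a 3-of-5 majority coding rule for labelling
# a news source as "junk". Criterion names are placeholders, not the
# OII's actual coding scheme.

CRITERIA = ["professionalism", "style", "credibility", "bias", "counterfeit"]

def is_junk(failed_criteria: set) -> bool:
    """A source is coded as junk if it fails at least 3 of the 5 criteria."""
    failures = len(failed_criteria & set(CRITERIA))
    return failures >= 3

# A source failing on three criteria gets coded junk...
print(is_junk({"style", "bias", "counterfeit"}))  # True
# ...while failing only two does not.
print(is_junk({"style", "bias"}))  # False
```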

Now to the tool itself:

The Visual Junk News Aggregator does what it says on the tin, aggregating popular junk news posts into a bipartisan thumbnail wall of over-inflated (or just out and out) BS.

Complete with a trigger warning for the risk of graphic images and language. Mousing over the thumbnails brings up any title and description that’s been scraped for the post in question, plus a date stamp and full Facebook reaction data.

Another tool — the Top 10 Junk News Aggregator — shows the most engaged with English language junk news stories posted to Facebook in the last 24 hours, in the context of the 2018 US midterm elections. (With engagement being based on total Facebook reactions per second of the post’s life.)
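The engagement metric the Top 10 aggregator uses — total Facebook reactions divided by the post's age in seconds — can be sketched as follows. The function name and example figures are illustrative assumptions, not the tool's actual code.

```python
from datetime import datetime, timezone

def engagement_rate(total_reactions: int, posted_at: datetime,
                    now: datetime) -> float:
    """Sketch of a reactions-per-second metric: total Facebook reactions
    divided by the number of seconds the post has been live."""
    age_seconds = (now - posted_at).total_seconds()
    return total_reactions / age_seconds

# Hypothetical example: 8,640 reactions accumulated over one day.
posted = datetime(2018, 10, 30, 12, 0, tzinfo=timezone.utc)
now = datetime(2018, 10, 31, 12, 0, tzinfo=timezone.utc)
print(engagement_rate(8640, posted, now))  # 0.1 reactions per second
```

Normalizing by post age lets a fresh post with fast-accumulating reactions outrank an older post with a larger raw total.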

While the full aggregator tool supports keyword searches of the junk heap (by content and/or publisher), and also by time — allowing for sifting of junk posts published to public Facebook pages as recently as the last hour or up to a full month old.

Returned search results can be further sorted by time and reaction — across all eight types of possible Facebook reactions.

“The Junk News Aggregator is an interactive tool for exploring junk news stories posted on Facebook, particularly useful right now in the lead-up to the US midterms,” the Institute writes. “It is a unique tool for systematically studying misinformation on Facebook in real time. It makes visible the depth of the junk news problem, displaying the quantity and the content of junk news, as well as the levels of engagement with it.

“Junk news content can be sorted by time and by engagement numbers, as well as via keyword search (such as for a candidate, district, or specific issue). It also offers a visual overview and a top-10 snapshot of the day’s most engaged-with junk news.

“Our goal is to help shed light on the problem of junk news on social media, to make this issue more transparent, and to help improve the public’s media literacy. It also aims to help journalists, researchers, policy-makers, and social media platforms understand the impact of junk news on public life.”

It sent us a case study example to help demonstrate the “functionality and usefulness” of the tool (based on a search it conducted at 11:00am GMT, October 31, 2018).

For this example it used the search keyword “caravan”, selecting posts from the last day and filtering for the most shared posts — which served up several posts.

The most shared post was this one, below, from junk news source Chicks on the Right:

The Institute doesn’t make any comment on why it chose to track junk news on Facebook, specifically, vs other social media platforms (e.g. Twitter) — though there’s little doubt that Facebook’s platform remains the kingpin where skewing political views is concerned, given its massive user-base.

Meanwhile the company’s ongoing attempts to dampen the virality of democracy-denting junk shared on its platform continue — and continue to yield underwhelming results, given the size and gravity of the problem.

Also unconvincing: Facebook’s very recent attempts to install systems that verify the actual identity of political advertisers on its platform. These self-imposed checks look to be off to a terrible start — as Facebook has just been shown hosting (and spreading) yet more fake information… ouch…

Putting your faith in Facebook to sort its shit out on the political front — and fast — looks about as sensible as trusting a shark to babysit your pet turtle.

Much better to tool up and seek to stay on top of the junk heap yourself — at least until the world’s political representatives sort their shit out and get a proper handle on regulating social media.

In the meanwhile, don’t forget to vote.

Tech and ad giants sign up to Europe’s first weak bite at ‘fake news’

The European Union’s executive body has signed up tech platforms and ad industry players to a voluntary Code of Practice aimed at trying to do something about the spread of disinformation online.

Something, just not anything too specifically quantifiable.

According to the Commission, Facebook, Google, Twitter, Mozilla, some additional members of the EDIMA trade association, plus unnamed advertising groups are among those that have signed up to the self-regulatory code, which will apply in a month’s time.

Signatories have committed to taking not exactly prescribed actions in the following five areas:

  • Disrupting advertising revenues of certain accounts and websites that spread disinformation;
  • Making political advertising and issue based advertising more transparent;
  • Addressing the issue of fake accounts and online bots;
  • Empowering consumers to report disinformation and access different news sources, while improving the visibility and findability of authoritative content;
  • Empowering the research community to monitor online disinformation through privacy-compliant access to the platforms’ data.

Mariya Gabriel, the European commissioner for digital economy and society, described the Code as a first “important” step in tackling disinformation. And one she said will be reviewed by the end of the year to see how (or, well, whether) it’s functioning, with the door left open for additional steps to be taken if not. So in theory legislation remains a future possibility.

“This is the first time that the industry has agreed on a set of self-regulatory standards to fight disinformation worldwide, on a voluntary basis,” she said in a statement. “The industry is committing to a wide range of actions, from transparency in political advertising to the closure of fake accounts and demonetisation of purveyors of disinformation, and we welcome this.

“These actions should contribute to a fast and measurable reduction of online disinformation. To this end, the Commission will pay particular attention to its effective implementation.”

“I urge online platforms and the advertising industry to immediately start implementing the actions agreed in the Code of Practice to achieve significant progress and measurable results in the coming months,” she added. “I also expect more and more online platforms, advertising companies and advertisers to adhere to the Code of Practice, and I encourage everyone to make their utmost to put their commitments into practice to fight disinformation.”

Earlier this year a report by an expert group established by the Commission to help shape its response to the so-called ‘fake news’ crisis called for more transparency from online platforms, as well as urgent investment in media and information literacy education to empower journalists and foster a diverse and sustainable news media ecosystem.

Safe to say, no one has suggested there’s any kind of quick fix for the Internet enabling the accelerated spread of nonsense and lies.

Including the Commission’s own expert group, which offered an assorted pick’n’mix of ideas — set over various timeframes, some of them not-at-all-instant fixes.

Though the group was called out for failing to interrogate evidence around the role of behavioral advertising in the dissemination of fake news — which has arguably been piling up. (Certainly its potential to act as a disinformation nexus has been amply illustrated by the Facebook-Cambridge Analytica data misuse scandal, to name one recent example.)

The Commission is not doing any better on that front, either.

The executive has been working on formulating its response to what its expert group suggested should be referred to as ‘disinformation’ (i.e. rather than the politicized ‘fake news’ moniker) for more than a year now — after the European parliament adopted a Resolution, in June 2017, calling on it to examine the issue and look at existing laws and possible legislative interventions.

Elections for the European parliament are due next spring and MEPs are clearly concerned about the risk of interference. So the unelected Commission is feeling the elected parliament’s push here.

Disinformation — aka “verifiably false or misleading information” created and spread for economic gain and/or to deceive the public, and which “may cause public harm” such as “threats to democratic political and policymaking processes as well as public goods such as the protection of EU citizens’ health, the environment or security”, as the Commission’s new Code of Practice defines it — is clearly a slippery policy target.

And online multiple players are implicated and involved in its spread. 

But so too are multiple, powerful, well resourced adtech players incentivized to push to avoid any political disruption to their lucrative people-targeting business models.

In the Commission’s voluntary Code of Practice signatories merely commit to recognizing their role in “contributing to solutions to the challenge posed by disinformation”. 

“The Signatories recognise and agree with the Commission’s conclusions that “the exposure of citizens to large scale Disinformation, including misleading or outright false information, is a major challenge for Europe. Our open democratic societies depend on public debates that allow well-informed citizens to express their will through free and fair political processes,” runs the preamble.

“[T]he Signatories are mindful of the fundamental right to freedom of expression and to an open Internet, and the delicate balance which any efforts to limit the spread and impact of otherwise lawful content must strike.

“In recognition that the dissemination of Disinformation has many facets and is facilitated by and impacts a very broad segment of actors in the ecosystem, all stakeholders have roles to play in countering the spread of Disinformation.”

“Misleading advertising” is explicitly excluded from the scope of the code — which also presumably helped the Commission convince the ad industry to sign up to it.

Though that further risks muddying the waters of the effort, given that social media advertising has been the high-powered vehicle of choice for malicious misinformation muck-spreaders (such as Kremlin-backed agents of societal division).

The Commission is presumably trying to split the hairs of maliciously misleading fake ads (still bad because they’re not actually ads but malicious pretenders) and good old fashioned ‘misleading advertising’, though — which will continue to be dealt with under existing ad codes and standards.

Also excluded from the Code: “Clearly identified partisan news and commentary”. So purveyors of hyper biased political commentary are not intended to get scooped up here, either. 

Though again, plenty of Kremlin-generated disinformation agents have masqueraded as partisan news and commentary pundits, and from all sides of the political spectrum.

Hence, we must again assume, the Commission including the requirement to exclude this type of content where it’s “clearly identified”. Whatever that means.

Among the various ‘commitments’ tech giants and ad firms are agreeing to here are plenty of firmly fudgey sounding statements that call for a degree of effort from the undersigned. But without ever setting out explicitly how such effort will be measured or quantified.

For example:

  • The Signatories recognise that all parties involved in the buying and selling of online advertising and the provision of advertising-related services need to work together to improve transparency across the online advertising ecosystem and thereby to effectively scrutinise, control and limit the placement of advertising on accounts and websites belonging to purveyors of Disinformation.

Or

  • Relevant Signatories commit to use reasonable efforts towards devising approaches to publicly disclose “issue-based advertising”. Such efforts will include the development of a working definition of “issue-based advertising” which does not limit reporting on political discussion and the publishing of political opinion and excludes commercial

And

  • Relevant Signatories commit to invest in features and tools that make it easier for people to find diverse perspectives about topics of public interest.

Nor does the code exactly nail down the terms it’s using to set goals — raising tricky and even existential questions like who defines what’s “relevant, authentic, and authoritative” where information is concerned?

Which is really the core of the disinformation problem.

And also not an easy question for tech giants — which have sold their vast content distribution farms as neutral ‘platforms’ — to start to approach, let alone tackle. Hence their leaning so heavily on third party fact-checkers to try to outsource their lack of any editorial values. Because without editorial values there’s no compass; and without a compass how can you judge the direction of tonal travel?

And so we end up with very vague suggestions in the code like:

  • Relevant Signatories should invest in technological means to prioritize relevant, authentic, and authoritative information where appropriate in search, feeds, or other automatically ranked distribution channels

Only slightly less vague and woolly is a commitment that signatories will “put in place clear policies regarding identity and the misuse of automated bots” on the signatories’ services, and “enforce these policies within the EU”. (So presumably not globally, despite disinformation being able to wreak havoc everywhere.)

Though here the code only points to some suggestive measures that could be used to do that — and which are set out in a separate annex. This boils down to a list of some very, very broad-brush “best practice principles” (such as “follow the money”; develop “solutions to increase transparency”; and “encourage research into disinformation”… ).

And set alongside that uninspiringly obvious list is another — of some current policy steps being undertaken by the undersigned to combat fake accounts and content — as if they’re already meeting the code’s expectations… so, er…

Unsurprisingly, the Commission’s first bite at ‘fake news’ has attracted some biting criticism for being unmeasurably weak sauce.

A group of media advisors — including the Association of Commercial Television in Europe, the European Broadcasting Union, the European Federation of Journalists and International Fact-Checking Network, and several academics — are among the first critics.

Reuters reports them complaining that signatories have not offered measurable objectives to monitor the implementation. “The platforms, despite their best efforts, have not been able to deliver a code of practice within the accepted meaning of effective and accountable self-regulation,” it quotes the group as saying.

Disinformation may be a tough, multi-pronged, multi-dimensional problem but few would try to argue that an overly dilute solution will deliver anything at all — well, unless it’s kicking the can down the road that you’re really after.

The Commission doesn’t even seem to know exactly what the undersigned have agreed to do as a first step, with the commissioner saying she’ll meet signatories “in the coming weeks to discuss the specific procedures and policies that they are adopting to make the Code a reality”. So double er… !

The code also only envisages signatories meeting annually to discuss how things are going. So no pressure for regular collaborative moots vis-a-vis tackling things like botnets spreading malicious disinformation then. Not unless the undersigned really, really want to.

Which seems unlikely, given how their business models tend to benefit from engagement — and disinformation-fuelled outrage has shown itself to be a very potent fuel on that front.

As part of the code, these adtech giants have at least technically agreed to make information available to the Commission on request — and generally to co-operate with its efforts to assess how/whether the code is working.

So, if public pressure on the issue continues to ramp up, the Commission does at least have a route to ask for relevant data from platforms that could, in theory, be used to feed a regulation that’s worth the paper it’s written on.

Until then, there’s nothing much to see here.

Tumblr confirms 84 accounts linked to Kremlin trolls

Tumblr has confirmed that Kremlin trolls were active on its platform during the 2016 US presidential elections.

In a blog post today the social platform writes that it is “taking steps to protect against future interference in our political conversation by state-sponsored propaganda campaigns”.

The company has also started emailing users who interacted with 84 accounts it now says it has linked to the Russian troll farm, the Internet Research Agency (IRA).

In the blog post it says it identified the accounts last fall — and “notified law enforcement, terminated the accounts, and deleted their original posts”.

“Behind the scenes, we worked with the Department of Justice, and the information we provided helped indict 13 people who worked for the IRA,” it adds.

In an email sent to a user, which was passed to TechCrunch to review, the company informs the individual they “either followed one of [11] accounts linked to the IRA, or liked or reblogged one of their posts”.

“As part of our commitment to transparency, we want you to know that we uncovered and terminated 84 accounts linked to Internet Research Agency or IRA (a group closely tied to the Russian government) posing as members of the Tumblr community,” the email begins.

“The IRA engages in electronic disinformation and propaganda campaigns around the world using phony social media accounts. When we uncovered these accounts, we notified law enforcement, terminated the accounts, and deleted their original posts.”

Last month Buzzfeed News — working with researcher Jonathan Albright, from the Tow Center for Digital Journalism at Columbia University — claimed to have unearthed substantial Kremlin troll activity on Tumblr’s meme-laden platform, identifying what they dubbed “a powerful, largely unrevealed network of Russian trolls focused on black issues and activism”, which they said dated back to early 2015.

The trolls were reported to be using Tumblr to push anti-Clinton messages, including by actively promoting her Democratic rival Bernie Sanders.

Decrying racial injustice and police violence in the US was another theme of the Russian-linked content.

Since then The Daily Beast has also reported on leaked data from the IRA which also implied agents at the trollfarm had used Tumblr — and also Reddit — to spread political propaganda to target the 2016 US election.

Those leaks suggested the IRA had created at least 21 Tumblr accounts, with names replete with slang terms — including some of the accounts listed in the user email we’ve reviewed.

Tumblr, which is owned by TechCrunch’s parent company Oath, did not respond to an email we sent to their press office last month asking about possible Kremlin activity on its platform.

In today’s public post, the company writes: “As far as we can tell, the IRA-linked accounts were only focused on spreading disinformation in the U.S., and they only posted organic content. We didn’t find any indication that they ran ads.”

As well as emailing affected users, Tumblr says it will be keeping a public record of usernames linked to the IRA or “other state-sponsored disinformation campaigns”.

The full list of 84 Kremlin-linked accounts is posted on that public page.

It also suggests users step in and “correct the record” when they see others spreading misinformation, regardless of whether they believe it’s being done intentionally or not.

Concluding its email to the user who had unwittingly engaged with 11 of the identified IRA accounts, Tumblr adds: “We deleted the accounts but decided to leave up any reblog chains so that you can curate your own Tumblr to reflect your own personal views and perspectives.

“Democracy requires transparency and an informed electorate and we take our disclosure responsibility very seriously. We’ll be aggressively watching for disinformation campaigns in the future, take the appropriate action, and make sure you know about it.”

Asked how he feels to learn Kremlin trolls had unknowingly infiltrated his Tumblr feeds, the user told us: “It’s unsettling, although maybe not surprising, that we legitimize and signal boost bad actors on social platforms by ‘liking’ or reposting content that doesn’t appear to have any political agenda at first glance.”

Fake news is an existential crisis for social media 


The funny thing about fake news is how mind-numbingly boring it can be. Not the fakes themselves — they’re constructed to be catnip clickbait to stoke the fires of rage of their intended targets. Be they gun owners. People of color. Racists. Republican voters. And so on.

The really tedious stuff is all the equally incomplete, equally self-serving pronouncements that surround ‘fake news’. Some very visibly, a lot less so.

Such as Russia painting the election interference narrative as a “fantasy” or a “fairytale” — even now, when presented with a 37-page indictment detailing what Kremlin agents got up to (including on US soil). Or Trump continuing to bluster that Russian-generated fake news is itself “fake news”.

And, indeed, the social media firms themselves, whose platforms have been the unwitting conduits for lots of this stuff, shaping the data they release about it — in what can look suspiciously like an attempt to downplay the significance and impact of malicious digital propaganda, because, well, that spin serves their interests.

Then there are the claims and counterclaims that spread out around ‘fake news’ like an amorphous cloud of meta-fakery, as reams of additional ‘information’ — some of it equally polarizing, but a lot of it more subtle in its attempts to mislead (e.g. the publicly unseen ‘on background’ info routinely sent to reporters to try to invisibly shape coverage in a tech firm’s favor) — are applied in equal and opposite directions in the interests of obfuscation; using speech and/or misinformation as a form of censorship to fog the lens of public opinion.

This bottomless follow-up fodder generates yet more FUD in the fake news debate. Which is ironic, as well as boring, of course. But it’s also clearly deliberate.

As Zeynep Tufekci has eloquently argued: “The most effective forms of censorship today involve meddling with trust and attention, not muzzling speech itself.”

So we also get subjected to all this intentional padding, applied selectively, to defuse debate and derail clear lines of argument; to encourage confusion and apathy; to shift blame and buy time. Bored people are less likely to call their political representatives to complain.

Truly fake news is the inception layer cake that never stops being baked. Because pouring FUD onto an already polarized debate — and seeking to shift what are by nature shifty sands (after all information, misinformation and disinformation can be relative concepts, depending on your personal perspective/prejudices) — makes it hard for any outsider to nail this gelatinous fakery to the wall.

Why would social media platforms want to participate in this FUDing? Because it’s in their business interests not to be identified as the primary conduit for democracy damaging disinformation.

And because they’re terrified of being regulated on account of the content they serve. They absolutely do not want to be treated as the digital equivalents to traditional media outlets.

But the stakes are high indeed when democracy and the rule of law are on the line. And by failing to be pro-active about the existential threat posed by digitally accelerated disinformation, social media platforms have unwittingly made the case for external regulation of their global information-shaping and distribution platforms louder and more compelling than ever.

*

Every gun outrage in America is now routinely followed by a flood of Russian-linked Twitter bot activity. Exacerbating social division is the name of this game. And it’s playing out all over social media continually, not just around elections.

In the case of Russian digital meddling connected to the UK’s 2016 Brexit referendum — which we now know for sure existed, even if we still lack the data needed to quantify its actual impact — the chairman of a UK parliamentary committee that’s running an enquiry into fake news has accused both Twitter and Facebook of essentially ignoring requests for data and help, and doing none of the work the committee asked of them.

Facebook has since said it will take a more thorough look through its archives. And Twitter has drip-fed some tidbits of additional information. But more than a year and a half after the vote itself, many, many questions remain.

And just this week another third party study suggested that the impact of Russian Brexit trolling was far larger than has been so far conceded by the two social media firms.

The PR company that carried out this research included in its report a long list of outstanding questions for Facebook and Twitter.

Here they are:

  • How much did [Russian-backed media outlets] RT, Sputnik and Ruptly spend on advertising on your platforms in the six months before the referendum in 2016?
  • How much have these media platforms spent to build their social followings?
  • Sputnik has no active Facebook page, but has a significant number of Facebook shares for anti-EU content, does Sputnik have an active Facebook advertising account?
  • Will Facebook and Twitter check the dissemination of content from these sites to check they are not using bots to push their content?
  • Did either RT, Sputnik or Ruptly use ‘dark posts’ on either Facebook or Twitter to push their content during the EU referendum, or have they used ‘dark posts’ to build their extensive social media following?
  • What processes do Facebook or Twitter have in place when accepting advertising from media outlets or state owned corporations from autocratic or authoritarian countries? Noting that Twitter no longer takes advertising from either RT or Sputnik.
  • Did any representatives of Facebook or Twitter pro-actively engage with RT or Sputnik to sell inventory, products or services on the two platforms in the period before 23 June 2016?

We put these questions to Facebook and Twitter.

In response, a Twitter spokeswoman pointed us to some “key points” from a previous letter it sent to the DCMS committee (emphasis hers):

In response to the Commission’s request for information concerning Russian-funded campaign activity conducted during the regulated period for the June 2016 EU Referendum (15 April to 23 June 2016), Twitter reviewed referendum-related advertising on our platform during the relevant time period. 

Among the accounts that we have previously identified as likely funded from Russian sources, we have thus far identified one account—@RT_com— which promoted referendum-related content during the regulated period. $1,031.99 was spent on six referendum-related ads during the regulated period 

With regard to future activity by Russian-funded accounts, on 26 October 2017, Twitter announced that it would no longer accept advertisements from RT and Sputnik and will donate the $1.9 million that RT had spent globally on advertising on Twitter to academic research into elections and civil engagement. That decision was based on a retrospective review that we initiated in the aftermath of the 2016 U.S. Presidential Elections and following the U.S. intelligence community’s conclusion that both RT and Sputnik have attempted to interfere with the election on behalf of the Russian government. Accordingly, @RT_com will not be eligible to use Twitter’s promoted products in the future.

The Twitter spokeswoman declined to provide any new on-the-record information in response to the specific questions.

A Facebook representative first asked to see the full study, which we sent, then failed to provide a response to the questions at all.

The fact that the research was carried out by a PR firm, 89up, makes this particular study fairly easy for the companies to ignore: 89up is a pro-Remain organization; the research was not undertaken by a group of impartial university academics; the study isn’t peer reviewed; and so on.

But, in an illustrative twist, if you Google “89up Brexit”, Google News injects fresh Kremlin-backed opinions into the search results it delivers (see the top and third result here…)


Clearly, there’s no such thing as ‘bad propaganda’ if you’re a Kremlin disinformation node.

Even a study decrying Russian election meddling presents an opportunity for respinning and generating yet more FUD (in this instance by calling 89up biased because it supported the UK staying in the EU), making it easy for Russian state organs to slur the research as worthless.

The social media firms aren’t making that point in public. They don’t have to. That argument is being made for them by an entity whose former brand name was literally ‘Russia Today’. Fake news thrives on shamelessness, clearly.

It also very clearly thrives in the limbo of fuzzy accountability where politicians and journalists essentially have to scream at social media firms until blue in the face to get even partial answers to perfectly reasonable questions.

Frankly, this situation is looking increasingly unsustainable.

Not least because governments are cottoning on — some are setting up departments to monitor malicious disinformation and even drafting anti-fake news election laws.

And while the social media firms have been somewhat quicker to respond to domestic lawmakers’ requests for action and investigation into political disinformation, that only makes their wider inaction all the more inexcusable when viable and reasonable concerns are brought to them by non-US politicians and other concerned individuals.

The user-bases of Facebook, Twitter and YouTube are global. Their businesses generate revenue globally. And the societal impacts from maliciously minded content distributed on their platforms can be very keenly felt outside the US too.

But if tech giants have treated requests for information and help about political disinformation from the UK — a close US ally — so poorly, you can imagine how unresponsive and/or unreachable these companies are to further flung nations, with fewer or zero ties to the homeland.

Earlier this month, in what looked very much like an act of exasperation, the chair of the UK’s fake news enquiry, Damian Collins, flew his committee over the Atlantic to question Facebook, Twitter and Google policy staffers in an evidence session in Washington.

None of the companies sent their CEOs to face the committee’s questions. None provided a substantial amount of new information. The full impact of Russia’s meddling in the Brexit vote remains unquantified.

One problem is fake news. The other problem is the lack of incentive for social media companies to robustly investigate fake news.

*

The partial data about Russia’s Brexit dis-ops, which Facebook and Twitter have trickled out so far, like blood from the proverbial stone, is unhelpful exactly because it cannot clear the matter up either way. It just introduces more FUD, more fuzz, more opportunities for purveyors of fake news to churn out more maliciously minded content, as RT and Sputnik demonstrably have.

In all probability, it also pours more fuel on Brexit-based societal division. The UK, like the US, has become a very visibly divided society since the narrow 52:48 vote to leave the EU. What role did social media and Kremlin agents play in exacerbating those divisions? Without hard data it’s very difficult to say.

But, at the end of the day, it doesn’t matter whether 89up’s study is accurate or overblown; what really matters is that no one except the Kremlin and the social media firms themselves is in a position to judge.

And no one in their right mind would now suggest we swallow Russia’s line that so called fake news is a fiction sicked up by over-imaginative Russophobes.

But social media firms also cannot be trusted to tell the truth on this topic, because their business interests have demonstrably guided their actions towards equivocation and obfuscation.

Self-interest also compellingly explains how poorly they have handled this problem to date, and why they continue, even now, to impede investigations by not disclosing enough data and/or failing to interrogate their own systems deeply enough when asked to respond to reasonable data requests.

A game of ‘uncertain claim vs self-interested counter claim’, as competing interests duke it out to try to land a knock-out blow in the game of ‘fake news and/or total fiction’, serves no useful purpose in a civilized society. It’s just more FUD for the fake news mill.

Especially as this stuff really isn’t rocket science. Human nature is human nature. And disinformation has been shown to have a more potent influencing impact than truthful information when the two are presented side by side. (As they frequently are by and on social media platforms.) So you could do robust math on fake news — if only you had access to the underlying data.

But only the social media platforms have that. And they’re not falling over themselves to share it. Instead, Twitter routinely rubbishes third party studies exactly because external researchers don’t have full visibility into how its systems shape and distribute content.

Yet external researchers don’t have that visibility because Twitter prevents them from seeing how it shapes tweet flow. Therein lies the rub.

Yes, some of the platforms in the disinformation firing line have taken some preventative actions since this issue blew up so spectacularly, back in 2016. Often by shifting the burden of identification to unpaid third parties (fact checkers).

Facebook has also built some anti-fake news tools to try to tweak what its algorithms favor, though nothing it’s done on that front to date looks very successful (even as a more major change to its News Feed, to make it less of a news feed, has had a blanket and damaging impact on the visibility of genuine news organizations’ content, so is arguably going to be unhelpful in reducing Facebook-fueled disinformation).

In another instance, Facebook’s mass closing of what it described as “fake accounts” ahead of, for example, the UK and French elections can also look problematic, in democratic terms, because we don’t fully know how it identified the particular “tens of thousands” of accounts to close. Nor what content they had been sharing prior to this. Nor why it hadn’t closed them before if they were indeed Kremlin disinformation-spreading bots.

More recently, Facebook has said it will implement a disclosure system for political ads, including posting a snail mail postcard to entities wishing to pay for political advertising on its platform — to try to verify they are indeed located in the territory they say they are.

Yet its own VP of ads has admitted that Russian efforts to spread propaganda are ongoing and persistent, and do not solely target elections or politicians…

The wider point is that social division is itself a tool for impacting democracy and elections — so if you want to achieve ongoing political meddling that’s the game you play.

You don’t just fire up your disinformation guns ahead of a particular election. You work to worry away at society’s weak points continuously to fray tempers and raise tensions.

Elections don’t take place in a vacuum. And if people are angry and divided in their daily lives then that will naturally be reflected in the choices made at the ballot box, whenever there’s an election.

Russia knows this. And that’s why the Kremlin has been playing such a long propaganda game. Why it’s not just targeting elections. Its targets are fault lines in the fabric of society — be it gun control vs gun owners or conservatives vs liberals or people of color vs white supremacists — whatever issues it can seize on to stir up trouble and rip away at the social fabric.

That’s what makes digitally amplified disinformation an existential threat to democracy and to civilized societies. Nothing on this scale has been possible before.

And it’s thanks, in great part, to the reach and power of social media platforms that this game is being played so effectively — because these platforms have historically preferred to champion free speech rather than root out and eradicate hate speech and abuse; inviting trolls and malicious actors to exploit the freedom afforded by their free speech ideology and to turn powerful broadcast and information-targeting platforms into cyberweapons that blast the free societies that created them.

Social media’s filtering and sorting algorithms also crucially failed to make any distinction between information and disinformation. Which was their great existential error of judgement, as they sought to eschew editorial responsibility while simultaneously working to dominate and crush traditional media outlets which do operate within a more tightly regulated environment (and, at least in some instances, have a civic mission to truthfully inform).

Publishers have their own biases too, of course, but those biases tend to be writ large — vs social media platforms’ faux claims of neutrality when in fact their profit-seeking algorithms have been repeatedly caught preferring (and thus amplifying) dis- and misinformation over and above truthful but less clickable content.

But if your platform treats everything and almost anything indiscriminately as ‘content’, then don’t be surprised if fake news becomes indistinguishable from the genuine article because you’ve built a system that allows sewage and potable water to flow through the same distribution pipe.

So it’s interesting to see Goldman’s suggested answer to social media’s existential fake news problem attempting, even now, to deflect blame — by arguing that the US education system should take on the burden of arming citizens to deconstruct all the dubious nonsense that social media platforms are piping into people’s eyeballs.

Lessons in critical thinking are certainly a good idea. But fakes are compelling for a reason. Look at the tenacity with which conspiracy theories take hold in the US. In short, it would take a very long time and a very large investment in critical thinking education programs to create any kind of shielding intellectual capacity able to protect the population at large from being fooled by maliciously crafted fakes.

Indeed, human nature actively works against critical thinking. Fakes are more compelling, more clickable than the real thing. And thanks to technology’s increasing potency, fakes are getting more sophisticated, which means they will be increasingly plausible — and get even more difficult to distinguish from the truth. Left unchecked, this problem is going to get existentially worse too.

So, no, education can’t fix this on its own. And for Facebook to try to imply it can is yet more misdirection and blame shifting.

*

If you’re the target of malicious propaganda you’ll very likely find the content compelling because the message is crafted with your specific likes and dislikes in mind. Imagine, for example, your trigger reaction to being sent a deepfake of your wife in bed with your best friend.

That’s what makes this incarnation of propaganda so potent and insidious vs other forms of malicious disinformation (of course propaganda has a very long history — but never in human history have we had such powerful media distribution platforms that are simultaneously global in reach and capable of delivering individually targeted propaganda campaigns. That’s the crux of the shift here).

Fake news is also insidious because of the lack of civic restraints on disinformation agents, which makes maliciously minded fake news so much more potent and problematic than plain old digital advertising.

I mean, even people who’ve searched for ‘slippers’ online an awful lot of times, because they really love buying slippers, are probably only in the market for buying one or two pairs a year — no matter how many adverts for slippers Facebook serves them. They’re also probably unlikely to actively evangelize their slipper preferences to their friends, family and wider society — by, for example, posting about their slipper-based views on their social media feeds and/or engaging in slipper-based discussions around the dinner table or even attending pro-slipper rallies.

And even if they did, they’d have to be a very charismatic individual indeed to generate much interest and influence. Because, well, slippers are boring. They’re not a polarizing product. There aren’t tribes of slipper owners as there are smartphone buyers. Because slippers are a non-complex, functional comfort item with minimal fashion impact. So an individual’s slipper preferences, even if very liberally put about on social media, are unlikely to generate strong opinions or reactions either way.

Political opinions and political positions are another matter. They are frequently what define us as individuals. They are also what can divide us as a society, sadly.

To put it another way, political opinions are not slippers. People rarely try a new one on for size. Yet social media firms spent a very long time indeed trying to sell the ludicrous fallacy that content about slippers and maliciously crafted political propaganda, mass-targeted tracelessly and inexpensively via their digital ad platforms, was essentially the same stuff. See: Zuckerberg’s infamous “pretty crazy idea” comment, for example.

Indeed, look back over the last few years’ news about fake news, and social media platforms have demonstrably sought to play down the idea that the content distributed via their platforms might have had any sort of quantifiable impact on the democratic process at all.

Yet these are the same firms that make money — very large amounts of money, in some cases — by selling their capability to influentially target advertising.

So they have essentially tried to claim that it’s only when foreign entities engage with their digital advertising platforms, and use their digital advertising tools not to sell slippers or a Netflix subscription but to press people’s biases and prejudices in order to sow social division and impact democratic outcomes, that, all of a sudden, these powerful tech tools cease to function.

And we’re supposed to take it on trust from the same self-interested companies that the unknown quantity of malicious ads being fenced on their platforms is but a teeny tiny drop in the overall content ocean they’re serving up so hey why can’t you just stop overreacting?

That’s also pure misdirection of course. The wider problem with malicious disinformation is it pervades all content on these platforms. Malicious paid-for ads are just the tip of the iceberg.

So sure, the Kremlin didn’t spend very much money paying Twitter and Facebook for Brexit ads — because it didn’t need to. It could (and did) freely set up ranks of bot accounts on their platforms to tweet and share content created by RT, for example — frequently skewed towards promoting the Leave campaign, according to multiple third party studies — amplifying the reach and impact of its digital propaganda without having to send the tech firms any more checks.

And indeed, Russia is still operating ranks of bots on social media which are actively working to divide public opinion, as Facebook freely admits.

Maliciously minded content has also been shown to be preferred by (for example) Facebook’s or Google’s algorithms vs truthful content, because their systems have been tuned to what’s most clickable and shareable and can also be all too easily gamed.

And, despite their ongoing techie efforts to fix what they view as some kind of content-sorting problem, their algorithms continue to get caught and called out for promoting dubious stuff.

Thing is, this kind of dynamic, contextual judgement is very hard for AI — as Zuckerberg himself has conceded. But human review is unthinkable. Tech giants simply do not want to employ the numbers of humans that would be necessary to always be making the right editorial call on each and every piece of digital content.

If they did, they’d instantly become the largest media organizations in the world — needing at least hundreds of thousands (if not millions) of trained journalists to serve every market and local region they cover.

They would also instantly invite regulation as publishers — ergo, back to the regulatory nightmare they’re so desperate to avoid.

All of this is why fake news is an existential problem for social media.

And why Zuckerberg’s 2018 yearly challenge will be his toughest ever.

Little wonder, then, that these firms are now so fixed on trying to narrow the debate and concern to focus specifically on political advertising. Rather than malicious content in general.

Because if you sit and think about the full scope of malicious disinformation, coupled with the automated global distribution platforms that social media has become, it soon becomes clear this problem scales as big and wide as the platforms themselves.

And at that point only two solutions look viable:

A) bespoke regulation, including regulatory access to proprietary algorithmic content-sorting engines.

B) breaking up big tech so none of these platforms have the reach and power to enable mass-manipulation.

The threat posed by info-cyberwarfare on tech platforms that straddle entire societies and have become attention-sapping powerhouses — swapping out editorially structured news distribution for machine-powered content hierarchies that lack any kind of civic mission — is really only just beginning to become clear, as the detail of abuses and misuses slowly emerges. And as certain damages are felt.

Facebook’s user base is a staggering two billion+ at this point — way bigger than the population of the world’s most populous country, China. Google’s YouTube has over a billion users. Which the company points out amounts to more than a third of the entire user-base of the Internet.

What does this seismic shift in media distribution and consumption mean for societies and democracies? We can hazard guesses but we’re not in a position to know without much better access to tightly guarded, commercially controlled information streams.

Really, the case for social media regulation is starting to look unstoppable.

But even with unfettered access to internal data and the potential to control content-sifting engines, how do you fix a problem that scales so very big and broad?

Regulating such massive, global platforms would clearly not be easy. In some countries Facebook is so dominant it essentially is the Internet.

So, again, this problem looks existential. And Zuck’s 2018 challenge is more Sisyphean than Herculean.

And it might well be that competition concerns are not the only trigger-call for big tech to get broken up this year.

Featured Image: Quinn Dombrowski/Flickr UNDER A CC BY-SA 2.0 LICENSE