
Europe keeps up the pressure on social media over illegal content takedowns


The European Union’s executive body is continuing to pressure social media firms to get better at removing illegal content from their platforms before it has a chance to spread further online.

Currently there is a voluntary Code of Conduct on countering illegal online hate speech across the European Union. But the Commission has previously indicated it could seek to legislate if it feels companies aren’t doing enough.

After attending a meeting on the topic today, Andrus Ansip, the European Commissioner for the Digital Single Market, tweeted that the main areas tech firms need to address are: “takedown should be fast, reliable, effective; pro-activity to detect, remove and disable content using automatic detection and filtering; adequate safeguards and counter notice”.

While the notion of tech giants effectively removing illegal content might be hard to object to in principle, such a laundry list of requirements underlines the complexities involved in pushing commercial businesses to execute context-based speech policing decisions in a hurry.

For example, a new social media hate speech law in Germany, which as of this month is being actively enforced, has already drawn criticism and calls for its abolition after Twitter blocked a satirical magazine that had parodied anti-Muslim comments made by the far-right Alternative for Germany political party.

Another problematic aspect of the Commission’s push is that it appears keen to bundle a very wide spectrum of ‘illegal content’ into the same response category — apparently conflating issues as diverse as hate speech, terrorism, child exploitation and copyright infringement.

In September the EC put out a set of “guidelines and principles” which it said were aimed at pushing tech firms to be more pro-active about takedowns of illegal content, specifically urging them to build tools to automate the flagging of such content and to prevent it from being re-uploaded. But the measures were quickly criticized for being overly vague and posing a risk to freedom of expression online.

It’s not clear what kind of “adequate safeguards” Ansip is implying could be baked into the auto-detection and filtering systems the EC wants (we’ve asked and will update this story with any response). But there’s a clear risk that an over-emphasis on pushing tech giants to automate takedowns could result in censorship of controversial content on mainstream platforms.

There’s no public sign the Commission has picked up on these specific criticisms, with its latest missive flagging up both “violent and extremist content” and “breaches of intellectual property rights” as targets.

Last fall the Commission said it would monitor tech giants’ progress vis-a-vis content takedowns over the next six months to decide whether to take additional measures, such as drafting legislation, though it has also previously lauded progress being made.

In a statement yesterday, ahead of today’s meeting, the EC kept up the pressure on tech firms — calling for “more efforts and progress”:

The Commission is counting on online platforms to step up and speed up their efforts to tackle these threats quickly and comprehensively, including closer cooperation with national and enforcement authorities, increased sharing of know-how between online players and further action against the reappearance of illegal content.

We will continue to promote cooperation with social media companies to detect and remove terrorist and other illegal content online, and if necessary, propose legislation to complement the existing regulatory framework.

In the face of rising political pressure and a series of content-related scandals, both Google and Facebook last year announced they would be beefing up their content moderation teams by thousands of extra staff apiece.


YouTube: More AI can fix AI-generated ‘bubbles of hate’


Facebook, YouTube and Twitter faced another online hate crime grilling today by UK parliamentarians visibly frustrated at their continued failures to apply their own community guidelines and take down reported hate speech.

The UK government has this year pushed to raise online radicalization and extremist content as a G7 priority — and has been pushing for takedown timeframes for extremist content to shrink radically.

The broader issue of online hate speech has continued to be a hot-button political issue, especially in Europe, with Germany passing a social media hate speech law in October and the European Union’s executive body pushing for social media firms to automate the flagging of illegal content to accelerate takedowns.

In May, the UK’s Home Affairs Committee also urged the government to consider a regime of fines for social media content moderation failures — accusing tech giants of taking a “laissez-faire approach” to moderating hate speech content on their platforms.

The committee revisited their performance in another public evidence session today.

“What it is that we have to do to get you to take it down?”

Addressing Twitter, Home Affairs Committee chair Yvette Cooper said her staff had reported a series of violent, threatening and racist tweets via the platform’s standard reporting systems in August — many of which still had not been removed, months on.

She did not try to hide her exasperation as she went on to question why certain antisemitic tweets previously raised by the committee during an earlier public evidence session had also still not been removed — despite Twitter’s Nick Pickles agreeing at the time that they broke its community standards.

“I’m kind of wondering what it is we have to do,” said Cooper. “We sat in this committee in a public hearing and raised a clearly vile antisemitic tweet with your organization… but it is still there on the platform — what it is that we have to do to get you to take it down?”

Twitter’s EMEA VP for public policy and communications, Sinead McSweeney, who was fielding questions on behalf of the company this time, agreed that the tweets in question violated Twitter’s hate speech rules but said she was unable to provide an explanation for why they had not been taken down.

She noted the company has newly tightened its rules on hate speech — and said specifically that it has raised the priority of bystander reports, whereas previously it would have placed more priority on a report if the person who was the target of the hate was also the one reporting it.

“We haven’t been good enough at this,” she said. “Not only we haven’t been good enough at actioning, but we haven’t been good enough at telling people when we have actioned. And that is something that — particularly over the last six months — we have worked very hard to change… so you will definitely see people getting much, much more transparent communication at the individual level and much, much more action.”

“We are now taking actions against 10 times more accounts than we did in the past,” she added.

Cooper then turned her fire on Facebook, questioning the social media giant’s public policy director, Simon Milner, about Facebook pages containing violent anti-Islamic imagery, including one that appeared to be encouraging the bombing of Mecca, and pages set up to share photos of schoolgirls for the purposes of sexual gratification.

He claimed Facebook has fixed the problem of “lurid” comments being able to be posted on otherwise innocent photographs of children shared on its platform — something YouTube has also recently been called out for — telling the committee: “That was a fundamental problem in our review process that has now been fixed.”

Cooper then asked whether the company is living up to its own community standards — which Milner agreed do not permit people or organizations that promote hate against protected groups to have a presence on its platform. “Do you think that you are strong enough on Islamophobic organizations and groups and individuals?” she asked.

Milner avoided answering Cooper’s general question, instead narrowing his response to the specific individual page the committee had flagged — saying it was “not obviously run by a group” and that Facebook had taken down the specific violent image highlighted by the committee but not the page itself.

“The content is disturbing but it is very much focused on the religion of Islam, not on Muslims,” he added.

This week a decision by Twitter to close the accounts of far right group Britain First has swiveled a critical spotlight on Facebook — as the company continues to host the same group’s page, apparently preferring to selectively remove individual posts even though Facebook’s community standards forbid hate groups if they target people with protected characteristics (such as religion, race and ethnicity).

Cooper appeared to miss an opportunity to press Milner on the specific point — and earlier today the company declined to respond when we asked why it has not banned Britain First.

Giving an update earlier in the session, Milner told the committee that Facebook now employs over 7,500 people to review content — having announced a 3,000 bump in headcount earlier this year — and said that overall it has “around 10,000 people working in safety and security” — a figure he said it will be doubling by the end of 2018.

Areas where he said Facebook has made the most progress vis-a-vis content moderation are around terrorism, and nudity and pornography (which he noted is not permitted on the platform).

Google’s Nicklas Berild Lundblad, EMEA VP for public policy, was also attending the session to field questions about YouTube — and Cooper initially raised the issue of racist comments not being taken down despite being reported.

He said the company is hoping to be able to use AI to automatically pick up these types of comments. “One of the things that we want to get to is a situation in which we can actively use machines in order to scan comments for attacks like these and remove them,” he said.
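For illustration, the kind of automated triage Lundblad is describing might look, in its most stripped-down form, something like the Python sketch below. The phrase list, weights and thresholds are hypothetical placeholders; production systems rely on large trained models and human review rather than keyword matching.

```python
# Toy sketch of automated comment triage. Real systems use large trained
# models plus human review; the phrases, weights and thresholds here are
# hypothetical placeholders, not YouTube's actual rules.

REVIEW_THRESHOLD = 0.5   # queue for a human moderator
REMOVE_THRESHOLD = 0.9   # hide immediately, pending review

# Hypothetical weighted indicators of violent or hateful language.
INDICATORS = {
    "put down": 0.9,
    "go back to": 0.6,
    "vermin": 0.7,
}

def score_comment(text: str) -> float:
    """Crude score: the strongest indicator phrase found in the comment."""
    lowered = text.lower()
    return max(
        (weight for phrase, weight in INDICATORS.items() if phrase in lowered),
        default=0.0,
    )

def triage(comment: str) -> str:
    score = score_comment(comment)
    if score >= REMOVE_THRESHOLD:
        return "auto-hide and queue for review"
    if score >= REVIEW_THRESHOLD:
        return "queue for human review"
    return "leave up"

print(triage("this person should be put down"))  # -> auto-hide and queue for review
```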

Cooper pressed him on why certain comments reported to it by the committee had still not been removed — and he suggested reviewers might still be looking at a minority of the comments in question.

She flagged a comment calling for an individual to be “put down” — asking why that specifically had not been removed. Lundblad agreed it appeared to be in violation of YouTube’s guidelines but appeared unable to provide an explanation for why it was still there.

Cooper then asked why a video made by the neo-nazi group National Action, which is proscribed as a terrorist organization and banned in the UK, had kept reappearing on YouTube after it had been reported and taken down, even after the committee raised the issue with senior company executives.

Eventually, after “about eight months” of the video being repeatedly reposted on different accounts, she said it finally appears to have gone.

But she contrasted this sluggish response with the speed and alacrity with which Google removes copyrighted content from YouTube. “Why did it take that much effort, and that long just to get one video removed?” she asked.

“I can understand that’s disappointing,” responded Lundblad. “They’re sometimes manipulated so you have to figure out how they manipulated them to take the new versions down.

“And we’re now looking at removing them faster and faster. We’ve removed 135 of these videos some of them within a few hours with no more than 5 views and we’re committed to making sure this improves.”

He also claimed the rollout of machine learning technology has helped YouTube improve its takedown performance, saying: “I think that we will be closing that gap with the help of machines and I’m happy to review this in due time.”

“I really am sorry about the individual example,” he added.

Pressed again on why such a discrepancy existed between the speed of YouTube copyright takedowns and terrorist takedowns, he responded: “I think that we’ve seen a sea change this year” — flagging the committee’s contribution to raising the profile of the problem and saying that as a result of increased political pressure Google has recently expanded its use of machine learning to additional types of content takedowns.

In June, facing rising political pressure, the company announced it would be ramping up AI efforts to try to speed up the process of identifying extremist content on YouTube.

After Lundblad’s remarks, Cooper pointed out that the same video still remains online on Facebook and Twitter — querying why all three companies haven’t been sharing data about this type of proscribed content, despite their previously announced counterterrorism data-sharing partnership.

Milner said the hash database the companies jointly contribute to is currently limited to just two global terrorist organizations, ISIS and Al-Qaeda, and so would not be picking up content produced by banned neo-nazi or far-right extremist groups.
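For context, that shared database works by each participating company contributing digital fingerprints (“hashes”) of terrorist images and videos it has already removed, which the other companies can then check new uploads against. The Python sketch below illustrates the basic idea only: it uses a plain SHA-256 hash for brevity, which catches exact copies, whereas production systems typically rely on more robust perceptual hashing, and the function names are illustrative rather than any company’s actual API.

```python
# Illustrative sketch of a shared hash database for known terrorist content.
# SHA-256 (used here for brevity) only matches byte-identical copies; real
# systems typically use perceptual hashes that survive re-encoding or cropping.

import hashlib

# Fingerprints contributed by participating companies for content they removed.
shared_hash_db: set[str] = set()

def fingerprint(media_bytes: bytes) -> str:
    return hashlib.sha256(media_bytes).hexdigest()

def contribute(media_bytes: bytes) -> None:
    """Called when one platform removes a piece of known terrorist content."""
    shared_hash_db.add(fingerprint(media_bytes))

def matches_known_content(media_bytes: bytes) -> bool:
    """True if a new upload matches content another platform already removed."""
    return fingerprint(media_bytes) in shared_hash_db

# One platform removes a propaganda video; the others can now catch re-uploads.
contribute(b"<bytes of removed propaganda video>")
print(matches_known_content(b"<bytes of removed propaganda video>"))  # True
```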

Pressed again by Cooper reiterating that National Action is a banned group in the UK, Milner said Facebook has to-date focused its counterterrorism takedown efforts on content produced by ISIS and Al-Qaeda, claiming they are “the most extreme purveyors of this kind of viral approach to distributing their propaganda”.

“That’s why we’ve addressed them first and foremost,” he added. “It doesn’t mean we’re going to stop there but there is a difference between the kind of content they’re producing which is more often clearly illegal.”

“It’s incomprehensible that you wouldn’t be sharing this about other forms of violent extremism and terrorism as well as ISIS and Islamist extremism,” responded Cooper.

“You’re actually actively recommending… racist material”

She then moved on to interrogate the companies on the problem of ‘algorithmic extremism’ — saying that after her searches for the National Action video her YouTube recommendations included a series of far right and racist videos and channels.

“Why am I getting recommendations from YouTube for some pretty horrible organizations?” she asked.

Lundblad agreed YouTube’s recommendation engine “clearly becomes a problem” in certain types of offensive content scenarios — “where you don’t want people to end up in a bubble of hate, for example”. But he said YouTube is working on ways to stop certain videos being surfaced via its recommendation engine.

“One of the things that we are doing… is we’re trying to find states in which videos will have no recommendations and not impact recommendations at all — so we’re limiting the features,” he said. “Which means that those videos will not have recommendations, they will be behind an interstitial, they will not have any comments etc.

“Our way to then address that is to achieve the scale we need, make sure we use machine learning, identify videos like this, limit their features and make sure that they don’t turn up in the recommendations as well.”
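In practice, the “limited state” Lundblad describes amounts to leaving a video up while switching off the features that help it spread. A rough Python sketch of what that bundle of restrictions might look like as a data structure follows; the field names are hypothetical, not YouTube’s actual schema.

```python
# Rough sketch of a "limited state" for borderline videos: the video stays up
# but loses the features that help it spread. Field names are hypothetical.

from dataclasses import dataclass

@dataclass
class VideoFeatures:
    show_in_recommendations: bool = True
    allow_comments: bool = True
    allow_monetization: bool = True
    show_interstitial_warning: bool = False

def apply_limited_state(features: VideoFeatures) -> VideoFeatures:
    """Restrict a video deemed inflammatory but not removable outright."""
    features.show_in_recommendations = False
    features.allow_comments = False
    features.allow_monetization = False
    features.show_interstitial_warning = True
    return features

video = apply_limited_state(VideoFeatures())
print(video)
```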

So why hasn’t YouTube already put a channel like Red Ice TV into a limited state, asked Cooper, naming one of the channels the recommendation engine had been pushing her to view? “It’s not simply that you haven’t removed it… You’re actually actively recommending it to me — you are actually actively recommending what is effectively racist material [to] people.”

Lundblad said he would ask that the channel be looked at — and get back to the committee with a “good and solid response”.

“As I said we are looking at how we can scale those new policies we have out across areas like hate speech and racism and we’re six months into this and we’re not quite there yet,” he added.

Cooper then pointed out that the same problem of extremist-promoting recommendation engines exists with Twitter, describing how after she had viewed a tweet by a right wing newspaper columnist she had then been recommended the account of the leader of a UK far right hate group.

“This is the point at which there’s a tension between how much you use technology to find bad content or flag bad content and how much you use it to make the user experience different,” said McSweeney in response to this line of questioning.

“These are the balances and the risks and the decisions we have to take. Increasingly… we are looking at how do we label certain types of content that they are never recommended but the reality is that the vast majority of a user’s experience on Twitter is something that they control themselves. They control it through who they follow and what they search for.”

Noting that the problem affects all three platforms, Cooper then directly accused the companies of operating radicalizing algorithmic information hierarchies — “because your algorithms are doing that grooming and that radicalization”, while the companies in charge of the technology are not stopping it.

Milner said he disagreed with her assessment of what the technology is doing but agreed there’s a shared problem of “how do we address that person who may be going down a channel… leading to them to be radicalized”.

He also claimed Facebook sees “lots of examples of the opposite happening” and of people coming online and encountering “lots of positive and encouraging content”.

Lundblad also responded by flagging up a YouTube counterspeech initiative — called Redirect, and currently only running in the UK — that aims to catch people who are searching for extremist messages and redirect them to content debunking the radicalizing narratives.

“It’s first being used for anti-radicalization work and the idea now is to catch people who are in the funnel of vulnerability, break that and take them to counterspeech that will debunk the myths of the Caliphate for example,” he said.

Also responding to the accusation, McSweeney argued for “building strength in the audience as much as blocking those messages from coming”.

In a series of tweets after the committee session, Cooper expressed continued discontent at the companies’ performance tackling online hate speech.

“Still not doing enough on extremism & hate crime. Increase in staff & action since we last saw them in Feb is good but still too many serious examples where they haven’t acted,” she wrote.

“Disturbed that if you click on far right extremist @YouTube videos then @YouTube recommends many more — their technology encourages people to get sucked in, they are supporting radicalisation.

“Committee challenged them on whether same is happening for Jihadi extremism. This is all too dangerous to ignore.”

“Social media companies are some of the biggest & richest in the world, they have huge power & reach. They can and must do more,” she added.

None of the companies responded to a request for comment on Cooper’s criticism that they are still failing to do enough to tackle online hate crime.


Study: Russia-linked fake Twitter accounts sought to spread terrorist-related social division in the UK


A study by UK academics looking at how fake social media accounts were used to spread socially divisive messages in the wake of a spate of domestic terrorist attacks this year has warned that the problem of hostile interference in public debate is greater than previously thought.

The researchers, who are from Cardiff University’s Crime and Security Research Institute, go on to assert that the weaponizing of social media to exacerbate societal division requires “a more sophisticated ‘post-event prevent’ stream to counter-terrorism policy”.

“Terrorist attacks are designed as forms of communicative violence that send a message to ‘terrorise, polarise and mobilise’ different segments of the public audience. These kinds of public impacts are increasingly shaped by social media communications, reflecting the speed and scale with which such platforms can make information ‘travel’,” they write.

“Importantly, what happens in the aftermath of such events has been relatively neglected by research and policy-development.”

The researchers say they collected a dataset of ~30 million datapoints from various social media platforms. But in their report they zero in on Twitter, flagging systematic use of Russia-linked sock-puppet accounts which amplified the public impacts of four terrorist attacks that took place in the UK this year — by spreading ‘framing and blaming’ messaging around the attacks at Westminster Bridge, Manchester Arena, London Bridge and Finsbury Park.

They highlight eight accounts — out of at least 47 they say they identified as used to influence and interfere with public debate following the attacks — that were “especially active”, and which posted at least 427 tweets across the four attacks that were retweeted in excess of 153,000 times. Though they only directly name three of them: @TEN_GOP (a right-wing, anti-Islam account); @Crystal1Johnson (a pro-civil rights account); and @SouthLoneStar (an anti-immigration account) — all of which have previously been shuttered by Twitter. (TechCrunch understands the full list of accounts the researchers identified as Russia-linked has not currently been shared with Twitter.)

Their analysis found that the controllers of the sock puppets were successful at getting information to ‘travel’ by building false accounts around personal identities, clear ideological standpoints and highly opinionated views, and by targeting their messaging at sympathetic ‘thought communities’ aligned with the views they were espousing, and also at celebrities and political figures with large follower bases in order to “‘boost’ their ‘signal’” — “The purpose being to try and stir and amplify the emotions of these groups and those who follow them, who are already ideologically ‘primed’ for such messages to resonate.”

The researchers say they derived the identities of the 47 Russian accounts from several open source information datasets — including releases via the US Congress investigations pertaining to the spread of disinformation around the 2016 US presidential election; and the Russian magazine РБК — although there’s no detailed explanation of their research methodology in their four-page policy brief.

They claim to have also identified around 20 additional accounts which they say possess “similar ‘signature profiles’” to the known sock puppets — but which have not been publicly identified as linked to the Russian troll farm, the Internet Research Agency, or similar Russian-linked units.

While they say a number of the accounts they linked to Russia were established “relatively recently”, others had been in existence for a longer period — with the first appearing to have been set up in 2011, and another cluster in the later part of 2014/early 2015.

The “quality of mimicry” being used by those behind the false accounts makes them “sometimes very convincing and hard to differentiate from the ‘real’ thing”, they go on to assert, further noting: “This is an important aspect of the information dynamics overall, inasmuch as it is not just the spoof accounts pumping out divisive and ideologically freighted communications, they are also engaged in seeking to nudge the impacts and amplify the effects of more genuine messengers.”

‘Genuine messengers’ such as Nigel Farage — one of the UK politicians directly cited in the report as having had messages addressed to him by the fake accounts in the hope he would then use Twitter’s retweet function to amplify the divisive messaging. (Farage was leader of UKIP, one of the political parties that campaigned for Brexit and against immigration.)

Far right groups have also used the same technique to spread their own anti-immigration messaging via the medium of president Trump’s tweets — in one recent instance earning the president a rebuke from the UK’s Prime Minister, Theresa May.

Last month May also publicly accused Russia of using social media to “weaponize information” and spread socially divisive fake news, underscoring how the issue has shot to the top of the political agenda this year.

“The involvement of overseas agents in shaping the public impacts of terrorist attacks is more complex and troubling than the journalistic coverage of this story has implied,” the researchers write in their assessment of the topic.

They go on to claim there’s evidence for “interventions” involving a greater volume of fake accounts than has been documented thus far, spanning four of the UK terror attacks that took place earlier this year; that measures were targeted to influence opinions and actions simultaneously across multiple positions on the ideological spectrum; and that the activity was not limited to Russian units, with European and North American right-wing groups also involved.

They note, for example, having found “multiple examples” of spoof accounts trying to “propagate and project very different interpretations of the same events” which were “consistent with their particular assumed identities” — citing how a photo of a Muslim woman walking past the scene of the Westminster Bridge attack was appropriated by the fake accounts and used to drive views on either side of the political spectrum:

The use of these accounts as ‘sock puppets’ was perhaps one of the most intriguing aspects of the techniques of influence on display. This involved two of the spoof accounts commenting on the same elements of the terrorist attacks, during roughly the same points in time, adopting opposing standpoints. For example, there was an infamous image of a Muslim woman on Westminster Bridge walking past a victim being treated, apparently ignoring them. This became an internet meme propagated by multiple far-right groups and individuals, with about 7,000 variations of it according to our dataset. In response to which the far right aligned @Ten_GOP tweeted: She is being judged for her own actions & lack of sympathy. Would you just walk by? Or offer help? Whereas, @Crystal1Johnson’s narrative was: so this is how a world with glasses of hate look like – poor woman, being judged only by her clothes.

The study authors do caveat that as independent researchers it is difficult for them to guarantee ‘beyond reasonable doubt’ that the accounts they identified were Russian-linked fakes — not least because the accounts have since been deleted (and the study is based on analysis of the digital traces left by online interactions).

But they also assert that given the difficulties of identifying such sophisticated fakes, there are likely more of them than they were able to spot. For this study, for example, they note that the fake accounts were more likely to have been concerned with American affairs, rather than British or European issues — suggesting more fakes could have flown under the radar because more attention has been directed at trying to identify fake accounts targeting US issues.

A Twitter spokesman declined to comment directly on the research but the company has previously sought to challenge external researchers’ attempts to quantify how information is diffused and amplified on its platform by arguing they do not have the full picture of how Twitter users are exposed to tweets and thus aren’t well positioned to quantify the impact of propaganda-spreading bots.

Specifically it says that safe search and quality filters can erode the discoverability of automated content — and claims these filters are enabled for the vast majority of its users.

Last month, for example, Twitter sought to play down another study that claimed to have found Russia-linked accounts sent 45,000 Brexit-related tweets in the 48 hours around the UK’s EU in/out referendum vote last year.

The UK’s Electoral Commission is currently looking at whether existing campaign spending rules were broken via activity on digital platforms during the Brexit vote, while a UK parliamentary committee is running a wider inquiry aiming to articulate the impact of fake news.

Twitter has since provided UK authorities with information on Russia-linked accounts that bought paid ads related to Brexit — though not apparently with a fuller analysis of all tweets sent by Russia-linked accounts. Actual paid ads are clearly the tip of the iceberg when there’s no financial barrier to entry to setting up as many fake accounts as you like to tweet out propaganda.

As regards this study, Twitter also argues that researchers with access only to public data are not well positioned to definitively identify sophisticated state-run intelligence agency activity that’s trying to blend in with everyday social networking.

Though the study authors’ view on the challenge of unmasking such skillful sock puppets is they are likely underestimating the presence of hostile foreign agents, rather than overblowing it.

Twitter also provided us with some data on the total number of tweets about three of the attacks in the 24 hours afterwards — saying that for the Westminster attack there were more than 600k tweets; for Manchester there were more than 3.7M; and for the London Bridge attack there were more than 2.6M — and asserting that the intentionally divisive tweets identified in the research represent a tiny fraction (less than 0.01%) of the total tweets sent in the 24 hour period following each attack.

Although the key issue here is influence, not quantity of propaganda per se — and quantifying how opinions might have been skewed by fake accounts is a lot trickier.

But growing awareness of hostile foreign information manipulation taking place on mainstream tech platforms is not likely to be a topic most politicians would be prepared to ignore.

In related news, Twitter today said it will begin enforcing new rules around how it handles hateful conduct and abusive behavior on its platform — as it seeks to grapple with a growing backlash from users angry at its response to harassment and hate speech.


In major policy change YouTube is now taking down more videos of known extremists


Google has confirmed a major policy shift in how it approaches extremist content on YouTube. A spokeswoman told us it has broadened its policy for taking down extremist content: Not just removing videos that directly preach hate or seek to incite violence but also removing other videos of named terrorists, unless the content is journalistic or educational in nature — such as news reports and documentaries.

The change was reported earlier by Reuters, following a report by the New York Times on Monday saying YouTube had drastically reduced content showing sermons by the jihadist cleric Anwar al-Awlaki — eliminating videos where the radical cleric is not directly preaching hate but talking on various, ostensibly non-violent topics.

al-Awlaki was killed in a US drone strike six years ago but is said to have remained the leading English-language jihadist recruiter because his sermons left such an extensive and easily accessible digital legacy.

In a phone call with TechCrunch a YouTube spokeswoman confirmed that around 50,000 videos of al-Awlaki’s lectures have been removed at this point.

There is still al-Awlaki content on YouTube — and the spokeswoman stressed there will never be zero videos returned for a search for his name. But she said the aim is to remove content created by known extremists and disincentivize others from reuploading the same videos.

Enacting the policy will be an ongoing process, she added.

She said the policy change has come about as a result of YouTube working much more closely with a network of expert NGOs active in this space, which also participate in its trusted flagger community content-policing program — and which have advised it that even sermons that do not ostensibly preach hate can be part of a wider narrative used by jihadi extremists to radicalize and recruit.

This year YouTube and other user-generated content platforms have also come under increasing political pressure to take a tougher stance on extremist content, and YouTube has faced an advertiser backlash after ads were found being displayed alongside extremist material.

In June the company announced a series of measures aimed at expanding its efforts to combat jihadi propaganda — including expanding its use of AI tech to automatically identify terrorist content; adding 50 “expert NGOs” to its trusted flagger program; and growing counter-radicalization efforts — such as returning content which deconstructs and debunks jihadist views when a user searches for certain extremist trigger words.

However, at that time, YouTube rowed back from taking down non-violent extremist content. Instead it said it would display interstitial warnings on videos that contain “inflammatory religious or supremacist content”, and also remove the ability of uploaders to monetize this type of content.

“We think this strikes the right balance between free expression and access to information without promoting extremely offensive viewpoints,” said Google SVP and general counsel, Kent Walker, at the time.

Evidently it has now decided it was not, in fact, striking the right balance by continuing to host and provide access to sermons made by extremists, and has redrawn its policy line to shrink access to anything made by known terrorists.

According to YouTube’s spokeswoman, it’s working from government lists of named terrorists and foreign terrorist organizations to identify individuals for whom the wider takedown policy will apply.

She confirmed that all content currently removed under the new wider policy pertains to al-Awlaki. But the idea is for this to expand to takedowns of other non-violent videos from other listed extremists.

It’s not clear whether these lists are public at this point. The spokeswoman indicated it’s YouTube’s expectation they will be public, and said the company will communicate with government departments on that transparency point.

She said YouTube is relying on its existing moderating teams to enact the expanded policy, using a mix of machine learning technology to help identify content and human review to understand the context.

She added that YouTube has been thinking about adapting its policies for extremist content for more than a year — despite avoiding taking this step in June.

She also sought to play down the policy shift as being a response to pressure from governments to crack down on online extremism — saying rather it’s come about as a result of YouTube engaging with and listening to experts.

Tech giants pressured to auto-flag “illegal” content in Europe


Social media giants have again been put on notice that they need to do more to speed up removals of hate speech and other illegal content from their platforms in the European Union.

The bloc’s executive body, the European Commission, today announced a set of “guidelines and principles” aimed at pushing tech platforms to be more pro-active about takedowns of content deemed a problem. Specifically, it’s urging them to build tools to automate the flagging of such content and to prevent it from being re-uploaded.

“The increasing availability and spreading of terrorist material and content that incites violence and hatred online is a serious threat to the security and safety of EU citizens,” it said in a press release, arguing that illegal content also “undermines citizens’ trust and confidence in the digital environment” and can thus have a knock on impact on “innovation, growth and jobs”.

“Given their increasingly important role in providing access to information, the Commission expects online platforms to take swift action over the coming months, in particular in the area of terrorism and illegal hate speech — which is already illegal under EU law, both online and offline,” it added.

In a statement on the guidance, VP for the EU’s Digital Single Market, Andrus Ansip, described the plan as “a sound EU answer to the challenge of illegal content online”, and added: “We make it easier for platforms to fulfil their duty, in close cooperation with law enforcement and civil society. Our guidance includes safeguards to avoid over-removal and ensure transparency and the protection of fundamental rights such as freedom of speech.”

The move follows a voluntary Code of Conduct, unveiled by the Commission last year, under which Facebook, Twitter, Google’s YouTube and Microsoft agreed to remove illegal hate speech that breaches their community principles in less than 24 hours.

In a recent assessment of how that code is operating on hate speech takedowns the Commission said there had been some progress. But it remains unhappy that a large portion (it now says ~28%) of takedowns are taking as long as a week.

It said it will monitor progress over the next six months to decide whether to take additional measures — including the possibility of proposing legislation if it feels not enough is being done.

Its assessment (and possible legislative proposals) will be completed by May 2018. After which it would need to put any proposed new rules to the European Parliament for MEPs to vote on, as well as to the European Council. So it’s likely there would be challenges and amendments before a consensus could be reached on any new law.

Some individual EU member states have been pushing to go further than the EC’s voluntary code of conduct on illegal hate speech on online platforms. In April, for example, the German cabinet backed proposals to hit social media firms with fines of up to €50 million if they fail to promptly remove illegal content.

A committee of UK MPs also called for the government to consider similar moves earlier this year, while the UK prime minister has led a push by G7 nations to ramp up pressure on social media firms to expedite takedowns of extremist material in a bid to check the spread of terrorist propaganda online.

That drive goes even further than the current EC Code of Conduct — with a call for takedowns of extremist material to take place within two hours.

However the EC’s proposals today on tackling illegal content online appear to be attempting to apply guidance across a rather more expansive bundle of content, saying the aim is to “mainstream good procedural practices across different forms of illegal content” — so apparently seeking to roll hate speech, terrorist propaganda and child exploitation into the same “illegal” bundle as copyright-infringing content. Which makes for a far more controversial mix.

(The EC does explicitly state the measures are not intended to be applied in respect of “fake news”, noting this is “not necessarily illegal”, ergo it’s one online problem it’s not seeking to stuff into this conglomerate bundle. “The problem of fake news will be addressed separately,” it adds.)

The Commission has divided its set of illegal content “guidelines and principles” into three areas — which it explains as follows:

  • “Detection and notification”: On this it says online platforms should cooperate more closely with competent national authorities, by appointing points of contact to ensure they can be contacted rapidly to remove illegal content. “To speed up detection, online platforms are encouraged to work closely with trusted flaggers, i.e. specialised entities with expert knowledge on what constitutes illegal content,” it writes. “Additionally, they should establish easily accessible mechanisms to allow users to flag illegal content and to invest in automatic detection technologies”
  • “Effective removal”: It says illegal content should be removed “as fast as possible” but also says it “can be subject to specific timeframes, where serious harm is at stake, for instance in cases of incitement to terrorist acts”. It adds that it intends to further analyze the specific timeframes issue. “Platforms should clearly explain to their users their content policy and issue transparency reports detailing the number and types of notices received. Internet companies should also introduce safeguards to prevent the risk of over-removal,” it adds.
  • “Prevention of re-appearance”: Here it says platforms should take “measures” to dissuade users from repeatedly uploading illegal content. “The Commission strongly encourages the further use and development of automatic tools to prevent the re-appearance of previously removed content,” it adds.

Ergo, that’s a whole lot of “automatic tools” the Commission is proposing commercial tech giants build to block the uploading of a poorly defined bundle of “illegal content”.
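Stripped to its bones, what the three pillars describe is a notice-and-takedown pipeline: flags arrive from users, trusted flaggers and automated detection; content is reviewed and, where judged illegal, removed; counter-notices act as the safeguard against over-removal; and the outcomes feed transparency reports. The Python sketch below is a schematic illustration of that flow only; the sources, statuses and review logic are hypothetical, not anything specified in the Commission’s text.

```python
# Schematic sketch of a notice-and-takedown pipeline with safeguards.
# Sources, statuses and the review logic are hypothetical illustrations,
# not rules taken from the Commission's guidelines.

from dataclasses import dataclass, field
from enum import Enum

class Source(Enum):
    USER = "user report"
    TRUSTED_FLAGGER = "trusted flagger"
    AUTOMATED = "automatic detection"

class Status(Enum):
    PENDING_REVIEW = "pending review"
    REMOVED = "removed"
    REINSTATED = "reinstated after counter-notice"
    REJECTED = "notice rejected"

@dataclass
class Notice:
    content_id: str
    source: Source
    category: str                      # e.g. "terrorism", "hate speech"
    status: Status = Status.PENDING_REVIEW

@dataclass
class TransparencyReport:
    """Loosely modelled on the 'transparency reports' of notices the guidance mentions."""
    counts: dict = field(default_factory=dict)

    def record(self, notice: Notice) -> None:
        key = (notice.source.value, notice.category, notice.status.value)
        self.counts[key] = self.counts.get(key, 0) + 1

def review(notice: Notice, is_illegal: bool) -> Notice:
    """Human review step; 'serious harm' categories would get tighter deadlines."""
    notice.status = Status.REMOVED if is_illegal else Status.REJECTED
    return notice

def counter_notice(notice: Notice, removal_was_wrong: bool) -> Notice:
    """Safeguard against over-removal: the uploader can contest a takedown."""
    if notice.status is Status.REMOVED and removal_was_wrong:
        notice.status = Status.REINSTATED
    return notice

report = TransparencyReport()
n = review(Notice("video-123", Source.TRUSTED_FLAGGER, "terrorism"), is_illegal=True)
n = counter_notice(n, removal_was_wrong=False)
report.record(n)
print(n.status.value, report.counts)
```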

Given the mix of vague guidance and expansive aims — to apparently apply the same and/or similar measures to tackle issues as different as terrorist propaganda and copyrighted material — the guidelines have unsurprisingly drawn swift criticism.

MEP Jan Philipp Albrecht, for example, couched them as “vague requests”, and described the approach as “neither effective” (i.e. in its aim of regulating tech platforms) nor “in line with rule of law principles”. He added a big thumbs down.

He’s not the only European politician with that criticism, either. Other MEPs have warned the guidance is a “step backwards” for the rule of law online — seizing specifically on the Commission’s call for automatic tools to prevent illegal content being re-uploaded as a move towards upload-filters (which is something the executive has been pushing for as part of its controversial plan to reform the bloc’s digital copyright rules).

“Installing censorship infrastructure that surveils everything people upload and letting algorithms make judgement calls about what we all can and cannot say online is an attack on our fundamental rights,” writes MEP Julia Reda in another response condemning the Commission’s plan. She then goes on to list a series of examples where algorithmic filtering failed…

MEP Marietje Schaake, meanwhile, blogged a warning about making companies “the arbiters of limitations of our fundamental rights”. “Unfortunately the good parts on enhancing transparency and accountability for the removal of illegal content are completely overshadowed by the parts that encourage automated measures by online platforms,” she added.

European digital rights group EDRi, which campaigns for free speech across the region, is also eviscerating in its response to the guidance, arguing that: “The document puts virtually all its focus on Internet companies monitoring online communications, in order to remove content that they decide might be illegal. It presents few safeguards for free speech, and little concern for dealing with content that is actually criminal.”

“The Commission makes no effort at all to reflect on whether the content being deleted is actually illegal, nor if the impact is counterproductive. The speed and proportion of removals is praised simply due to the number of takedowns,” it added, concluding by criticizing “the Commission’s approach of fully privatising freedom of expression online [and] its almost complete indifference to diligent assessment of the impacts of this privatisation”.