Tech giants pressured to auto-flag “illegal” content in Europe

Social media giants have again been put on notice that they need to do more to speed up removals of hate speech and other illegal content from their platforms in the European Union.

The bloc’s executive body, the European Commission, today announced a set of “guidelines and principles” aimed at pushing tech platforms to be more pro-active about takedowns of content deemed a problem. Specifically, it’s urging them to build tools to automate the flagging of such content and to prevent its re-upload.

“The increasing availability and spreading of terrorist material and content that incites violence and hatred online is a serious threat to the security and safety of EU citizens,” it said in a press release, arguing that illegal content also “undermines citizens’ trust and confidence in the digital environment” and can thus have a knock-on impact on “innovation, growth and jobs”.

“Given their increasingly important role in providing access to information, the Commission expects online platforms to take swift action over the coming months, in particular in the area of terrorism and illegal hate speech — which is already illegal under EU law, both online and offline,” it added.

In a statement on the guidance, VP for the EU’s Digital Single Market, Andrus Ansip, described the plan as “a sound EU answer to the challenge of illegal content online”, and added: “We make it easier for platforms to fulfil their duty, in close cooperation with law enforcement and civil society. Our guidance includes safeguards to avoid over-removal and ensure transparency and the protection of fundamental rights such as freedom of speech.”

The move follows a voluntary Code of Conduct, unveiled by the Commission last year, under which Facebook, Twitter, Google’s YouTube and Microsoft agreed to remove illegal hate speech that breaches their community principles in less than 24 hours.

In a recent assessment of how that code is operating, the Commission said there had been some progress on hate speech takedowns. But it remains unhappy that a large portion of takedowns (now around 28%, it says) are still taking as long as a week.

It said it will monitor progress over the next six months to decide whether to take additional measures — including the possibility of proposing legislation if it feels not enough is being done.

Its assessment (and possible legislative proposals) will be completed by May 2018, after which it would need to put any proposed new rules to the European Parliament for MEPs to vote on, as well as to the European Council. So there would likely be challenges and amendments before a consensus could be reached on any new law.

Some individual EU member states have been pushing to go further than the EC’s voluntary code of conduct on illegal hate speech on online platforms. In April, for example, the German cabinet backed proposals to hit social media firms with fines of up to €50 million if they fail to promptly remove illegal content.

A committee of UK MPs also called for the government to consider similar moves earlier this year, while the UK prime minister has led a push by G7 nations to ramp up pressure on social media firms to expedite takedowns of extremist material in a bid to check the spread of terrorist propaganda online.

That drive goes even further than the current EC Code of Conduct — with a call for takedowns of extremist material to take place within two hours.

However the EC’s guidance today on tackling illegal content online appears to apply to a rather more expansive bundle of content, saying the aim is to “mainstream good procedural practices across different forms of illegal content” — so apparently seeking to roll hate speech, terrorist propaganda and child exploitation into the same “illegal” bundle as copyrighted content. Which makes for a far more controversial mix.

(The EC does explicitly state the measures are not intended to be applied in respect of “fake news”, noting this is “not necessarily illegal”, ergo it’s one online problem it’s not seeking to stuff into this conglomerate bundle. “The problem of fake news will be addressed separately,” it adds.)

The Commission has divided its set of illegal content “guidelines and principles” into three areas — which it explains as follows:

  • “Detection and notification”: On this it says online platforms should cooperate more closely with competent national authorities, by appointing points of contact to ensure they can be contacted rapidly to remove illegal content. “To speed up detection, online platforms are encouraged to work closely with trusted flaggers, i.e. specialised entities with expert knowledge on what constitutes illegal content,” it writes. “Additionally, they should establish easily accessible mechanisms to allow users to flag illegal content and to invest in automatic detection technologies.”
  • “Effective removal”: It says illegal content should be removed “as fast as possible” but also says it “can be subject to specific timeframes, where serious harm is at stake, for instance in cases of incitement to terrorist acts”. It adds that it intends to further analyze the specific timeframes issue. “Platforms should clearly explain to their users their content policy and issue transparency reports detailing the number and types of notices received. Internet companies should also introduce safeguards to prevent the risk of over-removal,” it adds.
  • “Prevention of re-appearance”: Here it says platforms should take “measures” to dissuade users from repeatedly uploading illegal content. “The Commission strongly encourages the further use and development of automatic tools to prevent the re-appearance of previously removed content,” it adds. (See the sketch after this list for a sense of what such tools involve.)
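
For a concrete sense of what the simplest of these “automatic tools” looks like, here is a minimal, hypothetical sketch of hash-based re-upload blocking in Python. Every name in it is illustrative rather than drawn from any platform’s actual systems; real deployments (such as the shared industry hash databases used for terrorist imagery) rely on perceptual fingerprints that survive re-encoding, not the exact-match cryptographic hash used below.

```python
import hashlib

# Hypothetical store of fingerprints of content that moderators have removed.
BLOCKLIST: set[str] = set()


def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint of the uploaded bytes.

    Illustrative only: a cryptographic hash changes completely if the
    file is re-encoded, resized or trivially altered.
    """
    return hashlib.sha256(data).hexdigest()


def record_removal(data: bytes) -> None:
    """Called when content is taken down: remember its fingerprint."""
    BLOCKLIST.add(fingerprint(data))


def upload_is_allowed(data: bytes) -> bool:
    """Reject any upload whose fingerprint matches removed content."""
    return fingerprint(data) not in BLOCKLIST
```

The sketch’s limitation is the point: making matching robust to edits means moving to perceptual hashing and machine-learned classifiers, which is exactly where the over-blocking risk critics worry about comes from.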

Ergo, that’s a whole lot of “automatic tools” the Commission is proposing commercial tech giants build to block the uploading of a poorly defined bundle of “illegal content”.

Given the mix of vague guidance and expansive aims — to apparently apply the same and/or similar measures to tackle issues as different as terrorist propaganda and copyrighted material — the guidelines have unsurprisingly drawn swift criticism.

MEP Jan Philipp Albrecht, for example, dismissed them as “vague requests”, and described the approach as “neither effective” (i.e. in its aim of regulating tech platforms) nor “in line with rule of law principles”. He added a big thumbs down.

He’s not the only European politician with that criticism, either. Other MEPs have warned the guidance is a “step backwards” for the rule of law online — seizing specifically on the Commission’s call for automatic tools to prevent illegal content being re-uploaded as a move towards upload-filters (which is something the executive has been pushing for as part of its controversial plan to reform the bloc’s digital copyright rules).

“Installing censorship infrastructure that surveils everything people upload and letting algorithms make judgement calls about what we all can and cannot say online is an attack on our fundamental rights,” writes MEP Julia Reda in another response condemning the Commission’s plan. She then goes on to list a series of examples where algorithmic filtering failed…

MEP Marietje Schaake, meanwhile, blogged a warning against making companies “the arbiters of limitations of our fundamental rights”. “Unfortunately the good parts on enhancing transparency and accountability for the removal of illegal content are completely overshadowed by the parts that encourage automated measures by online platforms,” she added.

European digital rights group EDRi, which campaigns for free speech across the region, also eviscerates the guidance in its response, arguing that: “The document puts virtually all its focus on Internet companies monitoring online communications, in order to remove content that they decide might be illegal. It presents few safeguards for free speech, and little concern for dealing with content that is actually criminal.”

“The Commission makes no effort at all to reflect on whether the content being deleted is actually illegal, nor if the impact is counterproductive. The speed and proportion of removals is praised simply due to the number of takedowns,” it added, concluding that the Commission’s approach of “fully privatising freedom of expression online” shows an almost complete indifference to diligent assessment of the impacts of this privatisation.
