All posts in “terrorism”

Europe to push for one-hour takedown law for terrorist content

The European Union’s executive body is doubling down on its push for platforms to pre-filter the Internet, publishing a proposal today that would require all websites to monitor uploads so they can quickly remove terrorist content.

The Commission handed platforms an informal one-hour rule for removing terrorist content back in March. It’s now proposing to turn that into law, to prevent such violent propaganda from spreading over the Internet.

For now the ‘rule of thumb’ regime continues to apply. But the Commission is fleshing out its thinking with a more expansive proposal for a regulation aimed at “preventing the dissemination of terrorist content online”.

As per usual EU processes, the Commission’s proposal would need to gain the backing of Member States and the EU parliament before it could be cemented into law.

One major point to note here is that existing EU law does not allow Member States to impose a general obligation on hosting service providers to monitor the information that users transmit or store. But in the proposal the Commission argues that, given the “grave risks associated with the dissemination of terrorist content”, states could be allowed to “exceptionally derogate from this principle under an EU framework”.

So it’s essentially suggesting that Europeans’ fundamental rights might not, in fact, be so fundamental. (Albeit, European judges might well take a different view — and it’s very likely the proposals could face legal challenges should they be cast into law.)

What is being suggested would also apply to any hosting service provider that offers services in the EU — “regardless of their place of establishment or their size”. So, seemingly, not just large platforms, like Facebook or YouTube, but — for example — anyone hosting a blog that includes a free-to-post comment section.

Websites that fail to promptly take down terrorist content would face fines — with the level of penalties being determined by EU Member States (Germany has already legislated to enforce social media hate speech takedowns within 24 hours, setting the maximum fine at €50M).

“Penalties are necessary to ensure the effective implementation by hosting service providers of the obligations pursuant to this Regulation,” the Commission writes, envisaging the most severe penalties being reserved for systematic failures to remove terrorist material within one hour. 

It adds: “When determining whether or not financial penalties should be imposed, due account should be taken of the financial resources of the provider.” So — for example — individuals with websites who fail to moderate their comment section fast enough might not be served the very largest fines, presumably.

The proposal also encourages platforms to develop “automated detection tools” so they can take what it terms “proactive measures proportionate to the level of risk and to remove terrorist material from their services”.

So the Commission’s continued push for Internet pre-filtering is clear. (This is also a feature of its copyright reform — which is being voted on by MEPs later today.)

Albeit, it’s not alone on that front. Earlier this year the UK government went so far as to pay an AI company to develop a terrorist propaganda detection tool that used machine learning algorithms trained to automatically detect propaganda produced by the Islamic State terror group — with a claimed “extremely high degree of accuracy”. (At the time it said it had not ruled out forcing tech giants to use it.)

What is terrorist content for the purposes of this proposal? The Commission refers to an earlier EU directive on combating terrorism — which defines the material as “information which is used to incite and glorify the commission of terrorist offences, encouraging the contribution to and providing instructions for committing terrorist offences as well as promoting participation in terrorist groups”.

And on that front you do have to wonder whether, for example, some of U.S. president Donald Trump’s comments last year after the far-right rally in Charlottesville, where a counter-protestor was murdered by a white supremacist (Trump suggested there were “fine people” among those same murderous and violent white supremacists), might fall under that ‘glorifying the commission of terrorist offences’ umbrella should, say, someone repost them to a comment section that was viewable in the EU…

Safe to say, even terrorist propaganda can be subjective. And the proposed regime will inevitably encourage borderline content to be taken down — having a knock-on impact upon online freedom of expression.

The Commission also wants websites and platforms to share information with law enforcement and other relevant authorities and with each other — suggesting the use of “standardised templates”, “response forms” and “authenticated submission channels” to facilitate “cooperation and the exchange of information”.

It tackles the problem of what it refers to as “erroneous removal” — i.e. content that’s removed after being reported or erroneously identified as terrorist propaganda but which is subsequently, under requested review, determined not to be — by placing an obligation on providers to have “remedies and complaint mechanisms to ensure that users can challenge the removal of their content”.

So platforms and websites will be obligated to police and judge speech, which they already do, of course. But the proposal doubles down on turning online content hosts into judges and arbiters of that same content.

The regulation also includes transparency obligations on the steps being taken against terrorist content by hosting service providers — which the Commission claims will ensure “accountability towards users, citizens and public authorities”. 

Other perspectives are of course available… 

The Commission envisages all taken down content being retained by the host for a period of six months so that it could be reinstated if required, i.e. after a valid complaint — to ensure what it couches as “the effectiveness of complaint and review procedures in view of protecting freedom of expression and information”.

It also sees the retention of takedowns helping law enforcement — meaning platforms and websites will continue to be co-opted into state law enforcement and intelligence regimes, getting further saddled with the burden and cost of having to safely store and protect all this sensitive data.

(On that the EC just says: “Hosting service providers need to put in place technical and organisational safeguards to ensure the data is not used for other purposes.”)

The Commission would also create a system for monitoring the monitoring it’s proposing platforms and websites undertake, thereby further extending the proposed bureaucracy. It says it would establish a “detailed programme for monitoring the outputs, results and impacts” within one year of the regulation being applied; report on its implementation and the transparency elements within two years; and evaluate the entire functioning of the regulation four years after it comes into force.

The executive body says it consulted widely ahead of forming the proposals — including running an open public consultation, carrying out a survey of 33,500 EU residents, and talking to Member States’ authorities and hosting service providers.

“By and large, most stakeholders expressed that terrorist content online is a serious societal problem affecting internet users and business models of hosting service providers,” the Commission writes. “More generally, 65% of respondents to the Eurobarometer survey considered that the internet is not safe for its users and 90% of the respondents consider it important to limit the spread of illegal content online.

“Consultations with Member States revealed that while voluntary arrangements are producing results, many see the need for binding obligations on terrorist content, a sentiment echoed in the European Council Conclusions of June 2018. While overall, the hosting service providers were in favour of the continuation of voluntary measures, they noted the potential negative effects of emerging legal fragmentation in the Union.

“Many stakeholders also noted the need to ensure that any regulatory measures for removal of content, particularly proactive measures and strict timeframes, should be balanced with safeguards for fundamental rights, notably freedom of speech. Stakeholders noted a number of necessary measures relating to transparency, accountability as well as the need for human review in deploying automated tools.”

Twitter claims more progress on squeezing terrorist content

Twitter has put out its latest Transparency Report providing an update on how many terrorist accounts it has suspended on its platform — with a cumulative 1.2 million+ suspensions since August 2015.

During the reporting period of July 1, 2017 through December 31, 2017 — for this, Twitter’s 12th Transparency Report — the company says a total of 274,460 accounts were permanently suspended for violations related to the promotion of terrorism.

“This is down 8.4% from the volume shared in the previous reporting period and is the second consecutive reporting period in which we’ve seen a drop in the number of accounts being suspended for this reason,” it writes. “We continue to see the positive, significant impact of years of hard work making our site an undesirable place for those seeking to promote terrorism, resulting in this type of activity increasingly shifting away from Twitter.”

Six months ago the company claimed big wins in squashing terrorist activity on its platform — attributing drops in reports of pro-terrorism accounts then to the success of in-house tech tools in driving terrorist activity off its platform (and perhaps inevitably rerouting it towards alternative platforms — Telegram being chief among them, according to experts on online extremism).

At that time Twitter reported a total of 299,649 pro-terrorism accounts had been suspended — which it said was a 20 per cent drop on figures reported for July through December 2016.

So the size of the drops is also shrinking. Though Twitter suggests that’s because it’s winning the battle to discourage terrorists from trying in the first place.

For its latest reporting period, ending December 2017, Twitter says 93% of the accounts were flagged by its internal tech tools — with 74% of those also suspended before their first tweet, i.e. before they’d been able to spread any terrorist propaganda.

Which means that around a quarter of those flagged accounts did still manage to get out at least one terror tweet before being suspended.

This proportion is essentially unchanged since the last reporting period (when Twitter reported suspending 75% before their first tweet) — so whatever tools it’s using to automate terror account identification and blocking appear to be in a steady state, rather than gaining in ability to pre-filter terrorist content.

Twitter also specifies that government reports of violations related to the promotion of terrorism represent less than 0.2% of all suspensions in the most recent reporting period — or 597 to be exact.

As with its prior transparency report, a far larger number of Twitter accounts are being reported by governments for “abusive behavior” — which refers to long-standing problems on Twitter’s platform such as hate speech, racism, misogyny and trolling.

And in December a Twitter policy staffer was roasted by UK MPs during a select committee session after the company was again shown failing to remove violent, threatening and racist tweets — which committee staffers had reported months earlier in that case.

Twitter’s latest Transparency Report specifies that governments reported 6,254 Twitter accounts for abusive behavior — yet the company only actioned a quarter of these reports.

That’s still up on the prior reporting period, though, when it reported actioning a paltry 12% of this type of report.

The issue of abuse and hate speech on online platforms generally has rocketed up the political agenda in recent years, especially in Europe — where Germany now has a tough new law to regulate takedowns.

Platforms’ content moderation policies certainly remain a bone of contention for governments and lawmakers.

Last month the European Commission set out a new rule of thumb for social media platforms — saying it wants them to take down illegal content within an hour of it being reported.

This is not legislation yet, but the threat of EU-wide laws being drafted to regulate content takedowns is being kept on the table as a way to encourage platforms to improve their performance voluntarily.

Where terrorist content specifically is concerned, the Commission has also been pushing for increased use by tech firms of what it calls “proactive measures”, including “automated detection”.

And in February the UK government also revealed it had commissioned a local AI firm to build an extremist content blocking tool — saying it could decide to force companies to use it.

So political pressure remains especially high on that front.

Returning to abusive content, Twitter’s report specifies that the majority of the tweets and accounts reported to it by governments which it did remove violated its rules in the following areas: impersonation (66%), harassment (16%), and hateful conduct (12%).

This is an interesting shift on the mix from the last reported period when Twitter said content was removed for: harassment (37%), hateful conduct (35%), and impersonation (13%).

It’s difficult to interpret exactly what that development might mean. One possibility is that impersonation could cover disinformation agents, such as Kremlin bots, which Twitter has been suspending in recent months as part of investigations into election interference — an issue that’s been shown to be a problem across social media, from Facebook to Tumblr.

Governments may also have become more focused on reporting accounts to Twitter that they believe are fronts for foreign agents seeking to spread false information to try to meddle with democratic processes.

In January, for example, the UK government announced it would be setting up a civil service unit to combat state-led disinformation campaigns.

And removing an account that’s been identified as a fake — with the help of government intelligence — is perhaps easier for Twitter than judging whether a particular piece of robust speech might have crossed the line into harassment or hate speech.

Judging the health of conversations on its platform is also something the company recently asked outsiders to help it with. So it doesn’t appear overly confident in making those kinds of judgement calls.

Unilever warns social media to clean up “toxic” content


Consumer goods giant Unilever, a maker of branded soaps, foodstuffs and personal care items and also one of the world’s biggest online advertisers, has fired a warning shot across the bows of social media giants by threatening to pull ads from digital platforms if they don’t do more to mitigate the spread of what it dubs “toxic” online content — be it fake news, terrorism or child exploitation.

“It is critical that our brands remain not only in a safe environment, but a suitable one,” CMO Keith Weed is expected to say at the annual Interactive Advertising Bureau conference in California today, according to extracts from the speech provided to us ahead of delivery. “Unilever, as a trusted advertiser, do not want to advertise on platforms which do not make a positive contribution to society.”

The remarks echo comments made last month by UK prime minister Theresa May, who singled out social media firms for acute censure, saying they “simply cannot stand by while their platforms are used to facilitate child abuse, modern slavery or the spreading of terrorist or extremist content”.

Unilever’s Weed is expected to argue that consumers are worried about “fraudulent practice, fake news, and Russians influencing the U.S. election”, and are sensitive to the brands they buy becoming tainted by association with ad placement alongside awful stuff like terrorist propaganda and content that exploits children.

“2018 is either the year of techlash, where the world turns on the tech giants — and we have seen some of this already — or the year of trust. The year where we collectively rebuild trust back in our systems and our society,” he will argue.

Online ad giants Facebook and Google have increasingly found themselves on the hook for enabling the spread of socially divisive, offensive and at times out-and-out illegal content via their platforms — in no small part as a consequence of the popularity of their content-sharing hubs.

While the Internet is filled with all sorts of awful stuff, in its darkest corners, the mainstream reach of platforms like Facebook and YouTube puts them squarely in the political firing line for all sorts of content issues — from political disinformation to socially divisive hate speech.

The fact Facebook and Google are also the chief financial beneficiaries of online ad spending — together accounting for around 60 per cent of online ad spending in the US, for example — makes it difficult for them to dodge the charge that their businesses directly benefit from divisive and exploitative content — all the way from clickbait to fake news to full blown online extremism.

Facebook’s 2016 dismissal of concerns about fake news impacting democracy as a “pretty crazy idea” has certainly not aged well. And CEO Mark Zuckerberg has since admitted his platform is broken and made it his personal goal for 2018 to “fix Facebook”.

Both companies faced a growing backlash last year — with a number of advertisers and brands pulling ads from YouTube over concerns about the types of content that their marketing messages were being served alongside, thanks to the programmatic (i.e. automatic) nature of the ad placement. The platform also took renewed flak for the type of content it routinely serves up to kids.

Facebook, meanwhile, got a political grilling over hosting Kremlin disinformation — though Russia’s online dis-ops clearly sprawl across multiple tech platforms. But again, Facebook’s massive reach gifts it a greater share of blame, as the most effective channel (at least that we currently know of) for political disinformation muck spreading. (Last fall, for example, it was forced to admit that ~80,000 pieces of Russian-backed content may have been viewed by 126M Facebook users during the 2016 US election.)

Facebook has been working on adding ad transparency tools to its platform — though it remains to be seen whether it can do enough to be judged to be effectively self regulating. It doesn’t have the greatest record on that front, frankly speaking.

Last year Google also responded with alacrity to boycotts by its own advertisers, saying it would expand controls to give brands more say over where their ads appear on YouTube, and take “a tougher stance on hateful, offensive and derogatory content”, including demonetizing more types of videos. It has also made a policy change on known terrorists’ content, though it has continued to disappoint politicians demanding better moderation.

As part of its attempts to de-risk the user generated content that its business relies on, and thus avoid the risk of further spooking already spooked advertisers, Google even recently began removing YouTube videos of the so-called ‘Tide Pod Challenge’ — i.e. where people film themselves trying to consume laundry detergent. Videos which it had previously left up, despite having a policy against content that encourages dangerous activities.

Incidentally, Tide Pods aren’t a Unilever brand, but their maker, Procter & Gamble, also roasted social media firms last year — calling for them to “grow up” and slamming the “non-traditional media supply chain” for being “murky at best, and fraudulent at worst”.

Unilever’s Weed also takes aim at ad fraud in his speech, noting how it’s partnered with IBM to pilot a new blockchain tech for advertising — which he touts as having “the potential to drastically reduce advertising fraud by recording how media is purchased, delivered and interacted with by target audiences, providing reliable measurement metrics”. (Can blockchain really fix click fraud? That Unilever is actively entertaining the idea arguably shows how far trust levels in the digital ad space have fallen.)

But the main message is tilted at social media giants’ need to “build social responsibility” — and invest in trust and transparency to avoid damaging the precious substance known as ‘brand trust’ which the tech giants’ revenue-generating digital advertisers depend on.

Though, blockchain experiments aside, Unilever seems rather less publicly clear on exactly what it thinks tech giants should do to vanquish the toxic content their business models have (inadvertently or otherwise) been financially incentivizing.

Governments in Europe have been leaning on social media giants to accelerate development of tech tools that can automatically flag and even remove problem content (such as hate speech) before it has a chance to spread — though that approach is hardly uncontroversial, and critics argue it whiffs of censorship.

Germany has even passed a hate speech social media law, introducing fines of up to €50M for platforms that fail to promptly remove illegal content.

Earlier this month, meanwhile, Germany’s national competition regulator also announced a probe of the online ad sector, citing concerns that a lack of transparency could be skewing market conditions.

Weed’s message to social media can be summed up as: This is a problem we’ll work with you to fix, but you need to agree to work on fixing it. “As a brand-led business, Unilever needs its consumers to have trust in our brands,” he’ll say. “We can’t do anything to damage that trust – including the choice of channels and platforms we use. So, 2018 is the year when social media must win trust back.”

Unilever is making three specific “commitments” relating to its digital media supply chain:

  1. that it will not invest in “platforms or environments that do not protect our children or which create division in society, and promote anger or hate”, further emphasizing: “We will prioritise investing only in responsible platforms that are committed to creating a positive impact in society”
  2. that it is committed to creating “responsible content” — with an initial focus on tackling gender stereotypes in advertising
  3. that it will push for what it dubs “responsible infrastructure”, saying it will only partner with organizations “which are committed to creating better digital infrastructure, such as aligning around one measurement system and improving the consumer experience”

So, while the company is not yet issuing an explicit ultimatum to Facebook and Google, it’s certainly putting them on notice that the political pressure they’ve been facing could absolutely turn into a major commercial headache too, if they don’t take tackling online muck spreading seriously.

tl;dr massive, mainstream success has a flip side. And boy is big tech going to feel it this year.

Facebook and Google both declined to comment on Unilever’s intervention.

Update: A Facebook spokesperson offered comment following publication, saying, “We fully support Unilever’s commitments and are working closely with them.”


Telegram and social media giants spanked in UK PM’s Davos speech


Social media giants have once again been singled out for a high-profile public spanking over social responsibility and illegal online content in Europe.

Giving a speech at the World Economic Forum in Davos, Switzerland this afternoon, UK prime minister Theresa May said: “Technology companies still need to do more in stepping up to their responsibilities for dealing with harmful and illegal online activity.

“Companies simply cannot stand by while their platforms are used to facilitate child abuse, modern slavery or the spreading of terrorist or extremist content.”

May has been banging this particular drum since becoming leader of her party (and the UK) in 2016. Last year she pressed her case to G7 leaders, and was today touting “progress” on international co-operation between governments and tech firms to “move further and faster in reducing the time it takes to remove terrorist content online and increase significantly their efforts to stop it being uploaded in the first place”.

But today she said more effort is needed.

“We need to go further, so that ultimately this content is removed automatically,” she told a Davos audience that included other world leaders and government ministers. “These companies have some of the best brains in the world. They must focus their brightest and best on meeting these fundamental social responsibilities.”

The European Commission has also been pushing tech firms to use automatic detection and filtering systems to pro-actively detect, remove and disable illegal online content — and earlier this month it warned it could seek to legislate at an EU level on the issue if companies aren’t deemed to be doing enough. Though critics of the EC’s trajectory here have warned it poses risks to freedom of speech and expression online.

On social media hate speech, at least, Facebook, Google and Twitter got an EC thumbs up for making “steady progress” in the Commission’s third review since the introduction of a voluntary Code of Conduct in 2016. And it now looks less likely that the EC will push to legislate on that (as Germany already has).

May saved her most pointed naming and shaming for a single tech company: Telegram, implying the messaging app has become the app of choice for “terrorists and pedophiles”.

“We also need cross-industry responses because smaller platforms can quickly become home to criminals and terrorists,” she said. “We have seen that happen with Telegram, and we need to see more co-operation from smaller platforms like this. No one wants to be known as the terrorists’ platform. Or the first choice app for pedophiles.”

We reached out to Telegram founder Pavel Durov for comment — who, according to his Twitter, is also attending Davos — but at the time of writing he had not responded.

Ahead of May’s speech he did retweet a link to a blog post from last year, denouncing governments for seeking to undermine encryption and pointing out that terrorists can always build their own encrypted apps to circumvent government attempts to control apps. (He also included a new remark — tweeting: “Some politicians tend to blame tools for actions one can perform with these tools.”)

May went on to urge governments to look closely at the laws around social media companies and even consider whether there’s a case for new bespoke rules for regulating content on online platforms. Though it’s clear she has not yet made any decisions on that front.

“As governments it is also right that we look at the legal liability that social media companies have for the content shared on their sites,” she said. “The status quo is increasingly unsustainable as it becomes clear these platforms are no longer just passive hosts. But applying the existing standards of liability for publishers is not straightforward so we need to consider what is most appropriate for the modern economy.

“We are already working with our European and international partners, as well as the businesses themselves, to understand how we can make the existing frameworks and definitions work better and to assess in particular whether there is a case for developing a new definition for these platforms. We will continue to do so.”

She also urged investors and shareholders to find their social consciences and apply pressure to tech giants to take more societal responsibility in how they operate — raising the example of a pension and activist investment fund doing just that earlier this month, applying pressure on Facebook and Twitter over issues such as sexual harassment, fake news, hate speech and other forms of abuse.

“Investors can make a big difference here by ensuring trust and safety issues are being properly considered and I urge them to do so,” she said.

She also cited a recent survey conducted by PR firm Edelman — which suggests social media platforms are facing a global consumer trust crisis.

“The business model of a company is not sustainable if it does not command public support and consent,” she added.

Europe keeps up the pressure on social media over illegal content takedowns


The European Union’s executive body is continuing to pressure social media firms to get better at removing illegal content from their platforms before it has a chance to spread further online.

Currently there is a voluntary Code of Conduct on countering illegal online hate speech across the European Union. But the Commission has previously indicated it could seek to legislate if it feels companies aren’t doing enough.

After attending a meeting on the topic today, Andrus Ansip, the European Commissioner for the Digital Single Market, tweeted to say the main areas tech firms need to be addressing are that “takedown should be fast, reliable, effective; pro-activity to detect, remove and disable content using automatic detection and filtering; adequate safeguards and counter notice”.

While the notion of tech giants effectively removing illegal content might be hard to object to in principle, such a laundry list of requirements underlines the complexities involved in pushing commercial businesses to execute context-based speech policing decisions in a hurry.

For example, a new social media hate speech law in Germany, which as of this month is being actively enforced, has already drawn criticism and calls for its abolition after Twitter blocked a satirical magazine that had parodied anti-Muslim comments made by the far-right Alternative for Germany political party.

Another problematic aspect of the Commission’s push is that it appears keen to bundle a very wide spectrum of ‘illegal content’ into the same response category, conflating issues as diverse as hate speech, terrorism, child exploitation and copyright infringement.

In September the EC put out a set of “guidelines and principles” which it said were aimed at pushing tech firms to be more pro-active about takedowns of illegal content, specifically urging them to build tools to automate the flagging of such content and prevent its re-upload. But the measures were quickly criticized for being overly vague and posing a risk to freedom of expression online.

It’s not clear what kind of “adequate safeguards” Ansip is implying could be baked into the auto-detection and filtering systems the EC wants (we’ve asked and will update this story with any response). But there’s a clear risk that an over-emphasis on pushing tech giants to automate takedowns could result in censorship of controversial content on mainstream platforms.

There’s no public sign the Commission has picked up on these specific criticisms, with its latest missive flagging up both “violent and extremist content” and “breaches of intellectual property rights” as targets.

Last fall the Commission said it would monitor tech giants’ progress vis-a-vis content takedowns over the next six months to decide whether to take additional measures — such as drafting legislation. Though it has also previously lauded progress being made.

In a statement yesterday, ahead of today’s meeting, the EC kept up the pressure on tech firms — calling for “more efforts and progress”:

The Commission is counting on online platforms to step up and speed up their efforts to tackle these threats quickly and comprehensively, including closer cooperation with national and enforcement authorities, increased sharing of know-how between online players and further action against the reappearance of illegal content.

We will continue to promote cooperation with social media companies to detect and remove terrorist and other illegal content online, and if necessary, propose legislation to complement the existing regulatory framework.

In the face of rising political pressure and a series of content-related scandals, both Google and Facebook last year announced they would be beefing up their content moderation teams by thousands of extra staff apiece.
