Google to ramp up AI efforts to ID extremism on YouTube

Last week Facebook solicited help with what it dubbed “hard questions” — including how it should tackle the spread of terrorism propaganda on its platform.

Yesterday Google followed suit with its own public pronouncement, via an op-ed in the FT newspaper, explaining how it’s ramping up measures to tackle extremist content.

Both companies have been coming under increasing political pressure, especially in Europe, to do more to quash extremist content — with politicians in the UK and Germany, among others, pointing the finger of blame at platforms such as YouTube for hosting hate speech and extremist content.

Europe has suffered a spate of terror attacks in recent years, with four in the UK alone since March. And governments in the UK and France are currently considering whether to introduce a new liability for tech platforms that fail to promptly remove terrorist content — arguing that terrorists are being radicalized with the help of such content.

Earlier this month the UK’s prime minister also called for international agreements between allied, democratic governments to “regulate cyberspace to prevent the spread of extremism and terrorist planning”.

Meanwhile, in Germany, a proposal that includes big fines for social media firms that fail to take down hate speech has already gained government backing.

Beyond the threat of fines being written into law, there’s an additional commercial incentive for Google after YouTube faced an advertiser backlash earlier this year related to ads being displayed alongside extremist content, with several companies pulling their ads from the platform.

Google subsequently updated the platform’s guidelines to stop ads being served against controversial content, including videos containing “hateful content” and “incendiary and demeaning content”, so their makers could no longer monetize it via Google’s ad network. The company still needs to be able to identify such content for this measure to succeed, however.

Rather than requesting ideas for combating the spread of extremist content, as Facebook did last week, Google is simply stating what its plan of action is — detailing four additional steps it says it’s going to take, and conceding that more action is needed to limit the spread of violent extremism.

“While we and others have worked for years to identify and remove content that violates our policies, the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done. Now,” writes Kent Walker, Google’s general counsel.

The four additional steps Walker lists are:

  1. increased use of machine learning technology to try to automatically identify “extremist and terrorism-related videos” — though the company cautions this “can be challenging”, pointing out that news networks can also broadcast terror attack videos, for example. “We have used video analysis models to find and assess more than 50 per cent of the terrorism-related content we have removed over the past six months. We will now devote more engineering resources to apply our most advanced machine learning research to train new ‘content classifiers’ to help us more quickly identify and remove extremist and terrorism-related content,” writes Walker.
  2. more independent (human) experts in YouTube’s Trusted Flagger program — aka people in the YouTube community who have a high accuracy rate for flagging problem content. Google says it will add 50 “expert NGOs”, in areas such as hate speech, self-harm and terrorism, to the existing list of 63 organizations that are already involved with flagging content, and will be offering “operational grants” to support them. It is also going to work with more counter-extremist groups to try to identify content that may be being used to radicalize and recruit extremists.
    “Machines can help identify problematic videos, but human experts still play a role in nuanced decisions about the line between violent propaganda and religious or newsworthy speech. While many user flags can be inaccurate, Trusted Flagger reports are accurate over 90 per cent of the time and help us scale our efforts and identify emerging areas of concern,” writes Walker.
  3. a tougher stance on controversial videos that do not clearly violate YouTube’s community guidelines — including by adding interstitial warnings to videos that contain inflammatory religious or supremacist content. Google notes these videos also “will not be monetised, recommended or eligible for comments or user endorsements” — the idea being they will have less engagement and be harder to find. “We think this strikes the right balance between free expression and access to information without promoting extremely offensive viewpoints,” writes Walker.
  4. expanding counter-radicalisation efforts by working with (other Alphabet division) Jigsaw to implement the “Redirect Method” more broadly across Europe. “This promising approach harnesses the power of targeted online advertising to reach potential Isis recruits, and redirects them towards anti-terrorist videos that can change their minds about joining. In previous deployments of this system, potential recruits have clicked through on the ads at an unusually high rate, and watched over half a million minutes of video content that debunks terrorist recruiting messages,” says Walker.
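The interplay between the first three steps — machine classifiers, high-precision Trusted Flagger reports and a tiered removal/limitation policy — can be sketched in a few lines. This is a hypothetical illustration, not Google’s actual system: the `Video` fields, weights and thresholds are all invented for the sketch, with Trusted Flagger reports weighted heavily to reflect the op-ed’s claim of over-90-per-cent accuracy.

```python
# Hypothetical sketch of how a classifier score and flagger reports might
# combine into the outcomes Walker describes: removal for clear violations,
# and a "limited" state (interstitial, demonetised, not recommended) for
# borderline videos. All weights and thresholds are invented.

from dataclasses import dataclass

@dataclass
class Video:
    classifier_score: float   # 0.0-1.0, from a (hypothetical) content classifier
    trusted_flags: int        # reports from Trusted Flagger program members
    user_flags: int           # ordinary user reports

def moderate(video: Video) -> str:
    # Trusted Flagger reports are high-precision signals, so they weigh
    # far more than ordinary user flags in this toy scoring.
    signal = (video.classifier_score
              + 0.3 * video.trusted_flags
              + 0.02 * video.user_flags)
    if signal >= 1.0:
        return "remove"   # clear violation of community guidelines
    if signal >= 0.5:
        return "limit"    # interstitial warning, no ads, not recommended
    return "allow"

print(moderate(Video(0.9, 2, 10)))  # strong signals on all fronts
print(moderate(Video(0.4, 1, 0)))   # borderline: limited rather than removed
print(moderate(Video(0.1, 0, 3)))   # weak signals: left up
```

The middle tier is the notable design choice: it lets a platform suppress distribution and monetization of borderline material without taking the free-speech hit of outright removal.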

Despite increasing political pressure over extremism — and the attendant bad PR (not to mention threat of big fines) — Google is evidently hoping to retain its torch-bearing stance as a supporter of free speech by continuing to host controversial hate speech on its platform, just in a way that means it can’t be directly accused of providing violent individuals with a revenue stream. (Assuming it’s able to correctly identify all the problem content, of course.)

Whether this compromise will please either side on the ‘remove hate speech’ vs ‘retain free speech’ debate remains to be seen. The risk is it will please neither demographic.

The success of the approach will also stand or fall on how quickly and accurately Google is able to identify content deemed a problem — and policing user-generated content at such scale is a very hard problem.

It’s not clear exactly how many thousands of content reviewers Google employs at this point — we’ve asked and will update this post with any response.

Facebook recently added an additional 3,000 to its headcount, bringing the total number of reviewers to 7,500. CEO Mark Zuckerberg also wants to apply AI to the content identification issue but has previously said it’s unlikely to be able to do this successfully for “many years”.

Touching on what Google has been doing already to tackle extremist content, i.e. prior to these additional measures, Walker writes: “We have thousands of people around the world who review and counter abuse of our platforms. Our engineers have developed technology to prevent re-uploads of known terrorist content using image-matching technology. We have invested in systems that use content-based signals to help identify new videos for removal. And we have developed partnerships with expert groups, counter-extremism agencies, and the other technology companies to help inform and strengthen our efforts.”
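The re-upload prevention Walker mentions typically rests on perceptual hashing: hash known bad content once, then compare new uploads by similarity rather than exact bytes. The toy “difference hash” below, over a tiny grayscale grid, is an illustrative sketch of the principle only — production systems hash video frames with far more robust techniques.

```python
# Illustrative sketch of hash-based re-upload matching. A difference hash
# encodes one bit per horizontal neighbour comparison, so small pixel-value
# shifts from re-encoding rarely change the hash, while unrelated images
# land far away in Hamming distance.

def dhash(grid):
    """Difference hash: one bit per horizontal neighbour comparison."""
    bits = 0
    for row in grid:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    return bin(a ^ b).count("1")

KNOWN_HASHES = set()

def register_known(grid):
    KNOWN_HASHES.add(dhash(grid))

def is_reupload(grid, max_distance=2):
    h = dhash(grid)
    return any(hamming(h, k) <= max_distance for k in KNOWN_HASHES)

original = [[10, 20, 15, 30], [5, 40, 35, 25], [50, 45, 60, 55]]
register_known(original)

# A re-encode shifts pixel values slightly but rarely flips the ordering
# of neighbouring pixels, so the hash stays close to the original's.
reencoded = [[12, 21, 16, 29], [6, 41, 36, 26], [51, 46, 61, 54]]
print(is_reupload(reencoded))

unrelated = [[90, 10, 80, 5], [3, 70, 2, 60], [88, 1, 77, 4]]
print(is_reupload(unrelated))
```

The same look-up-by-similarity pattern underpins the shared industry hash database the companies announced in December 2016.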

Social media firms should face fines for hate speech failures, urge UK MPs

Social media giants Facebook, YouTube and Twitter have once again been accused of taking a “laissez-faire approach” to moderating hate speech content on their platforms.

This follows a stepping up of political rhetoric against social platforms in recent months in the UK, following a terror attack in London in March — after which Home Secretary Amber Rudd called for tech firms to do more to help block the spread of terrorist content online.

In a highly critical report looking at the spread of hate, abuse and extremism on Facebook, YouTube and Twitter, a UK parliamentary committee has suggested the government looks at imposing fines on social media firms for content moderation failures.

It’s also calling for a review of existing legislation to ensure clarity about how the law applies in this area.

“Social media companies currently face almost no penalties for failing to remove illegal content. There are too many examples of social media companies being made aware of illegal material yet failing to remove it, or to do so in a timely way. We recommend that the government consult on a system of escalating sanctions to include meaningful fines for social media companies which fail to remove illegal content within a strict timeframe,” the committee writes in the report.

Last month, the German government backed a draft law which includes proposals to fine social media firms up to €50 million if they fail to remove illegal hate speech within 24 hours after a complaint is made.

A European Union-wide Code of Conduct on swiftly removing hate speech, which was agreed between the Commission and social media giants a year ago, does not include any financial penalties for failure — but there are signs some European governments are becoming convinced of the need to legislate to force social media companies to improve their content moderation practices.

The UK Home Affairs committee report describes it as “shockingly easy” to find examples of material intended to stir up hatred against ethnic minorities on all three of the social media platforms it looked at for the report.

It urges social media companies to introduce “clear and well-funded arrangements for proactively identifying and removing illegal content — particularly dangerous terrorist content or material related to online child abuse”, calling for similar co-operation and investment to combat extremist content as the tech giants have already put into collaborating to tackle the spread of child abuse imagery online.

The committee’s investigation, which started in July last year following the murder of a UK MP by a far right extremist, was intended to be more wide-ranging. However, because the work was cut short by the UK government calling an early general election, the committee says it has published specific findings on how social media companies are addressing hate crime and illegal content online — having taken evidence for this from Facebook, Google and Twitter.

“It is very clear to us from the evidence we have received that nowhere near enough is being done. The biggest and richest social media companies are shamefully far from taking sufficient action to tackle illegal and dangerous content, to implement proper community standards or to keep their users safe. Given their immense size, resources and global reach, it is completely irresponsible of them to fail to abide by the law, and to keep their users and others safe,” it writes.

“If social media companies are capable of using technology immediately to remove material that breaches copyright, they should be capable of using similar technology to stop extremists re-posting or sharing illegal material under a different name. We believe that the government should now assess whether the continued publication of illegal material and the failure to take reasonable steps to identify or remove it is in breach of the law, and how the law and enforcement mechanisms should be strengthened in this area.”

The committee flags multiple examples where it says extremist content was reported to the tech giants but these reports were not acted on adequately — calling out Google, especially, for “weakness and delays” in response to reports it made of illegal neo-Nazi propaganda on YouTube.

It also notes the three companies refused to tell it exactly how many people they employ to moderate content, and exactly how much they spend on content moderation.

The report makes especially uncomfortable reading for Google with the committee directly accusing it of profiting from hatred — arguing it has allowed YouTube to be “a platform from which extremists have generated revenue”, and pointing to the recent spate of advertisers pulling their marketing content from the platform after it was shown being displayed alongside extremist videos. Google responded to the high profile backlash from advertisers by pulling ads from certain types of content.

“Social media companies rely on their users to report extremist and hateful content for review by moderators. They are, in effect, outsourcing the vast bulk of their safeguarding responsibilities at zero expense. We believe that it is unacceptable that social media companies are not taking greater responsibility for identifying illegal content themselves,” the committee writes.

The committee suggests social media firms should have to contribute to the cost to the taxpayer of policing their platforms — pointing to how football teams are required to pay for policing in their stadiums and the immediate surrounding areas under UK law as an equivalent model.

It is also calling for social media firms to publish quarterly reports on their safeguarding efforts, including —

  • analysis of the number of reports received on prohibited content
  • how the companies responded to reports
  • what action is being taken to eliminate such content in the future

“It is in everyone’s interest, including the social media companies themselves, to find ways to reduce pernicious and illegal material,” the committee writes. “Transparent performance reports, published regularly, would be an effective method to drive up standards radically and we hope it would also encourage competition between platforms to find innovative solutions to these persistent problems. If they refuse to do so, we recommend that the government consult on requiring them to do so.”

The report, which is replete with pointed adjectives like “shocking”, “shameful”, “irresponsible” and “unacceptable”, follows several critical media reports in the UK which highlighted examples of moderation failures on social media platforms, and showed extremist and paedophilic content continuing to be spread on social media platforms.

Responding to the committee’s report, a YouTube spokesperson told us: “We take this issue very seriously. We’ve recently tightened our advertising policies and enforcement; made algorithmic updates; and are expanding our partnerships with specialist organisations working in this field. We’ll continue to work hard to tackle these challenging and complex problems”.

In a statement, Simon Milner, director of policy at Facebook, added:  “Nothing is more important to us than people’s safety on Facebook. That is why we have quick and easy ways for people to report content, so that we can review, and if necessary remove, it from our platform. We agree with the Committee that there is more we can do to disrupt people wanting to spread hate and extremism online. That’s why we are working closely with partners, including experts at Kings College, London, and at the Institute for Strategic Dialogue, to help us improve the effectiveness of our approach. We look forward to engaging with the new Government and parliament on these important issues after the election.”

Nick Pickles, Twitter’s UK head of public policy, provided this statement: “Our Rules clearly stipulate that we do not tolerate hateful conduct and abuse on Twitter. As well as taking action on accounts when they’re reported to us by users, we’ve significantly expanded the scale of our efforts across a number of key areas. From introducing a range of brand new tools to combat abuse, to expanding and retraining our support teams, we’re moving at pace and tracking our progress in real-time. We’re also investing heavily in our technology in order to remove accounts who deliberately misuse our platform for the sole purpose of abusing or harassing others. It’s important to note this is an ongoing process as we listen to the direct feedback of our users and move quickly in the pursuit of our mission to improve Twitter for everyone.”

The committee says it hopes the report will inform the early decisions of the next government — with the UK general election due to take place on June 8 — and feed into “immediate work” by the three social platforms to be more pro-active about tackling extremist content.

Commenting on the publication of the report yesterday, Home Secretary Amber Rudd told the BBC she expected to see “early and effective action” from the tech giants.

UK wants tech firms to build tools to block terrorist content

UK Home Secretary Amber Rudd is holding talks with several major Internet companies today to urge them to be more proactive about tackling the spread of extremist content online. Companies in attendance include Google, Microsoft, Twitter and Facebook, along with some smaller Internet companies.

We’ve contacted the four named companies for comment and will update this story with any response.

Writing in the Telegraph newspaper on Saturday, in the wake of last week’s terror attack in London, Rudd said the UK government will shortly be setting out an updated counterterrorism strategy that will prioritize doing more to tackle radicalisation online.

“Of paramount importance in this strategy will be how we tackle radicalisation online, and provide a counter-narrative to the vile material being spewed out by the likes of Daesh, and extreme Right-wing groups such as National Action, which I made illegal last year,” she wrote. “Each attack confirms again the role that the internet is playing in serving as a conduit, inciting and inspiring violence, and spreading extremist ideology of all kinds.”

Leaning on tech firms to build tools appears to be a key plank of that forthcoming strategy.

A government source told us that Rudd will urge web companies today to use technical solutions to automatically identify terrorist content before it can be widely disseminated.

We also understand the Home Secretary wants the companies to form an industry-wide body to take greater responsibility for tackling extremist content online — which is a slightly odd ask, given Facebook, Microsoft, Twitter and YouTube already announced such a collaboration, in December last year (including creating a shared industry database for speeding up identification and removal of terrorist content).

Perhaps Rudd wants more Internet companies to be part of the collaboration. Or perhaps she wants more effective techniques for identifying and removing content at speed to be developed.

At today’s roundtable we’re told Rudd will also raise concerns about encryption — another technology she criticized in the wake of last week’s attack, arguing that law enforcement agencies must be able to “get into situations like encrypted WhatsApp”.

Such calls are of course hugely controversial, given how encryption is used to safeguard data from exploitation by bad actors — the UK government itself utilizes encryption technology, as you’d expect.

So it remains to be seen whether Rudd’s public call for encrypted data to be accessible to law enforcement agencies constitutes the beginning of a serious clampdown on end-to-end encryption in the UK (NB: the government has already given itself powers to limit companies’ use of the tech, via last year’s Investigatory Powers Act) — or merely a strategy to apply high profile pressure to social media companies to try to strong-arm them into doing more about removing extremist content from their public networks.

We understand the main thrust of today’s discussions will certainly be on the latter issue, with the government seeking greater co-operation from social platforms in combating the spread of terrorist propaganda. Encryption is set to be discussed in further separate discussions, we are told.

In her Telegraph article, Rudd argued that the government cannot fight terrorism without the help of Internet companies, big and small.

“We need the help of social media companies, the Googles, the Twitters, the Facebooks of this world. And the smaller ones, too: platforms such as Telegram and WordPress. We need them to take a more proactive and leading role in tackling the terrorist abuse of their platforms. We need them to develop further technology solutions. We need them to set up an industry-wide forum to address the global threat,” she wrote.

One stark irony of the Brexit process — which got under way in the UK this Wednesday, when the government formally informed the European Union of its intention to leave the bloc — is that security cooperation between the UK and the EU is apparently being used as a bargaining chip, with the UK government warning it may no longer share data with the EU’s central law enforcement agency in future if there is no Brexit deal.

Which does rather throw a sickly cast o’er Rudd’s call for Internet companies to be more proactive in fighting terrorism.

Not all of the companies Rudd called out in her article will be in attendance at today’s meeting. Pavel Durov, co-founder of the messaging app Telegram, confirmed to TechCrunch that it will not be there, for instance. The messaging app has frequently been criticized as a ‘tool of choice’ for terrorists, although Durov has stood firm in his defense of encryption — arguing that users’ right to privacy is more important than “our fear of bad things happening”.

Telegram has today announced the rollout of end-to-end encrypted voice calls to its platform, doubling down on one of Rudd’s technologies of concern (albeit, Telegram’s ‘homebrew’ encryption is not the same as the respected Signal Protocol, used by WhatsApp, and has taken heavy criticism from security researchers).

But on the public propaganda front, Telegram does already act to remove terrorist content being spread via its public channels. Earlier this week it published a blog post defending the role of end-to-end encryption in safeguarding people’s privacy and freedom of speech, and accusing the mass media of being the primary conduit through which terrorist propaganda spreads.

“Terrorist channels still pop up [on Telegram] — just as they do on other networks — but they are reported almost immediately and are shut down within hours, well before they can get any traction,” it added.

Meanwhile, in a biannual Transparency Report published last week, Twitter revealed it had suspended a total of 636,248 accounts between August 1, 2015 and December 31, 2016 for violations related to the promotion of terrorism — saying the majority of the accounts (74 percent) were identified by its own “internal, proprietary spam-fighting tools”, i.e. rather than via user reports.

Twitter’s report underlines the scale of the challenge posed by extremist content spread via social platforms, given the volume of content uploads involved — which are orders of magnitude greater on more popular social platforms like Facebook and YouTube, meaning there’s more material to sift through to locate and eject any extremist material.

In February, Facebook CEO Mark Zuckerberg also discussed the issue of terrorist content online, and specifically his hope that AI will play a larger role in future to tackle this challenge, although he also cautioned that “it will take many years to fully develop these systems”.

“Right now, we’re starting to explore ways to use AI to tell the difference between news stories about terrorism and actual terrorist propaganda so we can quickly remove anyone trying to use our services to recruit for a terrorist organization. This is technically difficult as it requires building AI that can read and understand news, but we need to work on this to help fight terrorism worldwide,” he wrote then.

In an earlier draft of the open letter, Zuckerberg suggested AI could even be used to identify terrorists plotting attacks via private channels — likely via analysis of account behavior patterns, according to a source, not by backdooring encryption (the company already uses machine learning for fighting spam and malware on the end-to-end encrypted WhatsApp, for example).

His edited comment on private channels suggests there are metadata-focused alternative techniques that governments could pursue to glean intel from within encrypted apps without needing to demand access to the content itself — albeit, political pressure may well be on the social platforms themselves to be doing the leg work there.
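What such metadata-focused analysis might look like can be sketched without any access to message content. The example below is purely hypothetical — the features (sending burstiness, recipient fan-out) and the scoring formula are invented for illustration — but it shows the kind of behavioural signal a platform can compute even when the messages themselves are end-to-end encrypted.

```python
# Hypothetical metadata-only scoring: no message content is read, only
# timing and fan-out, which remain visible to the platform even under
# end-to-end encryption. Features and weights are invented for the sketch.

def behaviour_score(events):
    """events: list of (timestamp_seconds, n_recipients) per message sent."""
    if len(events) < 2:
        return 0.0
    times = sorted(t for t, _ in events)
    gaps = [b - a for a, b in zip(times, times[1:])]
    avg_gap = sum(gaps) / len(gaps)
    avg_fanout = sum(n for _, n in events) / len(events)
    # High fan-out plus rapid-fire sending pushes the score up.
    burstiness = 1.0 / (1.0 + avg_gap)
    return burstiness * avg_fanout

# A broadcast-style account: many recipients, messages seconds apart.
broadcaster = [(0, 200), (1, 180), (2, 220), (3, 210)]
# A typical account: one or two recipients, minutes between messages.
normal = [(0, 1), (300, 2), (900, 1)]

print(behaviour_score(broadcaster) > behaviour_score(normal))
```

Scores like these would only ever surface accounts for human review — which is precisely why the political pressure lands on the platforms, since only they hold the metadata.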

Rudd is clearly pushing Internet companies to do more and do it quicker when it comes to removing extremist content. So Zuckerberg’s timeframe of a potential AI fix “many years” ahead likely won’t wash. Political timeframes tend to be much tighter.

She’s not the only politician stepping up the rhetoric either. Social media giants are facing growing pressure in Germany, which earlier this month proposed a new law for social media platforms to deal with hate speech complaints. The country previously secured agreements from the companies to remove illegal content within 24 hours of a complaint being made, but the government has accused Facebook and Twitter especially of not taking user complaints seriously enough — hence, it says, it’s going down a legislative route now.

A report in the Telegraph last week suggested the UK government is also considering a new law to prosecute Internet companies if terrorist content is not immediately taken down when reported. Although ministers were apparently questioning how such a law could be enforced when companies are based overseas, as indeed most of the Internet companies in question are.

Another possibility: the Home Office was selectively leaking a threat of legislation ahead of today’s meeting, to try to encourage Internet companies to come up with alternative fixes.

Yesterday, digital and human rights groups including Privacy International, the Open Rights Group, Liberty and Human Rights Watch called on the UK government to be “transparent” and “open” about the discussions it’s having with Internet companies. “Private, informal agreements are not consistent with open, democratic governance,” they wrote.

“Government requests directed to tech companies to take down content is de facto state censorship. Some requests may be entirely legitimate but the sheer volumes make us highly concerned about their validity and the accountability of the processes.”

“We need assurances that only illegal material will be sought out by government officials and taken down by tech companies,” they added. “Transparency and judicial oversight are needed over government takedown requests.”

The group also called out Rudd for not publicly referencing existing powers at the government’s disposal, and expressed concern that any “technological limitations to encryption” they seek could have damaging implications for citizens’ “personal security”.

They wrote:

We also note that Ms Rudd may seek to use Technical Capability Notices (TCNs) to enforce changes [to encryption]; and these would require secrecy. We are therefore surprised that public comments by Ms Rudd have not referenced her existing powers.

We do not believe that the TCN process is robust enough in any case, nor that it should be applied to non-UK providers, and are concerned about the precedent that may be set by companies complying with a government over requests like these.

The Home Office did not respond to a request for comment on the group’s open letter, nor respond to specific questions about its discussions today with Internet companies, but a government source told us that the meeting is private.

Earlier this week Rudd faced ridicule on social media, and suggestions from tech industry figures that she does not fully understand the workings of the technologies she’s calling out, following comments made during a BBC interview on Sunday — in which she said people in the technology industry understand “the necessary hashtags to stop this stuff even being put up”.

The more likely explanation is that the undoubtedly well-briefed Home Secretary is playing politics in an attempt to gain an edge with a group of very powerful, overseas-based Internet giants.