
Twitter claims more progress on squeezing terrorist content

Twitter has put out its latest Transparency Report providing an update on how many terrorist accounts it has suspended on its platform — with a cumulative 1.2 million+ suspensions since August 2015.

During the reporting period of July 1, 2017 through December 31, 2017 — for this, Twitter’s 12th Transparency Report — the company says a total of 274,460 accounts were permanently suspended for violations related to the promotion of terrorism.

“This is down 8.4% from the volume shared in the previous reporting period and is the second consecutive reporting period in which we’ve seen a drop in the number of accounts being suspended for this reason,” it writes. “We continue to see the positive, significant impact of years of hard work making our site an undesirable place for those seeking to promote terrorism, resulting in this type of activity increasingly shifting away from Twitter.”

Six months ago the company claimed big wins in squashing terrorist activity on its platform — attributing drops in reports of pro-terrorism accounts then to the success of in-house tech tools in driving terrorist activity off its platform (and perhaps inevitably rerouting it towards alternative platforms — Telegram being chief among them, according to experts on online extremism).

At that time Twitter reported a total of 299,649 pro-terrorism accounts had been suspended — which it said was a 20 per cent drop on figures reported for July through December 2016.

So the size of the drops is also shrinking. Though Twitter suggests that’s because it’s winning the battle to discourage terrorists from trying in the first place.

For its latest reporting period, ending December 2017, Twitter says 93% of the accounts were flagged by its internal tech tools — with 74% of those also suspended before their first tweet, i.e. before they’d been able to spread any terrorist propaganda.

Which means that around a quarter of the pro-terrorist accounts flagged by those tools still managed to get out at least one terror tweet.

This proportion is essentially unchanged since the last report period (when Twitter reported suspending 75% before their first tweet) — so whatever tools it’s using to automate terror account identification and blocking appear to be in a steady state, rather than gaining in ability to pre-filter terrorist content.
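For anyone wanting to check the arithmetic, here’s a quick back-of-the-envelope reading of the figures Twitter reported (the derived per-account split is our own calculation, not Twitter’s):

```python
# Rough arithmetic on Twitter's reported figures for July-December 2017.
# Inputs (274,460 suspensions; 93% flagged internally; 74% of those caught
# pre-tweet) come from the Transparency Report; the derived split is ours.

total_suspended = 274_460
flagged_internally = round(0.93 * total_suspended)   # surfaced by Twitter's own tools
caught_pre_tweet = round(0.74 * flagged_internally)  # suspended before their first tweet
tweeted_at_least_once = flagged_internally - caught_pre_tweet

print(f"Flagged by internal tools:    {flagged_internally:,}")
print(f"Suspended before first tweet: {caught_pre_tweet:,}")
print(f"Got at least one tweet out:   {tweeted_at_least_once:,} "
      f"(~{tweeted_at_least_once / flagged_internally:.0%} of flagged accounts)")
```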

Twitter also specifies that government reports of violations related to the promotion of terrorism represent less than 0.2% of all suspensions in the most recent reporting period — or 597 to be exact.

As with its prior transparency report, a far larger number of Twitter accounts are being reported by governments for “abusive behavior” — which refers to long-standing problems on Twitter’s platform such as hate speech, racism, misogyny and trolling.

And in December a Twitter policy staffer was roasted by UK MPs during a select committee session after the company was again shown failing to remove violent, threatening and racist tweets — tweets which committee staffers had reported months earlier.

Twitter’s latest Transparency Report specifies that governments reported 6,254 Twitter accounts for abusive behavior — yet the company only actioned a quarter of these reports.

That’s still up on the prior reporting period, though, when it reported actioning a paltry 12% of these types of reports.

The issue of abuse and hate speech on online platforms generally has rocketed up the political agenda in recent years, especially in Europe — where Germany now has a tough new law to regulate takedowns.

Platforms’ content moderation policies certainly remain a bone of contention for governments and lawmakers.

Last month the European Commission set out a new rule of thumb for social media platforms — saying it wants them to take down illegal content within an hour of it being reported.

This is not legislation yet, but the threat of EU-wide laws being drafted to regulate content takedowns is being kept on the table to encourage platforms to improve their performance voluntarily.

Where terrorist content specifically is concerned, the Commission has also been pushing for increased use by tech firms of what it calls “proactive measures”, including “automated detection”.

And in February the UK government also revealed it had commissioned a local AI firm to build an extremist content blocking tool — saying it could decide to force companies to use it.

So political pressure remains especially high on that front.

Returning to abusive content, Twitter’s report specifies that the majority of the tweets and accounts reported to it by governments which it did remove violated its rules in the following areas: impersonation (66%), harassment (16%), and hateful conduct (12%).

This is an interesting shift in the mix from the last reporting period, when Twitter said content was removed for: harassment (37%), hateful conduct (35%), and impersonation (13%).

It’s difficult to interpret exactly what that development might mean. One possibility is that impersonation could cover disinformation agents, such as Kremlin bots, which Twitter has been suspending in recent months as part of investigations into election interference — an issue that’s been shown to be a problem across social media, from Facebook to Tumblr.

Governments may also have become more focused on reporting accounts to Twitter that they believe are fronts for foreign agents spreading false information to try to meddle with democratic processes.

In January, for example, the UK government announced it would be setting up a civil service unit to combat state-led disinformation campaigns.

And removing an account that’s been identified as a fake — with the help of government intelligence — is perhaps easier for Twitter than judging whether a particular piece of robust speech might have crossed the line into harassment or hate speech.

Judging the health of conversations on its platform is also something the company recently asked outsiders to help it with. So it doesn’t appear overly confident in making those kinds of judgement calls.

Unilever warns social media to clean up “toxic” content


Consumer goods giant Unilever, a maker of branded soaps, foodstuffs and personal care items and also one of the world’s biggest online advertisers, has fired a warning shot across the bows of social media giants by threatening to pull ads from digital platforms if they don’t do more to mitigate the spread of what it dubs “toxic” online content — be it fake news, terrorism or child exploitation.

“It is critical that our brands remain not only in a safe environment, but a suitable one,” CMO Keith Weed is expected to say at the annual Interactive Advertising Bureau conference in California today, according to extracts from the speech provided to us ahead of delivery. “Unilever, as a trusted advertiser, do not want to advertise on platforms which do not make a positive contribution to society.”

The remarks echo comments made last month by UK prime minister Theresa May who singled out social media firms for acute censure, saying they “simply cannot stand by while their platforms are used to facilitate child abuse, modern slavery or the spreading of terrorist or extremist content”.

Unilever’s Weed is expected to argue that consumers are worried about “fraudulent practice, fake news, and Russians influencing the U.S. election”, and are sensitive to the brands they buy becoming tainted by association with ad placement alongside awful stuff like terrorist propaganda and content that exploits children.

“2018 is either the year of techlash, where the world turns on the tech giants — and we have seen some of this already — or the year of trust. The year where we collectively rebuild trust back in our systems and our society,” he will argue.

Online ad giants Facebook and Google have increasingly found themselves on the hook for enabling the spread of socially divisive, offensive and at times out-and-out illegal content via their platforms — in no small part as a consequence of the popularity of their content-sharing hubs.

While the Internet is filled with all sorts of awful stuff, in its darkest corners, the mainstream reach of platforms like Facebook and YouTube puts them squarely in the political firing line for all sorts of content issues — from political disinformation to socially divisive hate speech.

The fact Facebook and Google are also the chief financial beneficiaries of online ad spending — together accounting for around 60 per cent of the US market, for example — makes it difficult for them to dodge the charge that their businesses directly benefit from divisive and exploitative content — all the way from clickbait to fake news to full-blown online extremism.

Facebook’s 2016 dismissal of concerns about fake news impacting democracy as a “pretty crazy idea” has certainly not aged well. And CEO Mark Zuckerberg has since admitted his platform is broken and made it his personal goal for 2018 to “fix Facebook”.

Both companies faced a growing backlash last year — with a number of advertisers and brands pulling ads from YouTube over concerns about the types of content that their marketing messages were being served alongside, thanks to the programmatic (i.e. automatic) nature of the ad placement. The platform also took renewed flak for the type of content it routinely serves up to kids.

Facebook, meanwhile, got a political grilling over hosting Kremlin disinformation — though Russia’s online dis-ops clearly sprawl across multiple tech platforms. But again, Facebook’s massive reach gifts it a greater share of blame — as the most effective channel (at least that we currently know of) for political disinformation muck spreading. (Last fall, for example, it was forced to admit that ~80,000 pieces of Russian-backed content may have been viewed by 126M Facebook users during the 2016 US election.)

Facebook has been working on adding ad transparency tools to its platform — though it remains to be seen whether it can do enough to be judged to be effectively self regulating. It doesn’t have the greatest record on that front, frankly speaking.

Last year Google also responded with alacrity to boycotts by its own advertisers, saying it would expand controls for brands to give them more say over where their ads appeared on YouTube, and by taking “a tougher stance on hateful, offensive and derogatory content” — including demonetizing more types of videos. It has also made a policy change on known terrorists’ content, though it has continued to disappoint politicians demanding better moderation.

As part of its attempts to de-risk the user-generated content its business relies on, and avoid further spooking already spooked advertisers, Google even recently began removing YouTube videos of the so-called ‘Tide Pod Challenge’ — i.e. where people film themselves trying to consume laundry detergent — videos which it had previously left up, despite having a policy against content that encourages dangerous activities.

Incidentally, Tide Pods aren’t a Unilever brand, but their maker, Procter & Gamble, also roasted social media firms last year — calling for them to “grow up” and slamming the “non-traditional media supply chain” for being “murky at best, and fraudulent at worst”.

Unilever’s Weed also takes aim at ad fraud in his speech, noting how the company has partnered with IBM to pilot new blockchain tech for advertising — which he touts as having “the potential to drastically reduce advertising fraud by recording how media is purchased, delivered and interacted with by target audiences, providing reliable measurement metrics”. (Can blockchain really fix click fraud? That Unilever is actively entertaining the idea arguably shows how far trust levels in the digital ad space have fallen.)
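Neither Unilever nor IBM has published technical details of the pilot, but the general idea Weed describes — an append-only, tamper-evident record of how ad impressions are bought and delivered — can be sketched in a few lines. The toy example below (field names and structure are our own assumptions, not the actual system) chains each impression record to the previous one by hash, so any after-the-fact edit to the delivery log breaks verification:

```python
import hashlib
import json
import time

# Toy sketch of a tamper-evident ad-delivery ledger: each impression record
# is chained to the previous one by hash, so later edits break verification.
# Illustrative only; this is not the Unilever/IBM system and the field names
# are hypothetical.

def _hash(record: dict) -> str:
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class AdLedger:
    def __init__(self):
        self.chain = []

    def record_impression(self, campaign_id: str, publisher: str, viewer_segment: str):
        entry = {
            "campaign_id": campaign_id,
            "publisher": publisher,
            "viewer_segment": viewer_segment,
            "timestamp": time.time(),
            "prev_hash": self.chain[-1]["hash"] if self.chain else None,
        }
        entry["hash"] = _hash(entry)
        self.chain.append(entry)

    def verify(self) -> bool:
        # Recompute every hash and check the links; returns False if any
        # record was altered after the fact (e.g. inflated impression counts).
        for i, entry in enumerate(self.chain):
            body = {k: v for k, v in entry.items() if k != "hash"}
            if entry["hash"] != _hash(body):
                return False
            if i > 0 and entry["prev_hash"] != self.chain[i - 1]["hash"]:
                return False
        return True

ledger = AdLedger()
ledger.record_impression("summer_campaign", "example_publisher", "18-34")
print(ledger.verify())  # True until any stored record is tampered with
```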

But the main message is tilted at social media giants’ need to “build social responsibility” — and invest in trust and transparency to avoid damaging the precious substance known as ‘brand trust’ which the tech giants’ revenue-generating digital advertisers depend on.

Though, blockchain experiments aside, Unilever seems rather less publicly clear on exactly what it thinks tech giants should do to vanquish the toxic content their business models have (inadvertently or otherwise) been financially incentivizing.

Governments in Europe have been leaning on social media giants to accelerate development of tech tools that can automatically flag and even remove problem content (such as hate speech) before it has a chance to spread — though that approach is hardly uncontroversial, and critics argue it whiffs of censorship.

Germany has even passed a hate speech social media law, introducing fines of up to €50M for platforms that fail to promptly remove illegal content.

Earlier this month, Germany’s national competition regulator also announced a probe of the online ad sector — citing concerns that a lack of transparency could be skewing market conditions.

Weed’s message to social media can be summed up as: This is a problem we’ll work with you to fix, but you need to agree to work on fixing it. “As a brand-led business, Unilever needs its consumers to have trust in our brands,” he’ll say. “We can’t do anything to damage that trust — including the choice of channels and platforms we use. So, 2018 is the year when social media must win trust back.”

Unilever is making three specific “commitments” relating to its digital media supply chain:

  1. that it will not invest in “platforms or environments that do not protect our children or which create division in society, and promote anger or hate”, further emphasizing: “We will prioritise investing only in responsible platforms that are committed to creating a positive impact in society”
  2. that it is committed to creating “responsible content” — with an initial focus on tackling gender stereotypes in advertising
  3. that it will push for what it dubs “responsible infrastructure”, saying it will only partner with organizations “which are committed to creating better digital infrastructure, such as aligning around one measurement system and improving the consumer experience”

So, while the company is not yet issuing an explicit ultimatum to Facebook and Google, it’s certainly putting them on notice that the political pressure they’ve been facing could absolutely turn into a major commercial headache too, if they don’t take tackling online muck spreading seriously.

tl;dr massive, mainstream success has a flip side. And boy is big tech going to feel it this year.

Facebook and Google both declined to comment on Unilever’s intervention.

Update: A Facebook spokesperson offered comment following publication, saying, “We fully support Unilever’s commitments and are working closely with them.”

Featured Image: Bryce Durbin/TechCrunch

Telegram and social media giants spanked in UK PM’s Davos speech


Social media giants have once again been singled out for a high-profile public spanking over social responsibility and illegal online content in Europe.

Giving a speech at the World Economic Forum in Davos, Switzerland this afternoon, UK prime minister Theresa May said: “Technology companies still need to do more in stepping up to their responsibilities for dealing with harmful and illegal online activity.

“Companies simply cannot stand by while their platforms are used to facilitate child abuse, modern slavery or the spreading of terrorist or extremist content.”

May has been banging this particular drum since becoming leader of her party (and the UK) in 2016. Last year she pressed her case to G7 leaders, and was today touting “progress” on international co-operation between governments and tech firms to “move further and faster in reducing the time it takes to remove terrorist content online and increase significantly their efforts to stop it being uploaded in the first place”.

But today she said more effort is needed.

“We need to go further, so that ultimately this content is removed automatically,” she told a Davos audience that included other world leaders and government ministers. “These companies have some of the best brains in the world. They must focus their brightest and best on meeting these fundamental social responsibilities.”

The European Commission has also been pushing tech firms to use automatic detection and filtering systems to pro-actively detect, remove and disable illegal online content — and earlier this month it warned it could seek to legislate at an EU level on the issue if companies aren’t deemed to be doing enough. Though critics of the EC’s trajectory here have warned it poses risks to freedom of speech and expression online.

On social media hate speech, at least, Facebook, Google and Twitter got an EC thumbs up for making “steady progress” in the Commission’s third review since the introduction of a voluntary Code of Conduct in 2016. And it now looks less likely that the EC will push to legislate on that (as Germany already has).

May saved her most pointed naming and shaming for a single tech company: Telegram, implying the messaging app has become the app of choice for “terrorists and pedophiles”.

“We also need cross-industry responses because smaller platforms can quickly become home to criminals and terrorists,” she said. “We have seen that happen with Telegram, and we need to see more co-operation from smaller platforms like this. No one wants to be known as the terrorists’ platform. Or the first choice app for pedophiles.”

We reached out to Telegram founder Pavel Durov for comment — who, according to his Twitter, is also attending Davos — but at the time of writing he had not responded.

Ahead of May’s speech he did retweet a link to a blog post from last year, denouncing governments for seeking to undermine encryption and pointing out that terrorists can always build their own encrypted apps to circumvent government attempts to control apps. (He also included a new remark — tweeting: “Some politicians tend to blame tools for actions one can perform with these tools.”)

May went on to urge governments to look closely at the laws around social media companies and even consider whether there’s a case for new bespoke rules for regulating content on online platforms. Though it’s clear she has not yet made any decisions on that front.

“As governments it is also right that we look at the legal liability that social media companies have for the content shared on their sites,” she said. “The status quo is increasingly unsustainable as it becomes clear these platforms are no longer just passive hosts. But applying the existing standards of liability for publishers is not straightforward so we need to consider what is most appropriate for the modern economy.

“We are already working with our European and international partners, as well as the businesses themselves, to understand how we can make the existing frameworks and definitions work better and to assess in particular whether there is a case for developing a new definition for these platforms. We will continue to do so.”

She also urged investors and shareholders to find their social consciences and apply pressure to tech giants to take more societal responsibility in how they operate — raising the example of a pension and activist investment fund doing just that earlier this month, applying pressure on Facebook and Twitter over issues such as sexual harassment, fake news, hate speech and other forms of abuse.

“Investors can make a big difference here by ensuring trust and safety issues are being properly considered and I urge them to do so,” she said.

She also cited a recent survey conducted by PR firm Edelman — which suggests social media platforms are facing a global consumer trust crisis.

“The business model of a company is not sustainable if it does not command public support and consent,” she added.

Europe keeps up the pressure on social media over illegal content takedowns


The European Union’s executive body is continuing to pressure social media firms to get better at removing illegal content from their platforms before it has a chance to spread further online.

Currently there is a voluntary Code of Conduct on countering illegal online hate speech across the European Union. But the Commission has previously indicated it could seek to legislate if it feels companies aren’t doing enough.

After attending a meeting on the topic today, Andrus Ansip, the European Commissioner for the Digital Single Market, tweeted to say the main areas tech firms need to be addressing are that “takedown should be fast, reliable, effective; pro-activity to detect, remove and disable content using automatic detection and filtering; adequate safeguards and counter notice”.

While the notion of tech giants effectively removing illegal content might be hard to object to in principle, such a laundry list of requirements underlines the complexities involved in pushing commercial businesses to execute context-based speech policing decisions in a hurry.

For example, a new social media hate speech law in Germany, which as of this month is being actively enforced, has already drawn criticism and calls for its abolition after Twitter blocked a satirical magazine that had parodied anti-Muslim comments made by the far-right Alternative for Germany political party.

Another problematic aspect to the Commission’s push is it appears keen to bundle up a very wide spectrum of ‘illegal content’ into the same response category — apparently aiming to conflate issues as diverse as hate speech, terrorism, child exploitation and copyrighted content.

In September the EC put out a set of “guidelines and principles” which it said were aimed at pushing tech firms to be more pro-active about takedowns of illegal content, specifically urging them to build tools to automate the flagging of such content and to prevent its re-upload. But the measures were quickly criticized for being overly vague and posing a risk to freedom of expression online.

It’s not clear what kind of “adequate safeguards” Ansip is implying could be baked into the auto-detection and filtering systems the EC wants (we’ve asked and will update this story with any response). But there’s a clear risk that an over-emphasis on pushing tech giants to automate takedowns could result in censorship of controversial content on mainstream platforms.

There’s no public sign the Commission has picked up on these specific criticisms, with its latest missive flagging up both “violent and extremist content” and “breaches of intellectual property rights” as targets.

Last fall the Commission said it would monitor tech giants’ progress vis-a-vis content takedowns over the next six months to decide whether to take additional measures — such as drafting legislation. Though it has also previously lauded progress being made.

In a statement yesterday, ahead of today’s meeting, the EC kept up the pressure on tech firms — calling for “more efforts and progress”:

The Commission is counting on online platforms to step up and speed up their efforts to tackle these threats quickly and comprehensively, including closer cooperation with national and enforcement authorities, increased sharing of know-how between online players and further action against the reappearance of illegal content.

We will continue to promote cooperation with social media companies to detect and remove terrorist and other illegal content online, and if necessary, propose legislation to complement the existing regulatory framework.

In the face of rising political pressure and a series of content-related scandals, both Google and Facebook last year announced they would be beefing up their content moderation teams by thousands of extra staff apiece.

Featured Image: nevodka/iStock Editorial

YouTube: More AI can fix AI-generated ‘bubbles of hate’


Facebook, YouTube and Twitter faced another online hate crime grilling today by UK parliamentarians visibly frustrated at their continued failures to apply their own community guidelines and take down reported hate speech.

The UK government has this year pushed to raise online radicalization and extremist content as a G7 priority — and has been pushing for takedown timeframes for extremist content to shrink radically.

The broader issue of online hate speech has continued to be a hot button political topic, especially in Europe — with Germany passing a social media hate speech law in October, and the European Union’s executive body pushing for social media firms to automate the flagging of illegal content to accelerate takedowns.

In May, the UK’s Home Affairs Committee also urged the government to consider a regime of fines for social media content moderation failures — accusing tech giants of taking a “laissez-faire approach” to moderating hate speech content on their platforms.

It revisited their performance in another public evidence session today.

“What it is that we have to do to get you to take it down?”

Addressing Twitter, Home Affairs Committee chair Yvette Cooper said her staff had reported a series of violent, threatening and racist tweets via the platform’s standard reporting systems in August — many of which still had not been removed, months on.

She did not try to hide her exasperation as she went on to question why certain antisemitic tweets previously raised by the committee during an earlier public evidence session had also still not been removed — despite Twitter’s Nick Pickles agreeing at the time that they broke its community standards.

“I’m kind of wondering what it is we have to do,” said Cooper. “We sat in this committee in a public hearing and raised a clearly vile antisemitic tweet with your organization… but it is still there on the platform — what it is that we have to do to get you to take it down?”

Twitter’s EMEA VP for public policy and communications, Sinead McSweeney, who was fielding questions on behalf of the company this time, agreed that the tweets in question violated Twitter’s hate speech rules but said she was unable to provide an explanation for why they had not been taken down.

She noted the company has newly tightened its rules on hate speech — and said specifically that it has raised the priority of bystander reports, whereas previously it would have placed more priority on a report if the person who was the target of the hate was also the one reporting it.

“We haven’t been good enough at this,” she said. “Not only we haven’t been good enough at actioning, but we haven’t been good enough at telling people when we have actioned. And that is something that — particularly over the last six months — we have worked very hard to change… so you will definitely see people getting much, much more transparent communication at the individual level and much, much more action.”

“We are now taking actions against 10 times more accounts than we did in the past,” she added.

Cooper then turned her fire on Facebook, questioning the social media giant’s public policy director, Simon Milner, about Facebook pages containing violent anti-Islamic imagery, including one that appeared to be encouraging the bombing of Mecca, and pages set up to share photos of schoolgirls for the purposes of sexual gratification.

He claimed Facebook has fixed the problem of “lurid” comments being able to be posted on otherwise innocent photographs of children shared on its platform — something YouTube has also recently been called out for — telling the committee: “That was a fundamental problem in our review process that has now been fixed.”

Cooper then asked whether the company is living up to its own community standards — which Milner agreed do not permit people or organizations that promote hate against protected groups to have a presence on its platform. “Do you think that you are strong enough on Islamophobic organizations and groups and individuals?” she asked.

Milner avoided answering Cooper’s general question, instead narrowing his response to the specific individual page the committee had flagged — saying it was “not obviously run by a group” and that Facebook had taken down the specific violent image highlighted by the committee but not the page itself.

“The content is disturbing but it is very much focused on the religion of Islam, not on Muslims,” he added.

This week a decision by Twitter to close the accounts of far right group Britain First has swiveled a critical spotlight on Facebook — as the company continues to host the same group’s page, apparently preferring to selectively remove individual posts even though Facebook’s community standards forbid hate groups if they target people with protected characteristics (such as religion, race and ethnicity).

Cooper appeared to miss an opportunity to press Milner on the specific point — and earlier today the company declined to respond when we asked why it has not banned Britain First.

Giving an update earlier in the session, Milner told the committee that Facebook now employs over 7,500 people to review content — having announced a 3,000 bump in headcount earlier this year — and said that overall it has “around 10,000 people working in safety and security” — a figure he said it will be doubling by the end of 2018.

Areas where he said Facebook has made the most progress vis-a-vis content moderation are around terrorism, and nudity and pornography (which he noted is not permitted on the platform).

Google’s Nicklas Berild Lundblad, EMEA VP for public policy, was also attending the session to field questions about YouTube — and Cooper initially raised the issue of racist comments not being taken down despite being reported.

He said the company is hoping to be able to use AI to automatically pick up these types of comments. “One of the things that we want to get to is a situation in which we can actively use machines in order to scan comments for attacks like these and remove them,” he said.

Cooper pressed him on why certain comments reported to it by the committee had still not been removed — and he suggested reviewers might still be looking at a minority of the comments in question.

She flagged a comment calling for an individual to be “put down” — asking why that specifically had not been removed. Lundblad agreed it appeared to be in violation of YouTube’s guidelines but appeared unable to provide an explanation for why it was still there.

Cooper then asked why a video made by the neo-nazi group National Action — which is proscribed as a terrorist group and banned in the UK — had kept reappearing on YouTube after it had been reported and taken down, even after the committee raised the issue with senior company executives.

Eventually, after “about eight months” of the video being repeatedly reposted on different accounts, she said it finally appears to have gone.

But she contrasted this sluggish response with the speed and alacrity with which Google removes copyrighted content from YouTube. “Why did it take that much effort, and that long just to get one video removed?” she asked.

“I can understand that’s disappointing,” responded Lundblad. “They’re sometimes manipulated so you have to figure out how they manipulated them to take the new versions down.

“And we’re now looking at removing them faster and faster. We’ve removed 135 of these videos, some of them within a few hours and with no more than 5 views, and we’re committed to making sure this improves.”

He also claimed the rollout of machine learning technology has helped YouTube improve its takedown performance, saying: “I think that we will be closing that gap with the help of machines and I’m happy to review this in due time.”

“I really am sorry about the individual example,” he added.

Pressed again on why such a discrepancy existed between the speed of YouTube copyright takedowns and terrorist takedowns, he responded: “I think that we’ve seen a sea change this year” — flagging the committee’s contribution to raising the profile of the problem and saying that as a result of increased political pressure Google has recently expanded its use of machine learning to additional types of content takedowns.

In June, facing rising political pressure, the company announced it would be ramping up AI efforts to try to speed up the process of identifying extremist content on YouTube.

After Lundblad’s remarks, Cooper then pointed out that the same video still remains online on Facebook and Twitter — querying why all three companies haven’t been sharing data about this type of proscribed content, despite their previously announced counterterrorism data-sharing partnership.

Milner said the hash database they jointly contribute to is currently limited to just two global terrorism organizations: ISIS and Al-Qaeda, so would not therefore be picking up content produced by banned neo-nazi or far right extremist groups.
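The shared industry database Milner refers to works on hashes — digital fingerprints of known terrorist images and videos — rather than the media itself, so each company can check uploads against material flagged by the others. A minimal sketch of that matching step might look like the following (in practice the consortium has said it uses fingerprints designed to survive re-encoding and cropping; the plain file hash below is purely for brevity, and the names are our own):

```python
import hashlib
from pathlib import Path

# Minimal sketch of checking an upload against a shared hash list of known
# terrorist content. Production systems use perceptual hashes that tolerate
# re-encoding and cropping; a plain SHA-256 file hash is used here only to
# keep the example short. Names and structure are illustrative assumptions.

# Fingerprints contributed by participating companies would be loaded here.
SHARED_HASH_DB: set = set()

def fingerprint(path: Path) -> str:
    """Digital fingerprint of a media file (here: SHA-256 of its bytes)."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def should_block(upload: Path) -> bool:
    """True if the uploaded file matches a known-bad fingerprint."""
    return fingerprint(upload) in SHARED_HASH_DB
```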

Pressed again by Cooper reiterating that National Action is a banned group in the UK, Milner said Facebook has to-date focused its counterterrorism takedown efforts on content produced by ISIS and Al-Qaeda, claiming they are “the most extreme purveyors of this kind of viral approach to distributing their propaganda”.

“That’s why we’ve addressed them first and foremost,” he added. “It doesn’t mean we’re going to stop there but there is a difference between the kind of content they’re producing which is more often clearly illegal.”

“It’s incomprehensible that you wouldn’t be sharing this about other forms of violent extremism and terrorism as well as ISIS and Islamist extremism,” responded Cooper.

“You’re actually actively recommending… racist material”

She then moved on to interrogate the companies on the problem of ‘algorithmic extremism’ — saying that after her searches for the National Action video her YouTube recommendations included a series of far right and racist videos and channels.

“Why am I getting recommendations from YouTube for some pretty horrible organizations?” she asked.

Lundblad agreed YouTube’s recommendation engine “clearly becomes a problem” in certain types of offensive content scenarios — “where you don’t want people to end up in a bubble of hate, for example”. But he said YouTube is working on ways to stop certain videos from being surfaced via its recommendation engine.

“One of the things that we are doing… is we’re trying to find states in which videos will have no recommendations and not impact recommendations at all — so we’re limiting the features,” he said. “Which means that those videos will not have recommendations, they will be behind an interstitial, they will not have any comments etc.

“Our way to then address that is to achieve the scale we need, make sure we use machine learning, identify videos like this, limit their features and make sure that they don’t turn up in the recommendations as well.”

So why hasn’t YouTube already put a channel like Red Ice TV into limited state yet, asked Cooper, naming one of the channels the recommendation engine had been pushing her to view? “It’s not simply that you haven’t removed it… You’re actually actively recommending it to me — you are actually actively recommending what is effectively racist material [to] people.”

Lundblad said he would ask that the channel be looked at — and get back to the committee with a “good and solid response”.

“As I said we are looking at how we can scale those new policies we have out across areas like hate speech and racism and we’re six months into this and we’re not quite there yet,” he added.

Cooper then pointed out that the same problem of extremist-promoting recommendation engines exists with Twitter, describing how after she had viewed a tweet by a right wing newspaper columnist she had then been recommended the account of the leader of a UK far right hate group.

“This is the point at which there’s a tension between how much you use technology to find bad content or flag bad content and how much you use it to make the user experience different,” said McSweeney in response to this line of questioning.

“These are the balances and the risks and the decisions we have to take. Increasingly… we are looking at how do we label certain types of content that they are never recommended but the reality is that the vast majority of a user’s experience on Twitter is something that they control themselves. They control it through who they follow and what they search for.”

Noting that the problem affects all three platforms, Cooper then directly accused the companies of operating radicalizing algorithmic information hierarchies — “because your algorithms are doing that grooming and that radicalization”, while the companies in charge of the technology are not stopping it.

Milner said he disagreed with her assessment of what the technology is doing but agreed there’s a shared problem of “how do we address that person who may be going down a channel… leading to them to be radicalized”.

He also claimed Facebook sees “lots of examples of the opposite happening” and of people coming online and encountering “lots of positive and encouraging content”.

Lundblad also responded to flag up a YouTube counterspeech initiative — called Redirect, and currently only running in the UK — which aims to catch people who are searching for extremist messages and redirect them to other content debunking the radicalizing narratives.

“It’s first being used for anti-radicalization work and the idea now is to catch people who are in the funnel of vulnerability, break that and take them to counterspeech that will debunk the myths of the Caliphate for example,” he said.

Also responding to the accusation, McSweeney argued for “building strength in the audience as much as blocking those messages from coming”.

In a series of tweets after the committee session, Cooper expressed continued discontent at the companies’ performance tackling online hate speech.

“Still not doing enough on extremism & hate crime. Increase in staff & action since we last saw them in Feb is good but still too many serious examples where they haven’t acted,” she wrote.

“Disturbed that if you click on far right extremist @YouTube videos then @YouTube recommends many more — their technology encourages people to get sucked in, they are supporting radicalisation.

“Committee challenged them on whether same is happening for Jihadi extremism. This is all too dangerous to ignore.”

“Social media companies are some of the biggest & richest in the world, they have huge power & reach. They can and must do more,” she added.

None of the companies responded to a request for comment on Cooper’s criticism that they are still failing to do enough to tackle online hate crime.

Featured Image: Atomic Imagery/Getty Images