UK watchdog wants disclosure rules for political ads on social media

The UK’s data protection agency will push for increased transparency around how personal data flows between digital platforms, so that people who are targeted with political advertising can understand why and how it is happening.

Information commissioner Elizabeth Denham said visibility into ad targeting systems is needed so that people can exercise their rights — such as withdrawing consent to their personal data being processed, should they wish.

“Data protection is not a back-room, back-office issue anymore,” she said yesterday. “It is right at the centre of these debates about our democracy, the impact of social media on our lives and the need for these companies to step up and take their responsibilities seriously.”

“What I am going to suggest is that there needs to be transparency for the people who are receiving that message, so they can understand how their data was matched up and used to be the audience for the receipt of that message. That is where people are asking for more transparency,” she added.

The commissioner was giving her thoughts on how social media platforms should be regulated in an age of disinformation (and misinformation) during an evidence session in front of a UK parliamentary committee that’s investigating fake news and the changing role of digital advertising.

Her office (the ICO) is preparing its own report for this spring — likely to be published in May, she said — which will lay out its recommendations for government.

“We want more people to participate in our democratic life and democratic institutions, and social media is an important part of that, but we also do not want social media to be a chill in what needs to be the commons, what needs to be available for public debate,” she said.

“We need information that is transparent, otherwise we will push people into little filter bubbles, where they have no idea about what other people are saying and what the other side of the campaign is saying. We want to make sure that social media is used well.

“It has changed dramatically since 2008. The Obama campaign was the first time that there was a lot of use of data analytics and social media in campaigning. It is a good thing, but it needs to be made more transparent, and we need to control and regulate how political campaigning is happening on social media, and the platforms need to do more.”

Last fall UK prime minister Theresa May publicly accused Russia of weaponizing online information in an attempt to skew democratic processes in the West.

And in January the government announced it would set up a dedicated national security unit to combat state-led disinformation campaigns.

Last month May also ordered a review of the law around social media platforms, as well as announcing a code of conduct aimed at cracking down on extremist and abusive content — another Internet policy she’s prioritized.

So regulating online content has already risen to the top of the government’s agenda in the UK — as it increasingly has across Europe.

It’s not yet clear, though, how the UK government will seek to regulate social media platforms to control political advertising.

Denham’s suggestion to the committee was for a code of conduct.

“I think the use of social media in political campaigns, referendums, elections and so on may have got ahead of where the law is,” she argued. “I think it might be time for a code of conduct so that everybody is on a level playing field and knows what the rules are.

“I think there are some politicians, some MPs, who are concerned about the use of these new tools, particularly when there are analytics and algorithms that are determining how to micro-target someone, when they might not have transparency and the law behind them.”

She added that the ICO’s incoming policy report will conclude that “transparency is important”.

“People do not understand the chain of companies involved. If they are using an app that is running off the Facebook site and there are other third parties involved, they do not know how to control their data,” she argued.

“Right now, I think we all agree that it is much too difficult and much too opaque. That is what we need to tackle. This Committee needs to tackle it, we need to tackle it at the ICO, and the companies have to get behind us, or they are going to lose the trust of users and the digital economy.”

She also called more generally for education on how digital systems work — so that users of services can “take up their rights”.

“They have to take up their rights. They have to push companies. Regulators have to be on their game. I think politicians have to support new changes to the law if that is what we need,” she added.

And she described the incoming General Data Protection Regulation (GDPR) as a “game-changer” — arguing it could underpin a push for increased transparency around the data flows that are feeding and shaping public opinion. Though she conceded that regulating such data flows to achieve the sought-for accountability will require a fully joined-up effort.

“I would like to be an optimist. The point behind the General Data Protection Regulation as a step-up in the law is to try to give back control to individuals so that they have a say in how their data are processed, so that they do not just throw up their hands or put it on the ‘too difficult’ pile. I think that is really important. There is a whole suite of things and a whole village that has to work together to be able to make that happen.”

The committee recently took evidence from Cambridge Analytica — the UK-based company credited with helping Donald Trump win the US presidency by creating psychological profiles of US voters for ad targeting purposes.

Denham was asked for her response to seeing CEO Alexander Nix’s evidence, but said she could not comment — to avoid prejudicing the ICO’s own ongoing investigation into data analytics for political purposes.

She did confirm that a data request by US voter and professor David Carroll — who has been trying to use UK data protection law to access the data held on him for political ad targeting purposes by Cambridge Analytica — forms one strand of the ICO’s enquiry, saying it’s looking at “how an individual becomes the recipient of a certain message” and “what information is used to categorise him or her, whether psychographic technologies are used, how the categories are fixed and what kind of data has fed into that decision”.

She also said, though, that the ICO’s enquiry into political data analytics is ranging more widely.

“People need to know the provenance and the source of the data and information that is used to make decisions about the receipt of messages. We are really looking at — it is a data audit. That is really what we are carrying out,” she added.

Featured Image: Tero Vesalainen/Getty Images

Twitter accused of dodging Brexit botnet questions again

Once again Twitter stands accused of dodging questions from a parliamentary committee that’s investigating Russian bot activity during the UK’s 2016 Brexit referendum.

In a letter sent yesterday to Twitter CEO Jack Dorsey, DCMS committee chair Damian Collins writes: “I’m afraid there are outstanding questions… that Twitter have not yet answered, and some further ones that come from your most recent letter.”

In Twitter’s letter — sent last Friday — the company says it has now conducted an analysis of a dataset underpinning a City University study from last October (which had identified a ~13,500-strong botnet of fake Twitter accounts that had tweeted extensively about the Brexit referendum and vanished shortly after the vote).

And it says that 1% of these accounts were “registered in Russia”.

But Twitter’s letter doesn’t say very much else.

“While many of the accounts identified by City University were in violation of the Twitter Rules regarding spam, at this time, we do not have sufficiently strong evidence to enable us to conclusively link them with Russia or indeed the Internet Research Agency [a previously identified Russian troll farm],” it writes.

Twitter goes on to state that 6,508 of the total accounts had already been suspended prior to the study’s publication (which we knew already, per the study itself) — and says that more than 99% of these suspensions “specifically related to the violation of our spam policies”.

So it’s saying that a major chunk of these accounts were engaged in spamming other Twitter users. And that — as a consequence — tweets from those accounts would not have been very visible because of its anti-spam measures.

“Of the remaining accounts, approximately 44.2% were deactivated permanently,” it continues, without exactly explaining why they were shuttered. “Of these, 1,093 accounts had been labelled as spam or low quality by Twitter prior to deletion, which would have resulted in their Tweets being hidden in Search for all users and not contributing to trending topics in any way.

“As we said in our previous letter, these defensive actions are not visible to researchers using our public APIs; however they are an important part of our proactive, technological approach to addressing these issues.”

Twitter’s letter writer, UK head of public policy Nick Pickles, adds that “a very small number of accounts identified by City University are still active on Twitter and are not currently in breach of our rules”.

He does not say how small.
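
For anyone trying to reconcile the various figures Twitter has quoted, here is a quick back-of-the-envelope sketch in Python. It uses only the numbers reported above; treating the remainder as “unaccounted for” is our reading, not Twitter’s, since the letter doesn’t break the rest down.

    # Figures quoted from Twitter's letter and the City University study
    total = 13_493                # suspected bot accounts identified by the study
    suspended_pre_study = 6_508   # suspended before the study was published

    remaining = total - suspended_pre_study      # 6,985 accounts left
    deactivated = round(remaining * 0.442)       # "approximately 44.2%" -> 3,087
    flagged_pre_deletion = 1_093                 # labelled spam/low quality first
    registered_in_russia = round(total * 0.01)   # "1%" of the total -> ~135

    # The letter doesn't say what happened to the rest -- which is, in effect,
    # what the committee keeps asking
    unaccounted = remaining - deactivated        # ~3,898
    print(remaining, deactivated, unaccounted)

On those quoted figures alone, thousands of accounts are left unexplained, which is one way of seeing why Collins keeps pressing Twitter on how many are “still active”.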

tl;dr a small portion of this Brexit botnet is actually still live on Twitter.

While Twitter’s letter runs to two pages, the second of these mostly points to a December 2017 Brexit bot study by researchers at the Oxford Internet Institute — also relying on data from Twitter’s public streaming API — which Twitter says “found little evidence of links to Russian sources” (this literally right after shitting on research conducted by “researchers using our public APIs”). Collins is clearly not wooed by either the quantity or the quality of the intelligence being so tardily provided.

Cutting to the chase, he asks Twitter to specify how many of the accounts “were being controlled from agencies in Russia, even if they were not registered there”.

He also wants to know: “How many of the accounts share the characteristics of the accounts that have already been identified as being linked to Russia, even if you are yet to establish conclusively that that link exists.”

And he points out that Twitter still hasn’t told the committee whether the 13,493 suspected bot accounts were “legitimate users or bots; who controlled these accounts, what the audience was for their activity during the referendum, and who deleted the tweets from these accounts”.

So many questions, still lacking robust answers.

“I’m afraid that the failure to obtain straight answers to these questions, whatever they might be, is simply increasing concerns about these issues, rather than reassuring people,” Collins adds.

We reached out to Twitter for a response to his letter but the company declined to provide a public statement.

Last week, after Collins had accused both Twitter and Facebook of essentially ignoring his requests for information, Facebook wrote to the committee saying it would take a more thorough look into its historic data around the event — though how comprehensive that follow-up will be remains to be seen. (Facebook has also said the process will take “some weeks”, giving itself no firm deadline.)

Both companies also disclosed some information last month, in response to a parallel Electoral Commission probe that’s looking at digital spending around the Brexit vote — but there they only revealed details of paid-for advertising by Russian entities that had targeted Brexit (saying this amounted to ~$1k and ~$1, respectively).

So they made no attempt to cast their net wider and look for Russian-backed non-paid content being freely created and spread on their platforms.

To date Collins has reserved his most withering criticisms for Twitter over this issue, but he’s warned that both companies could face sanctions if they continue to stonewall his enquiry.

The DCMS committee is traveling to Washington next month for a public evidence session that Facebook and Twitter reps have been asked to attend.

It’s clearly hoping that proximity to Washington — and the recent memory of the companies’ grilling at the hands of US lawmakers over US election-related disinformation — might shame them into a fuller kind of co-operation.

Meanwhile, the UK’s Intelligence and Security Committee, which is able to take closed door evidence from domestic spy agencies, discussed the security threat from state actors in its annual report last year.

And although its report did not explicitly identify Brexit as having been a definitive target for Russian meddling, it did raise concerns around Russia’s invigorated cyber activities and warn that elections and referenda could be targets for disinformation attacks.

“State actors are highly capable of carrying out advanced cyber attacks; however, their use of these methods has historically been restricted by the diplomatic and geopolitical consequences that would follow should the activity be uncovered. Recent Russian cyber activity appears to indicate that this may no longer be the case,” the committee wrote, citing the hacking of the DNC and John Podesta’s emails as indications that Russia is adopting a “more brazen approach to its cyber activities”.

Evidence it took from the UK’s GCHQ and MI5 spy agencies is redacted in the report — including in a section discussing the security of the UK’s political system.

Here the committee writes that cyber attacks by hostile foreign states and terrorist groups could “potentially include planting fake information on legitimate political and current affairs websites, or otherwise interfering with the online presence of political parties and institutions”.

Another redacted section of evidence from GCHQ then details how the agency “is already alert to the risks surrounding the integrity of data”.

The ISC goes on to speculate that such state attacks could have a variety of motives, including:

  • generally undermining the integrity of the UK’s political processes, with a view to weakening the UK Government in the eyes of both the British population and the wider world;
  • subverting a specific election or referendum by undermining or supporting particular campaigns, with a countervailing benefit to the hostile actor’s preferred side;
  • poisoning public discourse about a sensitive political issue in a manner that suits the hostile state’s foreign policy aims; or
  • in the case of political parties’ sensitive data on the electorate, obtaining the political predilections and other characteristics of a large proportion of the UK population, thereby identifying people who might be open to subversion or political extremism in the hostile actor’s interests.

“The combination of the high capability of state actors with an increasingly brazen approach places an ever greater importance on ensuring the security of systems in the UK which control the Critical National Infrastructure. Detecting and countering high-end cyber activity must remain a top priority for the government,” it adds.

In related news, this week the UK government announced plans to set up a dedicated national security unit to combat state-led disinformation campaigns.

Featured Image: NurPhoto/Getty Images

UK to set up security unit to combat state disinformation campaigns

The UK government has announced plans to set up a dedicated national security unit to combat state-led disinformation campaigns — raising questions about how broad its ‘fake news’ bullseye will be.

Last November UK prime minister Theresa May publicly accused Russia of seeking to meddle in elections by weaponizing information and spreading fake news online.

“The UK will do what is necessary to protect ourselves, and work with our allies to do likewise,” she said in her speech at the time.

The new unit is intended to tackle what the PM’s spokesperson described in comments yesterday as the “interconnected complex challenges” of “fake news and competing narratives”.

The decision to set it up was taken after a meeting this week of the National Security Council — a Cabinet committee tasked with overseeing issues related to national security, intelligence and defense.

“We will build on existing capabilities by creating a dedicated national security communications unit. This will be tasked with combating disinformation by state actors and others. It will more systematically deter our adversaries and help us deliver on national security priorities,” the prime minister’s spokesperson told reporters (via Reuters).

According to the PressGazette, the new unit will be named the National Security Communications Unit and will be based in the Cabinet Office.

“The government is committed to tackling false information and the Government Communications Service (GCS) plays a crucial role in this,” a Cabinet Office spokesperson told the publication. “Digital communications is constantly evolving and we are looking at ways to meet the challenging media landscape by harnessing the power of new technology for good.”

Monitoring social media platforms is expected to form a key part of the unit’s work as it seeks to deter adversaries by flagging up their fakes. But operational details are thin on the ground at this point. The UK defense secretary, Gavin Williamson, is expected to give a statement to parliament later this week with more details about the unit.

Writing last week (in PR Week) about the challenges GCS faces this year, Alex Aiken, executive director of the service, named “build[ing] a rapid response social media capability to deal quickly with disinformation and reclaim[ing] a fact-based public debate with a new team to lead this work in the Cabinet Office” as the second item on his eight-strong list.

A key phrase there is “rapid response” — given the highly dynamic and bi-directional nature of some of the disinformation campaigns that have, to date, been revealed spreading via social media. Though a report in the Times suggests insiders are doubtful that Whitehall civil servants will have the capacity to respond rapidly enough to online disinformation.

Another key phrase in Aiken’s list is “fact-based” — because governments and power-wielding politicians denouncing ‘fake news’ is a situation replete with irony and littered with pitfalls. So a crucial factor regarding the unit will be how narrowly (or otherwise) its ‘fake news’ efforts are targeted.

If its work is largely focused on identifying and unmasking state-level disinformation campaigns — such as the Russian-backed bots which sought to interfere in the UK’s 2016 Brexit referendum — it’s hard to dispute that’s necessary and sensible.

There are still lots of follow-on considerations, though, including diplomatic ones — such as whether the government will expend resources to monitor all states for potential disinformation campaigns, even political allies.

And whether it will make public every disinformation effort it identifies, or only selectively disclose activity from certain states.

But the PM’s spokesperson’s use of the phrase ‘fake news’ risks implying the unit will have a rather broader intent, which is concerning — from a freedom of the press and freedom of speech perspective.

Certainly it’s a very broad concept to be deploying in this context, especially when government ministers stand accused of being less than honest in how they present information. (For one extant example, just Google the phrase: “brexit bus”.)

Indeed, even the UK PM herself has been accused domestically on that front.

So there’s a pretty clear risk of ‘fake news’ being interpreted by some as equating to any heavy political spin.

But presumably the government is not intending the new unit to police its own communications for falsities. (Though, if it’s going to ignore its own fakes, well it opens itself up to easy accusations of double standards — aka: ‘domestic political lies, good; foreign political lies bad’… )

Earlier this month the French president, Emmanuel Macron — who in recent months has also expressed public concern about Russian disinformation — announced plans to introduce an anti-fake news election law to place restrictions on social media during election periods.

And while that looks like a tighter angle to approach the problem of malicious and politically divisive disinformation campaigns, it’s also clear that a state like Russia has not stopped spreading fake news just because a particular target country’s election is over.

Indeed, the Kremlin has consistently demonstrated very long term thinking in its propaganda efforts, coupled with considerable staying power around its online activity — aimed at building plausibility for its disinformation cyber agents.

Sometimes these agents are seeded multiple years ahead of actively deploying them as ‘fake news’ conduits for a particular election or political event.

So just focusing on election ‘fake news’ risks being too narrow to effectively combat state-level disinformation, unless combined with other measures. Even as generally going after ‘fake news’ opens the UK government to criticism that it’s trying to shut down political debate and criticism.

Disinformation is clearly a very hard problem for governments to tackle, with no easy answers — even as the risks to democracy are clear enough for even Facebook to admit them.

Yet it’s also a problem that’s not being helped by the general intransigence and lack of transparency from the social media companies that control the infrastructure being used to spread disinformation.

These are also the only entities that have full access to the data that could be used to build patterns and help spot malicious bot-spreading agents of disinformation.

Last week, in the face of withering criticism from a UK committee that’s looking into the issue of fake news, Facebook committed to taking a deeper look into its own data around the Brexit referendum.

At this point it’s not clear whether Twitter — which has been firmly in the committee’s crosshairs — will also agree to conduct a thorough investigation of Brexit bot activity or not.

A spokeswoman for the committee told us it received a letter from Twitter on Friday and will be publishing that, along with its response, later this week. She declined to share any details ahead of that.

The committee is running an evidence session in the US, scheduled for February 8, when it will be putting questions to representatives from Facebook and Twitter, according to the spokeswoman. Its full report on the topic is not likely due for some months still, she added.

At the same time, the UK’s Electoral Commission has been investigating social media to consider whether campaign spending rules might have been broken at the time of the EU referendum vote — and whether to recommend the government drafts any new legislation. That effort is also ongoing.

Featured Image: Thomas Faull/Getty Images

Facebook expands ‘Community Boost’ digital skills training program to Europe

Facebook has announced it’s expanding to Europe a free training program that teaches Internet skills, media literacy and online safety. It says its “ambition” is to train 300,000 people across six EU countries by 2020 — specifically in the UK, Germany, France, Spain, Italy and Poland.

It also says it will be opening “digital learning centers” in three of the countries — Spain, Italy, and Poland — as part of the program (though it’s not yet clear where exactly the three centers will be located).

The company says the training will generally be offered to “underrepresented groups”. It’s not entirely clear what that means, but Facebook points to a Berlin school it set up last year — in partnership with the ReDI School of Digital Integration, teaching classes such as coding and professional development to refugees, senior citizens and young people — as a template for its thinking here.

We’ve asked for its definition of underrepresented groups; details of the application process; and its criteria for granting training places and will update this post with any additional details.

Facebook operates this digital skills training program under the brand name “Community Boost”.

While the training is offered to target recipients for free, Facebook’s business clearly stands to benefit if more people become digitally literate after being introduced to the Internet by Facebook — giving the company the chance to add more users and gain more overall eyeballs for its ad targeting platform, as indeed the name of the program implies.

Media literacy is also of increasing importance to a platform that is now dogged by accusations that its business benefits by fencing fake news.

Back in November Facebook announced a touring Community Boost program would be doing the rounds of 30 US cities in 2018. And, as you might recall, the timing of that announcement came hard on the heels of revelations that Kremlin agents had used Facebook’s ad targeting platform to inject all sorts of divisive disinformation into Americans’ eyeballs during the 2016 presidential election in an attempt to disrupt the democratic process — putting Zuck under pressure to generate some positive domestic PR to offset all the negative headlines.

In Europe, Facebook has also been facing rising political scrutiny and displeasure. Last month, for example, the French president announced an anti-fake news election law will be incoming this year — aiming to tackle the spread of online fake news during election periods.

While in the UK, parliamentarians running a wide-ranging investigation into fake news have expressed increasing frustration with Facebook (and Twitter) for foot-dragging in the face of requests that they more thoroughly probe the extent of Russian involvement in the Brexit referendum vote.

So, in the EU too, Facebook is under pressure to offset a lot of bad PR. And funding some digital skills training is exactly the kind of feel-good initiative that’s positioned to play well politically — even while it also absolutely still aligns with Facebook’s core business goals of increasing online usage and thus overall eyeballs for targeting ads.

Digital skills is also a pretty woolly umbrella at this point. Facebook says — for example — that it could mean teaching coding to someone who already has “very strong skills” or helping someone else open an online bank account. It further specifies that the training offered will depend on the level of existing skills of the people being targeted — so it’s not necessarily going to be teaching much actual coding here.

To deliver the program, Facebook is partnering with digital upskilling firm Freeformers, which will supply the training across the six EU countries.

“We will be using our Future Workforce Model to help individuals acquire the attributes to be employable, successful and productive in a digital world,” says founder and CEO Gi Fernando, in a statement. “These attributes will be aligned to the mindset, skillsets and behaviours industry needs in its future workforce.”

Facebook specifies there will be 50,000 training places going to target recipients in the UK (so it’s presumably going to be 50k places apiece in each of the six target EU Member States).

PRing the announcement at Davos today, Facebook COO Sheryl Sandberg trumpeted that the company is committing to train a total of one million people and businesses in Europe by 2020.

However she was rolling in figures from existing Facebook small business training programs — i.e. programs aimed at encouraging SMEs to adopt its advertising platform — to reach that politically expedient 1M figure. “These skills will help people thrive in today’s workplace and help small businesses grow and create jobs,” Sandberg added in a statement.

The company has also commissioned — and is today PRing — research in the EU markets it’s targeting, which it claims shows small businesses’ use of Facebook “translates into new jobs and opportunities for communities across the EU”. Though it has not released details of the methodology underpinning its findings. And, well, it would say that, wouldn’t it?

It’s worth noting that not all the Community Boost training will take place in person. In the UK Facebook specifies that the program will reach 12,500 people through in-person training and 37,500 online. So if it’s replicating that split across all the countries then total in-person training places could make up just 75,000 out of the 300,000 total goal.
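
As a trivial sanity check on that split, here’s a short Python sketch (assuming, as above, that the UK’s in-person/online ratio is replicated in each of the six countries):

    # UK split, per Facebook: 12,500 in person + 37,500 online = 50,000 places
    per_country_in_person = 12_500
    per_country_online = 37_500
    countries = 6

    in_person_total = per_country_in_person * countries   # 75,000
    online_total = per_country_online * countries         # 225,000
    assert in_person_total + online_total == 300_000      # the stated 2020 goal
    print(in_person_total, online_total)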

We’ve asked how much money Facebook is spending to fund the EU Community Boost program specifically and will update this post if it responds.

“Today’s announcements are part of our ongoing investments in digital training. Since 2011, we’ve invested more than $1 billion to support small businesses around the world. Our Boost Your Business program has trained hundreds of thousands of small businesses globally, and more than 1 million small businesses have used Facebook’s free online learning hub, Blueprint. More than 70 million small businesses use our free Pages tool to create an online presence,” Facebook adds in its blog announcement.

In a further move clearly aimed at drumming up positive local noise around its platform, Facebook says it will be launching a national ad campaign in the UK showcasing small businesses that it says have used Facebook to help them grow.

At the same time it is facing rather less positive sentiments from users in the UK, according to another piece of recent research…

Also today, Facebook announced a €10M investment in “accelerating AI innovation in France” — by increasing the number of PhD fellowships at its Paris AI research lab from 10 to 40.

It also says it’s funding 10 servers, as well as open datasets for French public institutions, and that it will double the team of researchers and engineers there from 30 to 60.

Companies investing in AI research are also of course investing in their own AI research departments, given the ongoing AI skills shortage and fierce industry competition for AI expertise. So that €10M for students in France is naturally positioned to help accelerate AI innovation at Facebook too.

Zooming out for a little more context, European Union lawmakers have been talking tough on tax reform lately, even entertaining ideas such as taxing digital ads. And ministers in certain Member States — including France — are very angry about tech giants’ habitual practice of shifting profits to lower-tax economies (such as Ireland, in Facebook’s case) as a strategy to minimize their overall EU tax liabilities.

Feeling the heat on that, in December Facebook said it would start to book its international advertising revenue in the countries where it is generated this year — i.e. rather than re-routing it through Ireland, as it had been doing for all these years to benefit from a lower corporate tax rate.

In the UK last year, for example, Facebook paid just £5.1M in corporation tax — despite its revenues in the market leaping to £842.4M.

And, well, just think how many digital skills and AI upskilling programs EU governments could have invested in if Facebook had been paying taxes on its per-country revenue instead of seeking to pay the minimum possible.

Featured Image: Sean Gallup/Getty Images

UK eyeing ‘extremism’ tax on social media giants

The UK government has kicked off the new year with another warning shot across the bows of social media giants.

In an interview with the Sunday Times newspaper, security minister Ben Wallace hit out at tech platforms like Facebook and Google, dubbing such companies “ruthless profiteers” and saying they are doing too little to help the government combat online extremism and terrorism despite hateful messages spreading via their platforms.

“We should stop pretending that because they sit on beanbags in T-shirts they are not ruthless profiteers. They will ruthlessly sell our details to loans and soft-porn companies but not give it to our democratically elected government,” he said.

Wallace suggested the government is considering a tax on tech firms to cover the rising costs of policing related to online radicalization.

“If they continue to be less than co-operative, we should look at things like tax as a way of incentivizing them or compensating for their inaction,” he told the newspaper.

Although the minister did not name any specific firms, a reference to encryption suggests Facebook-owned WhatsApp is one of the platforms being called out (the UK’s Home Secretary has also previously directly attacked WhatsApp’s use of end-to-end encryption as an aid to criminals, as well as repeatedly attacking e2e encryption itself).

“Because of encryption and because of radicalization, the cost… is heaped on law enforcement agencies,” Wallace said. “I have to have more human surveillance. It’s costing hundreds of millions of pounds. If they continue to be less than co-operative, we should look at things like tax as a way of incentivizing them or compensating for their inaction.

“Because content is not taken down as quickly as they could do, we’re having to de-radicalize people who have been radicalized. That’s costing millions. They can’t get away with that and we should look at all options, including tax,” he added.

Last year in Europe the German government agreed a new law targeting social media firms over hate speech takedowns. The so-called NetzDG law came into effect in October — with a three-month transition period for compliance (which ended yesterday). It introduces a regime of fines of up to €50M for social media platforms that fail to remove illegal hate speech after a complaint (within 24 hours in straightforward cases; or within seven days where evaluation of content is more difficult).

UK parliamentarians investigating extremism and hate speech on social platforms via a committee enquiry also urged the government to impose fines for takedown failures last May, accusing tech giants of taking a laissez-faire approach to moderating hate speech.

Tackling online extremism has also been a major policy theme for UK prime minister Theresa May’s government, and one which has attracted wider backing from G7 nations — converging around a push to get social media firms to remove content much faster.

Responding now to Wallace’s comments in the Sunday Times, Facebook sent us the following statement, attributed to its EMEA public policy director, Simon Milner:

Mr Wallace is wrong to say that we put profit before safety, especially in the fight against terrorism. We’ve invested millions of pounds in people and technology to identify and remove terrorist content. The Home Secretary and her counterparts across Europe have welcomed our coordinated efforts which are having a significant impact. But this is an ongoing battle and we must continue to fight it together, indeed our CEO recently told our investors that in 2018 we will continue to put the safety of our community before profits.

In the face of rising political pressure to do more to combat online extremism, tech firms including Facebook, Google and Twitter set up a partnership last summer focused on reducing the accessibility of Internet services to terrorists.

This followed an announcement, in December 2016, of a shared industry hash database for collectively identifying terror accounts — with the newer Global Internet Forum to Counter Terrorism intended to create a more formal bureaucracy for improving the database.

But despite some public steps to co-ordinate counter-terrorism action, the UK’s Home Affairs committee expressed continued exasperation with Facebook, Google and Twitter for failing to effectively enforce their own hate speech rules in a more recent evidence session last month.

Though, in the course of the session, Facebook’s Milner claimed it had made progress on combating terrorist content, and said it would be doubling the number of people working on “safety and security” by the end of 2018 — to circa 20,000.

In response to a request for comment on Wallace’s remarks, a YouTube spokesperson emailed us the following statement:

Violent extremism is a complex problem and addressing it is a critical challenge for us all. We are committed to being part of the solution and we are doing more every day to tackle these issues. Over the course of 2017 we have made significant progress through investing in machine learning technology, recruiting more reviewers, building partnerships with experts and collaboration with other companies through the Global Internet Forum.

In a major shift last November YouTube broadened its policy for taking down extremist content — to remove not just videos that directly preach hate or seek to incite violence but also take down other videos of named terrorists (with exceptions for journalistic or educational content).

The move followed an advertiser backlash after marketing messages were found being displayed on YouTube alongside extremist and offensive content.

Answering UK parliamentarians’ questions about how YouTube’s recommendation algorithms can actively push users to consume increasingly extreme content — a sort of algorithmic radicalization — Nicklas Berild Lundblad, EMEA VP for public policy, admitted there can be a problem, and said the platform is working on applying machine learning technology to automatically limit certain videos so they are not algorithmically surfaceable (and thus to limit their ability to spread).

Twitter also moved to broaden its hate speech policies last year — responding to user criticism over the continued presence of hate speech purveyors on its platform despite having community guidelines that apparently forbid such conduct.

A Twitter spokesman declined to comment on Wallace’s remarks.

Speaking to the UK’s Home Affairs committee last month, the company’s EMEA VP for public policy and communications, Sinead McSweeney, conceded that it has not been “good enough” at enforcing its own rules around hate speech, adding: “We are now taking actions against 10 times more accounts than we did in the past.”

But regarding terrorist content specifically, Twitter reported a big decline in the proportion of pro-terrorism accounts being reported on its platform as of September, along with apparent improvements in its anti-terrorism tools — claiming 95 per cent of terrorist account suspensions had been picked up by its systems (vs manual user reports).

It also said 75 per cent of these accounts were suspended before they’d sent their first tweet.

Featured Image: Erik Tham/Getty Images