UK sets out safety-focused plan to regulate Internet firms

The UK government has laid out proposals to regulate online and social media platforms, setting out the substance of its long-awaited White Paper on online harms today — and kicking off a public consultation.

The Online Harms White Paper is a joint proposal from the Department for Digital, Culture, Media and Sport (DCMS) and Home Office.

It follows the government’s announcement of its policy intent last May, and a string of domestic calls for greater regulation of the Internet as politicians have responded to rising concern about the mental health impacts of online content.

The government is now proposing to put a mandatory duty of care on platforms to take reasonable steps to protect their users from a range of harms — including but not limited to illegal material such as terrorist and child sexual exploitation and abuse which will be covered by further stringent requirements under the plan.

The approach is also intended to address a range of content and activity that’s deemed harmful.

Examples provided by the government of the sorts of broader harms it’s targeting include inciting violence and violent content; encouraging suicide; disinformation; cyber bullying; and inappropriate material being accessed by children.

Content promoting suicide has been thrown into the public spotlight in the UK in recent months, following media reports about a schoolgirl whose family found out she had been viewing pro-suicide content on Instagram after she killed herself.

The Facebook-owned platform subsequently agreed to change its policies towards suicide content, saying it would start censoring graphic images of self-harm, after pressure from ministers.

Commenting on the publication of the White Paper today, digital secretary Jeremy Wright said: “The era of self-regulation for online companies is over. Voluntary actions from industry to tackle online harms have not been applied consistently or gone far enough. Tech can be an incredible force for good and we want the sector to be part of the solution in protecting their users. However those that fail to do this will face tough action.

“We want the UK to be the safest place in the world to go online, and the best place to start and grow a digital business and our proposals for new laws will help make sure everyone in our country can enjoy the Internet safely.”

In another supporting statement Home Secretary Sajid Javid added: “The tech giants and social media companies have a moral duty to protect the young people they profit from. Despite our repeated calls to action, harmful and illegal content – including child abuse and terrorism – is still too readily available online.

“That is why we are forcing these firms to clean up their act once and for all. I made it my mission to protect our young people – and we are now delivering on that promise.”

Children’s charity, the NSPCC, was among the sector bodies welcoming the proposal.

“This is a hugely significant commitment by the Government that once enacted, can make the UK a world pioneer in protecting children online,” wrote CEO Peter Wanless in a statement.

“For too long social networks have failed to prioritise children’s safety and left them exposed to grooming, abuse, and harmful content. So it’s high time they were forced to act through this legally binding duty to protect children, backed up with hefty punishments if they fail to do so.”

However, the Internet Watch Foundation, which works to stop the spread of child exploitation imagery online, warned against unintended consequences from badly planned legislation — and urged the government to take a “balanced approach”.

The proposed laws would apply to any company that allows users to share or discover user generated content or interact with each other online — meaning companies both big and small.

Nor is it just social media platforms: file hosting sites, public discussion forums, messaging services and search engines also fall under the planned law’s remit.

The government says a new independent regulator will be introduced to ensure Internet companies meet their responsibilities, with ministers consulting on whether this should be a new or existing body.

Telecoms regulator Ofcom has been rumored as one possible contender, though the UK’s data watchdog, the ICO, has previously suggested it should be involved in any Internet oversight given its responsibility for data protection and privacy. (According to the FT a hybrid entity combining the two is another possibility — although it reports that the government remains genuinely undecided on who the regulator will be.)

The future Internet watchdog will be funded by industry in the medium term, with the government saying it’s exploring options such as an industry levy to put it on a sustainable footing.

On the enforcement front, the watchdog will be armed with a range of tools — with the government consulting on powers for it to issue substantial fines; block access to sites; and potentially to impose liability on individual members of senior management.

So there’s at least the prospect of a high profile social media CEO being threatened with UK jail time in future if they don’t do enough to remove harmful content.

On the financial penalties front, Wright suggested during an interview on Sky News that the government is entertaining GDPR-level fines of as much as 4% of a company’s annual global turnover.

Other elements of the proposed framework include giving the regulator the power to force tech companies to publish annual transparency reports on the amount of harmful content on their platforms and what they are doing to address it; to compel companies to respond to users’ complaints and act to address them quickly; and to comply with codes of practice issued by the regulator, such as requirements to minimise the spread of misleading and harmful disinformation with dedicated fact checkers, particularly during election periods.

A long-running enquiry by a DCMS parliamentary committee into online disinformation last year, which was continuously frustrated in its attempts to get Facebook founder Mark Zuckerberg to testify before it, concluded with a laundry list of recommendations for tightening regulations around digital campaigning.

The committee also recommended clear legal liabilities for tech companies to act against “harmful or illegal content”, and suggested a levy on tech firms to support enhanced regulation.

Responding to the government’s White Paper in a statement today, DCMS committee chair Damian Collins broadly welcomed the government’s proposals — though he also pressed for the future regulator to have the power to conduct its own investigations, rather than relying on self-reporting by tech firms.

“We need a clear definition of how quickly social media companies should be required to take down harmful content, and this should include not only when it is referred to them by users, but also when it is easily within their power to discover this content for themselves,” Collins wrote.

“The regulator should also give guidance on the responsibilities of social media companies to ensure that their algorithms are not consistently directing users to harmful content.”

Another element of the government’s proposal is a “Safety by Design” framework that’s intended to help companies incorporate online safety features in new apps and platforms from the start.

The government also wants the regulator to head up a media literacy strategy that’s intended to equip people with the knowledge to recognise and deal with a range of deceptive and malicious behaviours online, such as catfishing, grooming and extremism.

It writes that the UK is committed to a free, open and secure Internet — and makes a point of noting that the watchdog will have a legal duty to pay “due regard” to innovation, and also to protect users’ rights online by being particularly mindful not to infringe privacy and freedom of expression.

It therefore suggests technology will be an integral part of any solution, saying the proposals are designed to promote a culture of continuous improvement among companies — and highlighting technologies such as Google’s “Family Link” and Apple’s Screen Time app as examples of the sorts of developments it wants the policy framework to encourage.

Such caveats are unlikely to do much to reassure those concerned that the approach will chill online speech, or place an impossible burden on smaller firms with fewer resources to monitor what their users are doing.

“The government’s proposals would create state regulation of the speech of millions of British citizens,” warns digital and civil rights group, the Open Rights Group, in a statement by its executive director Jim Killock. “We have to expect that the duty of care will end up widely drawn with serious implications for legal content that is deemed potentially risky, whether it really is or not.

“The government refused to create a state regulator for the press because it didn’t want to be seen to be controlling free expression. We are skeptical that state regulation is the right approach.”

UK startup policy advocacy group Coadec was also quick to voice concerns — warning that the government’s plans will “entrench the tech giants, not punish them”.

“The vast scope of the proposals means they cover not just social media but virtually the entire internet – from file sharing to newspaper comment sections. Those most impacted will not be the tech giants the Government claims they are targeting, but everyone else. It will benefit the largest platforms with the resources and legal might to comply – and restrict the ability of British startups to compete fairly,” said Coadec executive director Dom Hallas in a statement. 

“There is a reason that Mark Zuckerberg has called for more regulation. It is in Facebook’s business interest.”

UK tech industry trade body techUK also put out a response statement that warns about the need to avoid disproportionate impacts.

“Some of the key pillars of the Government’s approach remain too vague,” said Vinous Ali, head of policy, techUK. “It is vital that the new framework is effective, proportionate and predictable. Clear legal definitions that allow companies in scope to understand the law and therefore act quickly and with confidence will be key to the success of the new system.

“Not all of the legitimate concerns about online harms can be addressed through regulation. The new framework must be complemented by renewed efforts to ensure children, young people and adults alike have the skills and awareness to navigate the digital world safely and securely.”

The government has launched a 12-week consultation on the proposals, after which it says it will set out the action it will take in developing its final proposals for legislation.

Last month a House of Lords committee recommended an overarching super regulator be established to plug any gaps and/or handle overlaps in rules on Internet platforms, arguing that “a new framework for regulatory action” is needed to handle the digital world.

Though the government appears confident at this stage that an Internet regulator will be able to navigate any legislative patchwork and keep tech firms in line on its own.

The House of Lords committee was another that came down in support of a statutory duty of care for online services hosting user-generated content, suggesting it should have a special focus on children and “the vulnerable in society”. And there’s no doubt the concept of regulating Internet platforms has broad consensus among UK politicians — on both sides of the aisle.

But how to do that effectively and proportionately is another matter.

We reached out to Facebook and Google for a response to the White Paper.

Commenting on the Online Harms White Paper in a statement, Rebecca Stimson, Facebook’s head of UK public policy, said: “New rules for the internet should protect society from harm while also supporting innovation, the digital economy and freedom of speech. These are complex issues to get right and we look forward to working with the Government and Parliament to ensure new regulations are effective.”

Stimson also reiterated how Facebook has expanded the number of staff it has working on trust and safety issues to 30,000 in recent years, as well as claiming it’s invested heavily in technology to help prevent abuse — while conceding that “we know there is much more to do”.

Last month the company revealed shortcomings with its safety measures around livestreaming, after it emerged that a massacre in Christchurch, New Zealand, which was livestreamed to Facebook’s platform, had not been flagged for accelerated review by moderators because it was not tagged as suicide-related content.

Facebook said it would be “learning” from the incident and “re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review”.

In its response to the UK government White Paper today, Stimson added: “The internet has transformed how billions of people live, work and connect with each other, but new forms of communication also bring huge challenges. We have responsibilities to keep people safe on our services and we share the government’s commitment to tackling harmful content online. As Mark Zuckerberg said last month, new regulations are needed so that we have a standardised approach across platforms and private companies aren’t making so many important decisions alone.”

YouTube tightens restrictions on channel of UK far right activist — but no ban

YouTube has placed new restrictions on the channel of a UK far right activist which are intended to make hate speech harder to discover on its platform.

Restrictions on Stephen Yaxley-Lennon’s YouTube channel include removing certain of his videos from recommendations. YouTube is also taking away his ability to livestream to his now close to 390,000 channel subscribers.

Yaxley-Lennon, who goes by the name ‘Tommy Robinson’ on social media, was banned from Twitter a year ago.

BuzzFeed first reported the new restrictions. A YouTube spokesperson confirmed the shift in policy, telling us: “After consulting with third party experts, we are applying a tougher treatment to Tommy Robinson’s channel in keeping with our policies on borderline content. The content will be placed behind an interstitial, removed from recommendations, and stripped of key features including livestreaming, comments, suggested videos, and likes.”

Test searches for ‘Tommy Robinson’ on YouTube now return a series of news reports — instead of Yaxley-Lennon’s own channel, as was the case just last month.

YouTube had already demonetized Yaxley-Lennon’s channel back in January for violating its ad policies.

But as we reported last month, Google has been under increasing political pressure in the UK to tighten its policies over the far right activist.

The policy shift applies to videos uploaded by Yaxley-Lennon that aren’t illegal or otherwise in breach of YouTube’s community standards (as the company applies them) but which have nonetheless been flagged by users as potential violations of the platform’s policies on hate speech and violent extremism.

In such instances YouTube says it will review the videos and those not in violation of its policies but which nonetheless contain controversial religious or extremist content will be placed behind an interstitial, removed from recommendations, and stripped of key features including comments, suggested videos, and likes.

Such videos will also not be eligible for monetization.

The company says its goal with the stricter approach to Yaxley-Lennon’s content is to strike a balance between upholding free expression and preserving a point of public and historic record, while also keeping hateful content from being spread or recommended to others.

YouTube said it carefully considered Yaxley-Lennon’s case — consulting with external experts and UK academics — before deciding it needed to apply tougher treatment.

Affected videos will still remain on YouTube — albeit behind an interstitial. They also won’t be recommended, and will be stripped of the usual social features including comments, suggested videos, and likes.

Of course it remains to be seen how tightly YouTube will apply the new more restrictive policy in this case. And whether Yaxley-Lennon himself will adapt his video strategy to work around tighter rules on that channel.

The far right is very well versed in using coded language and dog whistle tactics to communicate with its followers and spread racist messages under the mainstream radar.

Yaxley-Lennon has had a presence on multiple social media channels, adapting the content to the different platforms. Though YouTube is the last mainstream channel still available to him after Facebook kicked him off its platform in February. Albeit, he was quickly able to work around Facebook’s ban simply by using a friend’s Facebook account to livestream himself harassing a journalist at his home late at night.

Police were called out twice in that instance. And in a vlog uploaded to YouTube after the incident Yaxley-Lennon threatened other journalists to “expect a knock at the door”.

Shortly afterwards the deputy leader of the official opposition raised his use of YouTube to livestream harassment in parliament, telling MPs then that: “Every major social media platform other than YouTube has taken down Stephen Yaxley-Lennon’s profile because of his hateful conduct.”

The secretary of state for digital, Jeremy Wright, responded by urging YouTube to “reconsider their judgement” — saying: “We all believe in freedom of speech. But we all believe too that that freedom of speech has limits. And we believe that those who seek to intimidate others, to potentially of course break the law… that is unacceptable. That is beyond the reach of the type of freedom of speech that we believe should be protected.”

YouTube claims it removes videos that violate its hate speech and violent content policies. But in previous instances involving Yaxley-Lennon it has told us that specific videos of his — including the livestreamed harassment that was raised in parliament — do not constitute a breach of its standards.

It’s now essentially admitting that those standards are too weak in instances of weaponized hate.

Yaxley-Lennon, a former member of the neo-Nazi British National Party and one of the founders of the far right, Islamophobic English Defence League, has used social media to amplify his message of hate while also soliciting donations to fund individual far right ‘activism’ — under the ‘Tommy Robinson’ moniker.

The new YouTube restrictions could reduce his ability to leverage the breadth of Google’s social platform to reach a wider and more mainstream audience than he otherwise would.

Albeit, it remains trivially easy for anyone who already knows the ‘Tommy Robinson’ ‘brand’ to work around the YouTube restrictions by using another mainstream Google-owned technology. A simple Google search for “Tommy Robinson YouTube channel” returns direct links to his channel and content at the top of search results.

Yaxley-Lennon’s followers will also continue to be able to find and share his YouTube content by sharing direct links to it — including on mainstream social platforms.

Though the livestream ban is a significant restriction — if it’s universally applied to the channel — which will make it harder for Yaxley-Lennon to communicate instantly at a distance with followers in his emotive vlogging medium of choice.

He has used the livestreaming medium skilfully to amplify and whip up hate while presenting himself to his followers as a family man afraid for his wife and children. (For the record: Yaxley-Lennon’s criminal record includes convictions for violence, public order offences, drug possession, financial and immigration frauds, among other convictions.)

If Google is hoping to please everyone by applying a ‘third route’ of tighter restrictions for a hate speech weaponizer yet no total ban it will likely just end up pleasing no one and taking flak from both sides.

The company does point out it removes channels of proscribed groups and any individuals formally linked to such groups. And in this case the related far right groups have not been proscribed by the UK government. So the UK government could certainly do much more to check the rise of domestic far right hate.

But YouTube could also step up and take a leadership position by setting robust policies against individuals who seek to weaponize hate.

Instead it continues to fiddle around the edges — trying to fudge the issue by claiming it’s about ‘balancing’ speech and community safety.

In truth hate speech suppresses the speech of those it targets with harassment. So if social networks really want to maximize free speech across their communities they have to be prepared to weed out bad actors who would shrink the speech of minorities by weaponizing hate against them.

YouTube under pressure to ban UK Far Right activist after livestreamed intimidation

The continued presence of a UK Far Right activist on YouTube’s platform has been raised by the deputy leader of the official opposition during ministerial questions in the House of Commons today.

Labour’s Tom Watson put questions to the secretary of state for digital, Jeremy Wright, regarding Stephen Yaxley-Lennon’s use of social media for targeted harassment of journalists.

This follows an incident on Monday night when Yaxley-Lennon used social media tools to livestream himself banging on the doors and windows of a journalist’s home in the middle of the night.

“Every major social media platform other than YouTube has taken down Stephen Yaxley-Lennon’s profile because of his hateful conduct,” said Watson, before recounting how the co-founder of the Far Right English Defence League — who goes by the made-up name ‘Tommy Robinson’ on social media — used social media livestreaming tools to harass journalist Mike Stuchbery on Monday night.

Stuchbery has since written about the incident for the Independent newspaper.

As we reported on Monday, Facebook removed the livestream for violating its policies after it was reported but not before Stuchbery had received a flood of abusive messages from other Facebook users who were watching the stream online.

Yaxley-Lennon appears to have been able to circumvent Facebook’s ban on his own account to livestream his intimidation of Stuchbery via Facebook Live by using another Facebook account with a fake name (which the company appears to have since suspended).

Following the incident, Stuchbery reported receiving physical hate mail to his home address, which Yaxley-Lennon gave out during the livestream (an intimidation tactic known as doxxing). He has also said he’s received further abuse online.

“Does the secretary of state think that it is right that YouTube, and the parent company Alphabet, continues to give this man a platform?” asked Watson, after highlighting another vlog Yaxley-Lennon has since uploaded to YouTube in which he warns other journalists “to expect a knock at the door”.

Wright responded by saying that “all Internet companies, all platforms for this kind of speech need to take their responsibilities seriously”.

“I hope that YouTube will consider this very carefully,” he told the House of Commons. “Consider what [Yaxley-Lennon] has said. What I have said, and reconsider their judgement.”

“We all believe in freedom of speech. But we all believe too that that freedom of speech has limits,” Wright added. “And we believe that those who seek to intimidate others, to potentially of course break the law… that is unacceptable. That is beyond the reach of the type of freedom of speech that we believe should be protected.”

We’ve reached out to YouTube for comment.

Stephen Yaxley-Lennon was banned by Facebook last month for repeat violations of its policies on hate speech, while Twitter banned him a full year ago.

But he remains active on YouTube — where his channel has more than 350,000 subscribers.

The company has resisted calls to shutter his account, claiming the content Yaxley-Lennon posts to its platform is different to content he has posted elsewhere and thus that he has not broken any of its rules. (Though YouTube did demonetize videos on his channel in January saying they violated its ad policies.)

In a follow up question, Watson raised the issue of online harassment more widely — asking whether the government would be including measures “to prevent hate figures, extremists and their followers from turning the online world into a cesspit of hate” in its forthcoming White Paper on social media and safety, which it’s due to publish this winter — and thereby tackle a culture of hate and harassment online that he said is undermining democracy.

Wright said he would “consider” Watson’s suggestion, though he stressed the government must protect the ability for people to carry out robust debate online — and “to discuss issues that are sometimes uncomfortable and certainly controversial”.

But he went on to reiterate his earlier point that “no freedom of speech can survive in this country if we do not protect… people’s ability to feel free to say what they think, free of intimidation, free of the threat of violence”.

“Those who engage in intimidation or threats of violence should not find succour either online or anywhere else,” the minister added.

YouTube’s own community guidelines prohibit “harassment and cyberbullying”. So its continued silence on Yaxley-Lennon’s misuse of its tools does look inconsistent. (YouTube previously banned the InfoWars conspiracy theorist Alex Jones for violating its policies, for example, and there’s more than a passing resemblance between the two ‘hate preachers’).

Moreover, as Watson noted in parliament, Yaxley-Lennon’s most recent video contains a direct threat to doorstep and doxx journalists who covered his harassment of Stuchbery. The video also contains verbal abuse targeted at Stuchbery.

In one of the livestreams recorded outside Stuchbery’s home Yaxley-Lennon can also be heard making allegations about Stuchbery’s sexual interests that the journalist has described as defamatory.

YouTube previously declined to make a statement about Yaxley-Lennon’s continued presence on its platform. It has not responded to our repeated requests for follow-up comment about the issue since Monday.

We’ll update this post if it does provide a statement following the government’s call to rethink its position on giving Yaxley-Lennon a platform.

Dating apps face questions over age checks after report exposes child abuse

The UK government has said it could legislate to require age verification checks on users of dating apps, following an investigation into underage use of dating apps published by the Sunday Times yesterday.

The newspaper found more than 30 cases of child rape have been investigated by police related to use of dating apps including Grindr and Tinder since 2015. It reports that one 13-year-old boy with a profile on the Grindr app was raped or abused by at least 21 men. 

The Sunday Times also found 60 further instances of child sex offences related to the use of online dating services — including grooming, kidnapping and violent assault, according to the BBC, which covered the report.

The youngest victim is reported to have been just eight years old. The newspaper obtained the data via freedom of information requests to UK police forces.

Responding to the Sunday Times’ investigation, a Tinder spokesperson told the BBC it uses automated and manual tools, and spends “millions of dollars annually”, to prevent and remove underage users and other inappropriate behaviour, saying it does not want minors on the platform.

Grindr also reacted to the report, providing the Times with a statement saying: “Any account of sexual abuse or other illegal behaviour is troubling to us as well as a clear violation of our terms of service. Our team is constantly working to improve our digital and human screening tools to prevent and remove improper underage use of our app.”

We’ve also reached out to the companies with additional questions.

The UK’s secretary of state for digital, culture, media and sport (DCMS), Jeremy Wright, dubbed the newspaper’s investigation “truly shocking”, describing it as further evidence that “online tech firms must do more to protect children”.

He also suggested the government could expand forthcoming age verification checks for accessing pornography to include dating apps — saying he would write to the dating app companies to ask “what measures they have in place to keep children safe from harm, including verifying their age”.

“If I’m not satisfied with their response, I reserve the right to take further action,” he added.

Age verification checks for viewing online porn are due to come into force in the UK in April, as part of the Digital Economy Act.

Those age checks, which are clearly not without controversy given the huge privacy considerations of creating a database of adult identities linked to porn viewing habits, have also been driven by concern about children’s exposure to graphic content online.

Last year the UK government committed to legislating on social media safety too, although it has yet to set out the detail of its policy plans. But a white paper is due imminently.

A parliamentary committee which reported last week urged the government to put a legal ‘duty of care’ on platforms to protect minors.

It also called for more robust systems for age verification. So it remains at least a possibility that some types of social media content could be age-gated in the country in future.

Last month the BBC reported on the death of a 14-year-old schoolgirl who killed herself in 2017 after being exposed to self-harm imagery on Instagram.

Following the report, Instagram’s boss met with Wright and the UK’s health secretary, Matt Hancock, to discuss concerns about the impact of suicide-related content circulating on the platform.

After the meeting, Instagram announced last week that it would ban graphic images of self-harm.

Earlier the same week the company responded to the public outcry over the story by saying it would no longer allow suicide related content to be promoted via its recommendation algorithms or surfaced via hashtags.

Also last week, the government’s chief medical advisors called for a code of conduct for social media platforms to protect vulnerable users.

The medical experts also called for greater transparency from platform giants to support public interest-based research into the potential mental health impacts of their platforms.

Fake news ‘threat to democracy’ report gets back-burner response from UK gov’t

The UK government has rejected a parliamentary committee’s call for a levy on social media firms to fund digital literacy lessons to combat the impact of disinformation online.

The recommendation of a levy on social media platforms was made by the Digital, Culture, Media and Sport committee three months ago, in a preliminary report following a multi-month investigation into the impact of so-called ‘fake news’ on democratic processes.

Though the committee has suggested the terms ‘misinformation’ and ‘disinformation’ be used instead, to better pin down exact types of problematic inauthentic content — and on that at least the government agrees. But just not on very much else. At least not yet.

Among around 50 policy suggestions in the interim report — which the committee put out quickly precisely to call for “urgent action” to ‘defend democracy’ — it urged the government to put forward proposals for an education levy on social media.

But in its response, released by the committee today, the government writes that it is “continuing to build the evidence base on a social media levy to inform our approach in this area”.

“We are aware that companies and charities are undertaking a wide range of work to tackle online harms and would want to ensure we do not negatively impact existing work,” it adds, suggesting it’s most keen not to be accused of making a tricky problem worse.

Earlier this year the government did announce plans to set up a dedicated national security unit to combat state-led disinformation campaigns, with the unit expected to monitor social media platforms to support faster debunking of online fakes — by being able to react more quickly to co-ordinated interference efforts by foreign states.

But going a step further and requiring social media platforms themselves to pay a levy to fund domestic education programs — to arm citizens with critical thinking capabilities so people can more intelligently parse content being algorithmically pushed at them — is not, apparently, forming part of government’s current thinking.

Though it is not taking the idea of some form of future social media tax off the table entirely, as it continues seeking ways to make big tech pay a fairer share of earnings into the public purse, also noting in its response: “We will be considering any levy in the context of existing work being led by HM Treasury in relation to corporate tax and the digital economy.”

As a whole, the government’s response to the DCMS committee’s laundry list of policy recommendations around the democratic risks of online disinformation can be summed up in a word as ‘cautious’ — with only three of the report’s forty-two recommendations being accepted outright, as the committee tells it, and four fully rejected.

Most of the rest are being filed under ‘come back later — we’re still looking into it’.

So if you take the view that ‘fake news’ online has already had a tangible and worrying impact on democratic debate the government’s response will come across as underwhelming and lacking in critical urgency. (Though it’s hardly alone on that front.)

The committee has reacted with disappointment — with chair Damian Collins dubbing the government response “disappointing and a missed opportunity”, and also accusing ministers of hiding behind ‘ongoing investigations’ to avoid commenting on the committee’s call that the UK’s National Crime Agency urgently carry out its own investigation into “allegations involving a number of companies”.

Earlier this month Collins also called for the Met Police to explain why they had not opened an investigation into Brexit-related campaign spending breaches.

It has also this month emerged that the force will not examine claims of Russian meddling in the referendum.

Meanwhile the political circus and business uncertainty triggered by the Brexit vote goes on.

Holding pattern

The bulk of the government’s response to the DCMS interim report entails flagging a number of existing and/or ongoing consultations and reviews — such as the ‘Protecting the Debate: Intimidation, Influence and Information’ consultation, which it launched this summer.

But by saying it’s continuing to gather evidence on a number of fronts the government is also saying it does not feel it’s necessary to rush through any regulatory responses to technology-accelerated, socially divisive/politically sensitive viral nonsense — claiming also that it hasn’t seen any evidence that malicious misinformation has been able to skew genuine democratic debate on the domestic front.

It’ll be music to Facebook’s ears given the awkward scrutiny the company has faced from lawmakers at home and, indeed, elsewhere in Europe — in the wake of a major data misuse scandal with a deeply political angle.

The government also points multiple times to a forthcoming oversight body which is in the process of being established — aka the Centre for Data Ethics and Innovation — saying it expects this to grapple with a number of the issues of concern raised by the committee, such as ad transparency and targeting; and to work towards agreeing best practices in areas such as “targeting, fairness, transparency and liability around the use of algorithms and data-driven technologies”.

Identifying “potential new regulations” is another stated role for the future body. Though given it’s not yet actively grappling with any of these issues the UK’s democratically concerned citizens are simply being told to wait.

“The government recognises that as technological advancements are made, and the use of data and AI becomes more complex, our existing governance frameworks may need to be strengthened and updated. That is why we are setting up the Centre,” the government writes, still apparently questioning whether legislative updates are needed — this in a response to the committee’s call, informed by its close questioning of tech firms and data experts, for an oversight body to be able to audit “non-financial” aspects of technology companies (including security mechanisms and algorithms) to “ensure they are operating responsibly”.

“As set out in the recent consultation on the Centre, we expect it to look closely at issues around the use of algorithms, such as fairness, transparency, and targeting,” the government continues, noting that details of the body’s initial work program will be published in the fall — when it says it will also put out its response to the aforementioned consultation.

It does not specify when the ethics body will be in any kind of position to hit this shifting ground running. So again there’s zero sense the government intends to act at a pace commensurate with the fast-changing technologies in question.

Then, where the committee’s recommendations touch on the work of existing UK oversight bodies, such as the Competition and Markets Authority, the ICO data watchdog, the Electoral Commission and the National Crime Agency, the government dodges specific concerns by suggesting it’s not appropriate for it to comment “on independent bodies or ongoing investigations”.

Also notable: It continues to reject entirely the idea that Russian-backed disinformation campaigns have had any impact on domestic democratic processes at all — despite public remarks by prime minister Theresa May last year generally attacking Putin for weaponizing disinformation for election interference purposes.

Instead it writes:

We want to reiterate, however, that the Government has not seen evidence of successful use of disinformation by foreign actors, including Russia, to influence UK democratic processes. But we are not being complacent and the Government is actively engaging with partners to develop robust policies to tackle this issue.

Its response on this point also makes no mention of the extensive use of social media platforms to run political ads targeting the 2016 Brexit referendum.

Nor does it make any note of the historic lack of transparency of such ad platforms. Which means that it’s simply not possible to determine where all the ad money came from to fund digital campaigning on domestic issues — with Facebook only just launching a public repository of who is paying for political ads and badging them as such in the UK, for example.

The elephant in the room is of course that ‘lack of evidence’ is not necessarily evidence of a lack of success, especially when it’s so hard to extract data from opaque adtech platforms in the first place.

Moreover, just this week fresh concerns have been raised about how platforms like Facebook are still enabling dark ads to target political messages at citizens — without it being transparently clear who is actually behind and paying for such campaigns…

In turn triggering calls from opposition MPs for updates to UK election law…

Yet the government, busily embroiled as it still is with trying to deliver some kind of Brexit outcome, is seemingly unconcerned by all this unregulated, background ongoing political advertising.

It also directly brushes off the committee’s call for it to state how many investigations are currently being carried out into Russian interference in UK politics, saying only that it has taken steps to ensure there is a “coordinated structure across all relevant UK authorities to defend against hostile foreign interference in British politics, whether from Russia or any other State”, before reiterating: “There has, however, been no evidence to date of any successful foreign interference.”

This summer the Electoral Commission found that the official Vote Leave campaign in the UK’s in/out EU referendum had broken campaign spending rules — with social media platforms being repurposed as the unregulated playing field where election law could be diddled at such scale. That much is clear.

The DCMS committee had backed the Commission’s call for digital imprint requirements for electronic campaigns to level the playing field between digital and print ads.

However the government has failed to back even that pretty uncontroversial call, merely pointing again to a public consultation (which ends today) on proposed changes to electoral law. So it’s yet more wait and see.

The committee is also disappointed about the lack of government response to its call for the Commission to establish a code for advertising through social media during election periods; and its recommendation that “Facebook and other platforms take responsibility for the way their platforms are used” — noting also the government made “no response to Facebook’s failure to respond adequately to the Committee’s inquiry and Mark Zuckerberg’s reluctance to appear as a witness“. (A reluctance that really enraged the committee.)

In a statement on the government’s response, committee chair Damian Collins writes: “The government’s response to our interim report on disinformation and ‘fake news’ is disappointing and a missed opportunity. It uses other ongoing investigations to further delay desperately needed announcements on the ongoing issues of harmful and misleading content being spread through social media.

“We need to see a more coordinated approach across government to combat campaigns of disinformation being organised by Russian agencies seeking to disrupt and undermine our democracy. The government’s response gives us no real indication of what action is being taken on this important issue.”

Collins finds one slender crumb of comfort, though, that the government might have some appetite to rule big tech.

After the committee had called for government to “demonstrate how seriously it takes Facebook’s apparent collusion in spreading disinformation in Burma, at the earliest opportunity”, the government writes that it: “has made it clear to Facebook, and other social media companies, that they must do more to remove illegal and harmful content”; and noting also that its forthcoming Online Harms White Paper will include “a range of policies to tackle harmful content”.

“We welcome though the strong words from the Government in its demand for action by Facebook to tackle the hate speech that has contributed to the ethnic cleansing of the Rohingya in Burma,” notes Collins, adding: “We will be looking for the government to make progress on these and other areas in response to our final report which will be published in December.

“We will also be raising these issues with the Secretary of State for DCMS, Jeremy Wright, when he gives evidence to the Committee on Wednesday this week.”

(Wright being the new minister in charge of the UK’s digital brief, after Matt Hancock moved over to health.)

We’ve reached out to Facebook for comment on the government’s call for a more robust approach to illegal hate speech.

Last week the company announced it had hired former UK deputy prime minister, Nick Clegg, to be its new head of global policy and comms — apparently signalling a willingness to pay a bit more attention to European regulators.