All posts in “jeremy wright”

UK to toughen telecoms security controls to shrink 5G risks

Amid ongoing concerns about security risks posed by the involvement of Chinese tech giant Huawei in 5G supply, the UK government has published a review of the telecoms supply chain which concludes that policy and regulation enforcing network security need to be significantly strengthened to address those concerns.

However it continues to hold off on setting an official position on whether to allow or ban Huawei from supplying the country’s next-gen networks — as the US has been pressurizing its allies to do.

Giving a statement in parliament this afternoon, the UK’s digital minister, Jeremy Wright, said the government is releasing the conclusions of the report ahead of a decision on Huawei so that domestic carriers can prepare for the tougher standards it plans to bring in to apply to all their vendors.

“The Review has concluded that the current level of protections put in place by industry are unlikely to be adequate to address the identified security risks and deliver the desired security outcomes,” he said. “So, to improve cyber security risk management, policy and enforcement, the Review recommends the establishment of a new security framework for the UK telecoms sector. This will be a much stronger, security based regime than at present.

“The foundation for the framework will be a new set of Telecoms Security Requirements for telecoms operators, overseen by Ofcom and government. These new requirements will be underpinned by a robust legislative framework.”

Wright said the government plans to legislate “at the earliest opportunity” — to provide the regulator with stronger powers to enforce the incoming Telecoms Security Requirements, and to establish “stronger national security backstop powers for government”.

The review suggests the government is considering introducing GDPR-level penalties for carriers that fail to meet the strict security standards it will also be bringing in.

“Until the new legislation is put in place, government and Ofcom will work with all telecoms operators to secure adherence to the new requirements on a voluntary basis,” Wright told parliament today. “Operators will be required to subject vendors to rigorous oversight through procurement and contract management. This will involve operators requiring all their vendors to adhere to the new Telecoms Security Requirements.

“They will also be required to work closely with vendors, supported by government, to ensure effective assurance testing for equipment, systems and software, and to support ongoing verification arrangements.”

The review also calls for competition and diversity within the supply chain — which Wright said will be needed “if we are to drive innovation and reduce the risk of dependency on individual suppliers”.

The government will therefore pursue “a targeted diversification strategy, supporting the growth of new players in the parts of the network that pose security and resilience risks”, he added.

“We will promote policies that support new entrants and the growth of smaller firms,” he also said, sounding a call for security startups to turn their attention to 5G.

Government would “seek to attract trusted and established firms to the UK market”, he added — describing a “vibrant and diverse telecoms market” as good both for consumers and for national security.

“The Review I commissioned was not designed to deal only with one specific company and its conclusions have much wider application. And the need for them is urgent. The first 5G consumer services are launching this year,” he said. “The equally vital diversification of the supply chain will take time. We should get on with it.”

Last week two UK parliamentary committees espoused a view that there’s no technical reason to ban Huawei from all 5G supply — while recognizing there may be other considerations, such as geopolitics and human rights, which impact the decision.

The Intelligence and Security committee also warned that what it dubbed the “unnecessarily protracted” delay in the government taking a decision about 5G suppliers is damaging UK relations abroad.

Despite being urged to move quickly on the specific issue of Huawei, it’s notable that the government continues to hold off. Albeit, a new prime minister will be appointed later this week, after votes of Conservative Party members are counted — which may be contributing to the ongoing delay.

“Since the US government’s announcement [on May 16, adding Huawei and 68 affiliates to its Entity List on national security grounds] we have sought clarity on the extent and implications but the position is not yet entirely clear. Until it is, we have concluded it would be wrong to make specific decisions in relation to Huawei,” Wright said, adding: “We will do so as soon as possible.”

In a press release accompanying the telecoms supply chain review the government said decisions would be taken about high risk vendors “in due course”.

Earlier this year a leak from a meeting of the UK’s National Security Council suggested the government was preparing to give an amber light to Huawei to continue supplying 5G — though limiting its participation to non-core portions of networks.

The Science & Technology committee also recommended the government mandate the exclusion of Huawei from the core of 5G networks.

Wright’s statement appears to hint that that position remains the preferred one — barring a radical change of policy under a new PM — with, in addition to talk of encouraging diversity in the supply chain, the minister also flagging the review’s conclusion that there should be “additional controls on the presence in the supply chain of certain types of vendor which pose significantly greater security and resilience risks to UK telecoms”.

“Additional controls” doesn’t sound like a euphemism for an out-and-out ban.

In a statement responding to the review, Huawei expressed confidence that its days of supplying UK 5G are not drawing to a close — writing:

The UK Government’s Supply Chain Review gives us confidence that we can continue to work with network operators to rollout 5G across the UK. The findings are an important step forward for 5G and full fibre broadband networks in the UK and we welcome the Government’s commitment to “a diverse telecoms supply chain” and “new legislation to enforce stronger security requirements in the telecoms sector”. After 18 years of operating in the UK, we remain committed to supporting BT, EE, Vodafone and other partners build secure, reliable networks.

The evidence shows excluding Huawei would cost the UK economy £7 billion and result in more expensive 5G networks, raising prices for anyone with a mobile device. On Friday, Parliament’s Intelligence & Security Committee said limiting the market to just two telecoms suppliers would reduce competition, resulting in less resilience and lower security standards. They also confirmed that Huawei’s inclusion in British networks would not affect the channels used for intelligence sharing.

A spokesman for the company told us it already supplies non-core elements of UK carriers EE and Vodafone’s networks, adding that it’s viewing Wright’s statement as an endorsement of that status quo.

While the official position remains to be confirmed, all the signals suggest the UK’s 5G security strategy will be tied to tightened regulation and oversight, rather than follow a US path of seeking to shut Chinese tech giants out.

Commenting on the government’s telecoms supply chain review in a statement, Ciaran Martin, CEO of the UK’s National Cyber Security Centre, said: “As the UK’s lead technical authority, we have worked closely with DCMS [the Department for Digital, Culture, Media and Sport] on this review, providing comprehensive analysis and cyber security advice. These new measures represent a tougher security regime for our telecoms infrastructure, and will lead to higher standards, much greater resilience and incentives for the sector to take cyber security seriously.

“This is a significant overhaul of how we do telecoms security, helping to keep the UK the safest place to live and work online by ensuring that cyber security is embedded into future networks from inception.”

Still, tougher security standards for telecoms, combined with updated regulations that bake in major fines for failure, suggest Huawei will have its work cut out not to be excluded by the market — as carriers will be careful about their vendors as they work to shrink their risk.

Earlier this year a report by the oversight body that evaluates Huawei’s approach to security was withering — finding “serious and systematic defects” in the company’s software engineering and cyber security competence.

No technical reason to exclude Huawei as 5G supplier, says UK committee

A UK parliamentary committee has concluded there are no technical grounds for excluding Chinese network kit vendor Huawei from the country’s 5G networks.

In a letter from the chair of the Science & Technology Committee to the UK’s digital minister Jeremy Wright, the committee says: “We have found no evidence from our work to suggest that the complete exclusion of Huawei from the UK’s telecommunications networks would, from a technical point of view, constitute a proportionate response to the potential security threat posed by foreign suppliers.”

Though the committee does go on to recommend the government mandate the exclusion of Huawei from the core of 5G networks, noting that UK mobile network operators have “mostly” done so already — but on a voluntary basis.

If it places a formal requirement on operators not to use Huawei for core supply the committee urges the government to provide “clear criteria” for the exclusion so that it could be applied to other suppliers in future.

Reached for a response to the recommendations, a government spokesperson told us: “The security and resilience of the UK’s telecoms networks is of paramount importance. We have robust procedures in place to manage risks to national security and are committed to the highest possible security standards.”

The spokesperson for the Department for Digital, Culture, Media and Sport added: “The Telecoms Supply Chain Review will be announced in due course. We have been clear throughout the process that all network operators will need to comply with the Government’s decision.”

In recent years the US administration has been putting pressure on allies around the world to entirely exclude Huawei from 5G networks — claiming the Chinese company poses a national security risk.

Australia announced it was banning Huawei and another Chinese vendor ZTE from providing kit for its 5G networks last year. Though in Europe there has not been a rush to follow the US lead and slam the door on Chinese tech giants.

In April leaked information from a UK Cabinet meeting suggested the government had settled on a policy of granting Huawei access as a supplier for some non-core parts of domestic 5G networks, while requiring they be excluded from supplying components for use in network cores.

On this somewhat fuzzy issue of delineating core vs non-core elements of 5G networks, the committee writes that it “heard unanimously and clearly” from witnesses that there will still be a distinction between the two in the next-gen networks.

It also cites testimony by the technical director of the UK’s National Cyber Security Centre (NCSC), Dr Ian Levy, who told it “geography matters in 5G”, and pointed out Australia and the UK have very different “laydowns” — meaning “we may have exactly the same technical understanding, but come to very different conclusions”.

In a response statement to the committee’s letter, Huawei SVP Victor Zhang welcomed the committee’s “key conclusion” before going on to take a thinly veiled swipe at the US — writing: “We are reassured that the UK, unlike others, is taking an evidence based approach to network security. Huawei complies with the laws and regulations in all the markets where we operate.”

The committee’s assessment is not all comfortable reading for Huawei, though, with the letter also flagging the damning conclusions of the most recent Huawei Oversight Board report which found “serious and systematic defects” in its software engineering and cyber security competence — and urging the government to monitor Huawei’s response to the raised security concerns, and to “be prepared to act to restrict the use of Huawei equipment if progress is unsatisfactory”.

Huawei has previously pledged to spend $2BN addressing security shortcomings related to its UK business — a figure it was forced to qualify as an “initial budget” after that same Oversight Board report.

“It is clear that Huawei must improve the standard of its cybersecurity,” the committee warns.

It also suggests the government consults on whether telecoms regulator Ofcom needs stronger powers to be able to force network suppliers to clean up their security act, writing that: “While it is reassuring to hear that network operators share this point of view and are ready to use commercial pressure to encourage this, there is currently limited regulatory power to enforce this.”

Another committee recommendation is for the NCSC to be consulted on whether similar security evaluation mechanisms should be established for other 5G vendors — such as Ericsson and Nokia, two European-based kit vendors which, unlike Huawei, are expected to be supplying core 5G.

“It is worth noting that an assurance system comparable to the Huawei Cyber Security Evaluation Centre does not exist for other vendors. The shortcomings in Huawei’s cyber security reported by the Centre cannot therefore be directly compared to the cyber security of other vendors,” it notes.

On the issue of 5G security generally the committee dubs this “critical”, adding that “all steps must be taken to ensure that the risks are as low as reasonably possible”.

Where “essential services” that make use of 5G networks are concerned, the committee says witnesses were clear such services must be able to continue to operate safely even if the network connection is disrupted. Government must ensure measures are put in place to safeguard operation in the event of cyber attacks, floods, power cuts and other comparable events, it adds. 

While the committee concludes there is no technical reason to limit Huawei’s access to UK 5G, the letter does make a point of highlighting other considerations, most notably human rights abuses, emphasizing its conclusion does not factor them in at all — and pointing out: “There may well be geopolitical or ethical grounds… to enact a ban on Huawei’s equipment”.

It adds that Huawei’s global cyber security and privacy officer, John Suffolk, confirmed that a third party had supplied Huawei services to Xinjiang’s Public Security Bureau, despite Huawei forbidding its own employees from misusing IT and comms tech to carry out surveillance of users.

The committee suggests Huawei technology may therefore be being used to “permit the appalling treatment of Muslims in Western China”.

UK law review eyes abusive trends like deepfaked porn and cyber flashing

The UK government has announced the next phase of a review of the law around the making and sharing of non-consensual intimate images, with ministers saying they want to ensure it keeps pace with evolving digital tech trends.

The review is being initiated in response to concerns that abusive and offensive communications are on the rise, as a result of it becoming easier to create and distribute sexual images of people online without their permission.

Among the issues the Law Commission will consider are so-called ‘revenge porn’, where intimate images of a person are shared without their consent; deepfaked porn, which refers to superimposing a real photograph of a person’s face onto a pornographic image or video without their consent; and cyber flashing, the unpleasant practice of sending unsolicited sexual images to a person’s phone by exploiting technologies such as Bluetooth that allow for proximity-based file sharing.

On the latter practice, the screengrab below is of one of two unsolicited messages I received as pop-ups on my phone in the space of a few seconds while waiting at a UK airport gate — and before I’d had a chance to locate the iOS master setting that actually nixes Bluetooth.

On iOS, even without accepting the AirDrop the cyberflasher is still able to send an unsolicited placeholder image with their request.

Safe to say, this example is at the tamer end of what tends to be involved. More often it’s actual dick pics fired at people’s phones, not a parrot-friendly silicone substitute…

[Screengrab: an example of cyber flashing]

A patchwork of UK laws already covers at least some of the offensive and abusive communications in question, such as the offence of voyeurism under the Sexual Offences Act 2003, which criminalises certain non-consensual photography taken for sexual gratification — and carries a two-year maximum prison sentence (with the possibility that a perpetrator may be required to be listed on the sexual offender register); while revenge porn was made a criminal offence under section 33 of the Criminal Justice and Courts Act 2015.

But the government says that while it feels the law in this area is “robust”, it is keen not to be seen as complacent — hence continuing to keep it under review.

It will also hold a public consultation to help assess whether changes in the law are required.

The Law Commission published Phase 1 of their review of Abusive and Offensive Online Communications on November 1 last year — a scoping report setting out the current criminal law which applies.

The second phase, announced today, will consider the non-consensual taking and sharing of intimate images specifically — and look at possible recommendations for reform. Though it will not report for two years, so any changes to the law are likely to take several years to make it onto the statute books.

Among specific issues the Law Commission will consider is whether anonymity should automatically be granted to victims of revenge porn.

Commenting in a statement, justice minister Paul Maynard said: “No one should have to suffer the immense distress of having intimate images taken or shared without consent. We are acting to make sure our laws keep pace with emerging technology and trends in these disturbing and humiliating crimes.”

Maynard added that the review builds on recent changes to toughen UK laws around revenge porn and to outlaw ‘upskirting’ in English law; aka the degrading practice of taking intimate photographs of others without consent.

“Too many young people are falling victim to co-ordinated abuse online or the trauma of having their private sexual images shared. That’s not the online world I want our children to grow up in,” added the secretary of state for digital issues, Jeremy Wright, in another supporting statement.

“We’ve already set out world-leading plans to put a new duty of care on online platforms towards their users, overseen by an independent regulator with teeth. This Review will ensure that the current law is fit for purpose as we deliver our commitment to make the UK the safest place to be online.”

The Law Commission review will begin on July 1, 2019 and report back to the government in summer 2021.

Terms of Reference will be published on the Law Commission’s website in due course.

UK sets out safety-focused plan to regulate Internet firms

The UK government has laid out proposals to regulate online and social media platforms, setting out the substance of its long-awaited White Paper on online harms today — and kicking off a public consultation.

The Online Harms White Paper is a joint proposal from the Department for Digital, Culture, Media and Sport (DCMS) and Home Office.

It follows the government announcement of a policy intent last May, and a string of domestic calls for greater regulation of the Internet as politicians have responded to rising concern about the mental health impacts of online content.

The government is now proposing to put a mandatory duty of care on platforms to take reasonable steps to protect their users from a range of harms — including but not limited to illegal material such as terrorist content and child sexual exploitation and abuse material, which will be covered by further stringent requirements under the plan.

The approach is also intended to address a range of content and activity that’s deemed harmful.

Examples provided by the government of the sorts of broader harms it’s targeting include inciting violence and violent content; encouraging suicide; disinformation; cyber bullying; and inappropriate material being accessed by children.

Content promoting suicide has been thrown into the public spotlight in the UK in recent months, following media reports about a schoolgirl whose family found out she had been viewing pro-suicide content on Instagram after she killed herself.

The Facebook-owned platform subsequently agreed to change its policies towards suicide content, saying it would start censoring graphic images of self-harm, after pressure from ministers.

Commenting on the publication of the White Paper today, digital secretary Jeremy Wright said: “The era of self-regulation for online companies is over. Voluntary actions from industry to tackle online harms have not been applied consistently or gone far enough. Tech can be an incredible force for good and we want the sector to be part of the solution in protecting their users. However those that fail to do this will face tough action.

“We want the UK to be the safest place in the world to go online, and the best place to start and grow a digital business and our proposals for new laws will help make sure everyone in our country can enjoy the Internet safely.”

In another supporting statement Home Secretary Sajid Javid added: “The tech giants and social media companies have a moral duty to protect the young people they profit from. Despite our repeated calls to action, harmful and illegal content – including child abuse and terrorism – is still too readily available online.

“That is why we are forcing these firms to clean up their act once and for all. I made it my mission to protect our young people – and we are now delivering on that promise.”

Children’s charity, the NSPCC, was among the sector bodies welcoming the proposal.

“This is a hugely significant commitment by the Government that once enacted, can make the UK a world pioneer in protecting children online,” wrote CEO Peter Wanless in a statement.

“For too long social networks have failed to prioritise children’s safety and left them exposed to grooming, abuse, and harmful content. So it’s high time they were forced to act through this legally binding duty to protect children, backed up with hefty punishments if they fail to do so.”

Although the Internet Watch Foundation, which works to stop the spread of child exploitation imagery online, warned against unintended consequences from badly planned legislation — and urged the government to take a “balanced approach”.

The proposed laws would apply to any company that allows users to share or discover user generated content or interact with each other online — meaning companies both big and small.

Nor is it just social media platforms either, with file hosting sites, public discussion forums, messaging services, and search engines among those falling under the planned law’s remit.

The government says a new independent regulator will be introduced to ensure Internet companies meet their responsibilities, with ministers consulting on whether this should be a new or existing body.

Telecoms regulator Ofcom has been rumored as one possible contender, though the UK’s data watchdog, the ICO, has previously suggested it should be involved in any Internet oversight given its responsibility for data protection and privacy. (According to the FT a hybrid entity combining the two is another possibility — although it reports that the government remains genuinely undecided on who the regulator will be.)

The future Internet watchdog will be funded by industry in the medium term, with the government saying it’s exploring options such as an industry levy to put it on a sustainable footing.

On the enforcement front, the watchdog will be armed with a range of tools — with the government consulting on powers for it to issue substantial fines; block access to sites; and potentially to impose liability on individual members of senior management.

So there’s at least the prospect of a high profile social media CEO being threatened with UK jail time in future if they don’t do enough to remove harmful content.

On the financial penalties front, Wright suggested, speaking during an interview on Sky News, that the government is entertaining GDPR-level fines of as much as 4% of a company’s annual global turnover.
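For a sense of scale, the cap Wright floated works the same way as the GDPR’s turnover-based maximum: a fixed percentage of a company’s annual global turnover. A minimal sketch, using hypothetical turnover figures rather than any numbers from the review:

```python
# Illustrative only: how a GDPR-style penalty cap of 4% of annual
# global turnover scales. Figures below are hypothetical examples,
# not from the government's review.

def max_penalty(annual_global_turnover: float, cap_rate: float = 0.04) -> float:
    """Return the maximum fine under a turnover-percentage cap."""
    return annual_global_turnover * cap_rate

# e.g. a carrier with £20bn in annual global turnover could face
# fines of up to £800m under a 4% cap
print(max_penalty(20_000_000_000))
```

In other words, for the UK’s largest carriers the theoretical maximum would run to hundreds of millions of pounds — a far bigger stick than Ofcom’s current enforcement powers.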

Other elements of the proposed framework include giving the regulator the power to force tech companies to publish annual transparency reports on the amount of harmful content on their platforms and what they are doing to address it; to compel companies to respond to users’ complaints and act to address them quickly; and to comply with codes of practice issued by the regulator, such as requirements to minimise the spread of misleading and harmful disinformation with dedicated fact checkers, particularly during election periods.

A long-running enquiry by a DCMS parliamentary committee into online disinformation last year, which was continuously frustrated in its attempts to get Facebook founder Mark Zuckerberg to testify before it, concluded with a laundry list of recommendations for tightening regulations around digital campaigning.

The committee also recommended clear legal liabilities for tech companies to act against “harmful or illegal content”, and suggested a levy on tech firms to support enhanced regulation.

Responding to the government’s White Paper in a statement today, DCMS committee chair Damian Collins broadly welcomed the government’s proposals — though he also pressed for the future regulator to have the power to conduct its own investigations, rather than relying on self reporting by tech firms.

“We need a clear definition of how quickly social media companies should be required to take down harmful content, and this should include not only when it is referred to them by users, but also when it is easily within their power to discover this content for themselves,” Collins wrote.

“The regulator should also give guidance on the responsibilities of social media companies to ensure that their algorithms are not consistently directing users to harmful content.”

Another element of the government’s proposal is a “Safety by Design” framework that’s intended to help companies incorporate online safety features in new apps and platforms from the start.

The government also wants the regulator to head up a media literacy strategy that’s intended to equip people with the knowledge to recognise and deal with a range of deceptive and malicious behaviours online, such as catfishing, grooming and extremism.

It writes that the UK is committed to a free, open and secure Internet — and makes a point of noting that the watchdog will have a legal duty to pay “due regard” to innovation, and also to protect users’ rights online by being particularly mindful not to infringe privacy and freedom of expression.

It therefore suggests technology will be an integral part of any solution, saying the proposals are designed to promote a culture of continuous improvement among companies — and highlighting technologies such as Google’s “Family Link” and Apple’s Screen Time app as examples of the sorts of developments it wants the policy framework to encourage.

Although such caveats are unlikely to do much to reassure those concerned the approach will chill online speech, and/or place an impossible burden on smaller firms with less resource to monitor what their users are doing.

“The government’s proposals would create state regulation of the speech of millions of British citizens,” warns digital and civil rights group the Open Rights Group, in a statement by its executive director Jim Killock. “We have to expect that the duty of care will end up widely drawn with serious implications for legal content that is deemed potentially risky, whether it really is or not.

“The government refused to create a state regulator for the press because it didn’t want to be seen to be controlling free expression. We are skeptical that state regulation is the right approach.”

UK startup policy advocacy group Coadec was also quick to voice concerns — warning that the government’s plans will “entrench the tech giants, not punish them”.

“The vast scope of the proposals means they cover not just social media but virtually the entire internet – from file sharing to newspaper comment sections. Those most impacted will not be the tech giants the Government claims they are targeting, but everyone else. It will benefit the largest platforms with the resources and legal might to comply – and restrict the ability of British startups to compete fairly,” said Coadec executive director Dom Hallas in a statement. 

“There is a reason that Mark Zuckerberg has called for more regulation. It is in Facebook’s business interest.”

UK startup industry association, techUK, also put out a response statement that warns about the need to avoid disproportionate impacts.

“Some of the key pillars of the Government’s approach remain too vague,” said Vinous Ali, head of policy at techUK. “It is vital that the new framework is effective, proportionate and predictable. Clear legal definitions that allow companies in scope to understand the law and therefore act quickly and with confidence will be key to the success of the new system.

“Not all of the legitimate concerns about online harms can be addressed through regulation. The new framework must be complemented by renewed efforts to ensure children, young people and adults alike have the skills and awareness to navigate the digital world safely and securely.”

The government has launched a 12-week consultation on the proposals, after which it says it will set out the action it will take in developing its final proposals for legislation.

Last month a House of Lords committee recommended an overarching super regulator be established to plug any gaps and/or handle overlaps in rules on Internet platforms, arguing that “a new framework for regulatory action” is needed to handle the digital world.

Though the government appears confident at this stage that an Internet regulator will be able to navigate any legislative patchwork and keep tech firms in line on its own.

The House of Lords committee was another that came down in support of a statutory duty of care for online services hosting user-generated content, suggesting it should have a special focus on children and “the vulnerable in society”. And there’s no doubt the concept of regulating Internet platforms has broad consensus among UK politicians — on both sides of the aisle.

But how to do that effectively and proportionately is another matter.

We reached out to Facebook and Google for a response to the White Paper.

Commenting on the Online Harms White Paper in a statement, Rebecca Stimson, Facebook’s head of UK public policy, said: “New rules for the internet should protect society from harm while also supporting innovation, the digital economy and freedom of speech. These are complex issues to get right and we look forward to working with the Government and Parliament to ensure new regulations are effective.”

Stimson also reiterated how Facebook has expanded the number of staff it has working on trust and safety issues to 30,000 in recent years, as well as claiming it’s invested heavily in technology to help prevent abuse — while conceding that “we know there is much more to do”.

Last month the company revealed shortcomings with its safety measures around livestreaming, after it emerged that a massacre in Christchurch, New Zealand, which was livestreamed to Facebook’s platform, had not been flagged for accelerated review by moderators because it was not tagged as suicide-related content.

Facebook said it would be “learning” from the incident and “re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review”.

In its response to the UK government White Paper today, Stimson added: “The internet has transformed how billions of people live, work and connect with each other, but new forms of communication also bring huge challenges. We have responsibilities to keep people safe on our services and we share the government’s commitment to tackling harmful content online. As Mark Zuckerberg said last month, new regulations are needed so that we have a standardised approach across platforms and private companies aren’t making so many important decisions alone.”

YouTube tightens restrictions on channel of UK far right activist — but no ban

YouTube has placed new restrictions on the channel of a UK far right activist which are intended to make hate speech less easy to discover on its platform.

Restrictions on Stephen Yaxley-Lennon’s YouTube channel include removing some of his videos from recommendations. YouTube is also taking away his ability to livestream to his channel’s now close to 390,000 subscribers.

Yaxley-Lennon, who goes by the name ‘Tommy Robinson’ on social media, was banned from Twitter a year ago.

Buzzfeed first reported the new restrictions. A YouTube spokesperson confirmed the shift in policy, telling us: “After consulting with third party experts, we are applying a tougher treatment to Tommy Robinson’s channel in keeping with our policies on borderline content. The content will be placed behind an interstitial, removed from recommendations, and stripped of key features including livestreaming, comments, suggested videos, and likes.”

Test searches for ‘Tommy Robinson’ on YouTube now return a series of news reports — instead of Yaxley-Lennon’s own channel, as was the case just last month.

YouTube had already demonetized Yaxley-Lennon’s channel back in January for violating its ad policies.

But as we reported last month Google has been under increasing political pressure in the UK to tighten its policies over the far right activist.

The policy shift applies to videos uploaded by Yaxley-Lennon that aren’t illegal or otherwise in breach of YouTube’s community standards (as the company applies them) but which have nonetheless been flagged by users as potential violations of the platform’s policies on hate speech and violent extremism.

In such instances YouTube says it will review the videos and those not in violation of its policies but which nonetheless contain controversial religious or extremist content will be placed behind an interstitial, removed from recommendations, and stripped of key features including comments, suggested videos, and likes.

Such videos will also not be eligible for monetization.

The company says its goal with the stricter approach to Yaxley-Lennon’s content is to strike a balance between upholding free expression and a point of public and historic record, while also keeping hateful content from being spread or recommended to others.

YouTube said it carefully considered Yaxley-Lennon’s case — consulting with external experts and UK academics — before deciding it needed to apply tougher treatment.

Affected videos will still remain on YouTube — albeit behind an interstitial. They also won’t be recommended, and will be stripped of the usual social features including comments, suggested videos, and likes.

Of course it remains to be seen how tightly YouTube will apply the new more restrictive policy in this case. And whether Yaxley-Lennon himself will adapt his video strategy to work around tighter rules on that channel.

The far right is very well versed in using coded language and dog whistle tactics to communicate with its followers and spread racist messages under the mainstream radar.

Yaxley-Lennon has had a presence on multiple social media channels, adapting the content to the different platforms. Though YouTube is the last mainstream channel still available to him after Facebook kicked him off its platform in February. Albeit, he was quickly able to work around Facebook’s ban simply by using a friend’s Facebook account to livestream himself harassing a journalist at his home late at night.

Police were called out twice in that instance. And in a vlog uploaded to YouTube after the incident Yaxley-Lennon threatened other journalists to “expect a knock at the door”.

Shortly afterwards the deputy leader of the official opposition raised his use of YouTube to livestream harassment in parliament, telling MPs then that: “Every major social media platform other than YouTube has taken down Stephen Yaxley-Lennon’s profile because of his hateful conduct.”

The secretary of state for digital, Jeremy Wright, responded by urging YouTube to “reconsider their judgement” — saying: “We all believe in freedom of speech. But we all believe too that that freedom of speech has limits. And we believe that those who seek to intimidate others, to potentially of course break the law… that is unacceptable. That is beyond the reach of the type of freedom of speech that we believe should be protected.”

YouTube claims it removes videos that violate its hate speech and violent content policies. But in previous instances involving Yaxley-Lennon it has told us that specific videos of his — including the livestreamed harassment that was raised in parliament — do not constitute a breach of its standards.

It’s now essentially admitting that those standards are too weak in instances of weaponized hate.

Yaxley-Lennon, a former member of the neo-Nazi British National Party and one of the founders of the far right, Islamophobic English Defence League, has used social media to amplify his message of hate while also soliciting donations to fund individual far right ‘activism’ — under the ‘Tommy Robinson’ moniker.

The new YouTube restrictions could reduce his ability to leverage the breadth of Google’s social platform to reach a wider and more mainstream audience than he otherwise would.

Albeit, it remains trivially easy for anyone who already knows the ‘Tommy Robinson’ ‘brand’ to work around the YouTube restrictions by using another mainstream Google-owned technology. A simple Google search for “Tommy Robinson YouTube channel” returns direct links to his channel and content at the top of search results.

Yaxley-Lennon’s followers will also continue to be able to find and share his YouTube content by sharing direct links to it — including on mainstream social platforms.

Though the livestream ban is a significant restriction — if it’s universally applied to the channel — which will make it harder for Yaxley-Lennon to communicate instantly at a distance with followers in his emotive vlogging medium of choice.

He has used the livestreaming medium skilfully to amplify and whip up hate while presenting himself to his followers as a family man afraid for his wife and children. (For the record: Yaxley-Lennon’s criminal record includes convictions for violence, public order offences, drug possession, financial and immigration frauds, among other convictions.)

If Google is hoping to please everyone by applying a ‘third route’ of tighter restrictions for a hate speech weaponizer yet no total ban it will likely just end up pleasing no one and taking flak from both sides.

The company does point out it removes channels of proscribed groups and any individuals formally linked to such groups. And in this case the related far right groups have not been proscribed by the UK government. So the UK government could certainly do much more to check the rise of domestic far right hate.

But YouTube could also step up and take a leadership position by setting robust policies against individuals who seek to weaponize hate.

Instead it continues to fiddle around the edges — trying to fudge the issue by claiming it’s about ‘balancing’ speech and community safety.

In truth hate speech suppresses the speech of those it targets with harassment. So if social networks really want to maximize free speech across their communities they have to be prepared to weed out bad actors who would shrink the speech of minorities by weaponizing hate against them.