
UK sets out safety-focused plan to regulate Internet firms

The UK government has laid out proposals to regulate online and social media platforms, setting out the substance of its long-awaited White Paper on online harms today — and kicking off a public consultation.

The Online Harms White Paper is a joint proposal from the Department for Digital, Culture, Media and Sport (DCMS) and Home Office.

It follows the government’s announcement of its policy intent last May, and a string of domestic calls for greater regulation of the Internet as politicians have responded to rising concern about the mental health impacts of online content.

The government is now proposing to put a mandatory duty of care on platforms to take reasonable steps to protect their users from a range of harms — including but not limited to illegal material, such as terrorist content and child sexual exploitation and abuse, which will be covered by further stringent requirements under the plan.

The approach is also intended to address a range of content and activity that’s deemed harmful.

Examples provided by the government of the sorts of broader harms it’s targeting include inciting violence and violent content; encouraging suicide; disinformation; cyberbullying; and inappropriate material being accessed by children.

Content promoting suicide has been thrown into the public spotlight in the UK in recent months, following media reports about a schoolgirl whose family found out she had been viewing pro-suicide content on Instagram after she killed herself.

The Facebook-owned platform subsequently agreed to change its policies towards suicide content, saying it would start censoring graphic images of self-harm, after pressure from ministers.

Commenting on the publication of the White Paper today, digital secretary Jeremy Wright said: “The era of self-regulation for online companies is over. Voluntary actions from industry to tackle online harms have not been applied consistently or gone far enough. Tech can be an incredible force for good and we want the sector to be part of the solution in protecting their users. However those that fail to do this will face tough action.

“We want the UK to be the safest place in the world to go online, and the best place to start and grow a digital business and our proposals for new laws will help make sure everyone in our country can enjoy the Internet safely.”

In another supporting statement Home Secretary Sajid Javid added: “The tech giants and social media companies have a moral duty to protect the young people they profit from. Despite our repeated calls to action, harmful and illegal content – including child abuse and terrorism – is still too readily available online.

“That is why we are forcing these firms to clean up their act once and for all. I made it my mission to protect our young people – and we are now delivering on that promise.”

Children’s charity, the NSPCC, was among the sector bodies welcoming the proposal.

“This is a hugely significant commitment by the Government that once enacted, can make the UK a world pioneer in protecting children online,” wrote CEO Peter Wanless in a statement.

“For too long social networks have failed to prioritise children’s safety and left them exposed to grooming, abuse, and harmful content. So it’s high time they were forced to act through this legally binding duty to protect children, backed up with hefty punishments if they fail to do so.”

The Internet Watch Foundation, which works to stop the spread of child exploitation imagery online, meanwhile warned against unintended consequences from badly planned legislation — and urged the government to take a “balanced approach”.

The proposed laws would apply to any company that allows users to share or discover user-generated content or interact with each other online — meaning companies both big and small.

Nor is it just social media platforms: file hosting sites, public discussion forums, messaging services, and search engines are also among those falling under the planned law’s remit.

The government says a new independent regulator will be introduced to ensure Internet companies meet their responsibilities, with ministers consulting on whether this should be a new or existing body.

Telecoms regulator Ofcom has been rumored as one possible contender, though the UK’s data watchdog, the ICO, has previously suggested it should be involved in any Internet oversight given its responsibility for data protection and privacy. (According to the FT a hybrid entity combining the two is another possibility — although it reports that the government remains genuinely undecided on who the regulator will be.)

The future Internet watchdog will be funded by industry in the medium term, with the government saying it’s exploring options such as an industry levy to put it on a sustainable footing.

On the enforcement front, the watchdog will be armed with a range of tools — with the government consulting on powers for it to issue substantial fines; block access to sites; and potentially to impose liability on individual members of senior management.

So there’s at least the prospect of a high profile social media CEO being threatened with UK jail time in future if they don’t do enough to remove harmful content.

On the financial penalties front, Wright suggested during an interview on Sky News that the government is entertaining GDPR-level fines of as much as 4% of a company’s annual global turnover.
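For a sense of scale, here’s a back-of-the-envelope illustration of what a GDPR-style maximum penalty could mean — using an invented turnover figure, since no specific company or fine has been named:

```python
# Hypothetical illustration only: a GDPR-style cap of 4% of annual global turnover.
annual_global_turnover = 50_000_000_000   # invented example: $50BN in turnover
max_fine = annual_global_turnover * 0.04  # the 4% ceiling Wright floated
print(f"Maximum fine: ${max_fine:,.0f}")  # -> Maximum fine: $2,000,000,000
```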

Other elements of the proposed framework include giving the regulator the power to force tech companies to publish annual transparency reports on the amount of harmful content on their platforms and what they are doing to address it; to compel companies to respond to users’ complaints and act to address them quickly; and to comply with codes of practice issued by the regulator, such as requirements to minimise the spread of misleading and harmful disinformation with dedicated fact checkers, particularly during election periods.

A long-running enquiry by a DCMS parliamentary committee into online disinformation last year, which was continuously frustrated in its attempts to get Facebook founder Mark Zuckerberg to testify before it, concluded with a laundry list of recommendations for tightening regulations around digital campaigning.

The committee also recommended clear legal liabilities for tech companies to act against “harmful or illegal content”, and suggested a levy on tech firms to support enhanced regulation.

Responding to the government’s White Paper in a statement today, DCMS committee chair Damian Collins broadly welcomed the government’s proposals — though he also pressed for the future regulator to have the power to conduct its own investigations, rather than relying on self-reporting by tech firms.

“We need a clear definition of how quickly social media companies should be required to take down harmful content, and this should include not only when it is referred to them by users, but also when it is easily within their power to discover this content for themselves,” Collins wrote.

“The regulator should also give guidance on the responsibilities of social media companies to ensure that their algorithms are not consistently directing users to harmful content.”

Another element of the government’s proposal is a “Safety by Design” framework that’s intended to help companies incorporate online safety features in new apps and platforms from the start.

The government also wants the regulator to head up a media literacy strategy that’s intended to equip people with the knowledge to recognise and deal with a range of deceptive and malicious behaviours online, such as catfishing, grooming and extremism.

It writes that the UK is committed to a free, open and secure Internet — and makes a point of noting that the watchdog will have a legal duty to pay “due regard” to innovation, and also to protect users’ rights online, being particularly mindful not to infringe privacy and freedom of expression.

It therefore suggests technology will be an integral part of any solution, saying the proposals are designed to promote a culture of continuous improvement among companies — and highlighting technologies such as Google’s “Family Link” and Apple’s Screen Time app as examples of the sorts of developments it wants the policy framework to encourage.

Such caveats are unlikely to do much to reassure those concerned that the approach will chill online speech, and/or place an impossible burden on smaller firms with fewer resources to monitor what their users are doing.

“The government’s proposals would create state regulation of the speech of millions of British citizens,” warns digital and civil rights group the Open Rights Group, in a statement by its executive director Jim Killock. “We have to expect that the duty of care will end up widely drawn with serious implications for legal content, that is deemed potentially risky, whether it really is or not.

“The government refused to create a state regulator for the press because it didn’t want to be seen to be controlling free expression. We are skeptical that state regulation is the right approach.”

UK startup policy advocacy group Coadec was also quick to voice concerns — warning that the government’s plans will “entrench the tech giants, not punish them”.

“The vast scope of the proposals means they cover not just social media but virtually the entire internet – from file sharing to newspaper comment sections. Those most impacted will not be the tech giants the Government claims they are targeting, but everyone else. It will benefit the largest platforms with the resources and legal might to comply – and restrict the ability of British startups to compete fairly,” said Coadec executive director Dom Hallas in a statement. 

“There is a reason that Mark Zuckerberg has called for more regulation. It is in Facebook’s business interest.”

UK tech industry association techUK also put out a response statement that warns about the need to avoid disproportionate impacts.

“Some of the key pillars of the Government’s approach remain too vague,” said Vinous Ali, head of policy at techUK. “It is vital that the new framework is effective, proportionate and predictable. Clear legal definitions that allow companies in scope to understand the law and therefore act quickly and with confidence will be key to the success of the new system.

“Not all of the legitimate concerns about online harms can be addressed through regulation. The new framework must be complemented by renewed efforts to ensure children, young people and adults alike have the skills and awareness to navigate the digital world safely and securely.”

The government has launched a 12-week consultation on the proposals, after which it says it will set out the action it will take in developing its final proposals for legislation.

Last month a House of Lords committee recommended an overarching super regulator be established to plug any gaps and/or handle overlaps in rules on Internet platforms, arguing that “a new framework for regulatory action” is needed to handle the digital world.

Though the government appears confident at this stage that an Internet regulator will be able to navigate any legislative patchwork and keep tech firms in line on its own.

The House of Lords committee was another that came down in support of a statutory duty of care for online services hosting user-generated content, suggesting it should have a special focus on children and “the vulnerable in society”. And there’s no doubt the concept of regulating Internet platforms has broad consensus among UK politicians — on both sides of the aisle.

But how to do that effectively and proportionately is another matter.

We reached out to Facebook and Google for a response to the White Paper.

Commenting on the Online Harms White Paper in a statement, Rebecca Stimson, Facebook’s head of UK public policy, said: “New rules for the internet should protect society from harm while also supporting innovation, the digital economy and freedom of speech. These are complex issues to get right and we look forward to working with the Government and Parliament to ensure new regulations are effective.”

Stimson also reiterated how Facebook has expanded the number of staff it has working on trust and safety issues to 30,000 in recent years, as well as claiming it’s invested heavily in technology to help prevent abuse — while conceding that “we know there is much more to do”.

Last month the company revealed shortcomings in its safety measures around livestreaming, after it emerged that a massacre in Christchurch, New Zealand, which was livestreamed to Facebook’s platform, had not been flagged for accelerated review by moderators because it was not tagged as suicide-related content.

Facebook said it would be “learning” from the incident and “re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review”.

In its response to the UK government White Paper today, Stimson added: “The internet has transformed how billions of people live, work and connect with each other, but new forms of communication also bring huge challenges. We have responsibilities to keep people safe on our services and we share the government’s commitment to tackling harmful content online. As Mark Zuckerberg said last month, new regulations are needed so that we have a standardised approach across platforms and private companies aren’t making so many important decisions alone.”

YouTube tightens restrictions on channel of UK far right activist — but no ban

YouTube has placed new restrictions on the channel of a UK far right activist which are intended to make hate speech harder to discover on its platform.

Restrictions on Stephen Yaxley-Lennon’s YouTube channel include removing some of his videos from recommendations. YouTube is also taking away his ability to livestream to his close to 390,000 channel subscribers.

Yaxley-Lennon, who goes by the name ‘Tommy Robinson’ on social media, was banned from Twitter a year ago.

BuzzFeed first reported the new restrictions. A YouTube spokesperson confirmed the shift in policy, telling us: “After consulting with third party experts, we are applying a tougher treatment to Tommy Robinson’s channel in keeping with our policies on borderline content. The content will be placed behind an interstitial, removed from recommendations, and stripped of key features including livestreaming, comments, suggested videos, and likes.”

Test searches for ‘Tommy Robinson’ on YouTube now return a series of news reports — instead of Yaxley-Lennon’s own channel, as was the case just last month.

YouTube had already demonetized Yaxley-Lennon’s channel back in January for violating its ad policies.

But as we reported last month, Google has been under increasing political pressure in the UK to tighten its policies over the far right activist.

The policy shift applies to videos uploaded by Yaxley-Lennon that aren’t illegal or otherwise in breach of YouTube’s community standards (as the company applies them) but which have nonetheless been flagged by users as potential violations of the platform’s policies on hate speech and violent extremism.

In such instances YouTube says it will review the videos and those not in violation of its policies but which nonetheless contain controversial religious or extremist content will be placed behind an interstitial, removed from recommendations, and stripped of key features including comments, suggested videos, and likes.

Such videos will also not be eligible for monetization.

The company says its goal with the stricter approach to Yaxley-Lennon’s content is to strike a balance between upholding free expression and a point of public and historic record, and keeping hateful content from being spread or recommended to others.

YouTube said it carefully considered Yaxley-Lennon’s case — consulting with external experts and UK academics — before deciding it needed to apply tougher treatment.

Affected videos will remain on YouTube — albeit behind an interstitial. They also won’t be recommended, and will be stripped of the usual social features, including comments, suggested videos, and likes.

Of course it remains to be seen how tightly YouTube will apply the new, more restrictive policy in this case — and whether Yaxley-Lennon himself will adapt his video strategy to work around tighter rules on that channel.

The far right is very well versed in using coded language and dog whistle tactics to communicate with its followers and spread racist messages under the mainstream radar.

Yaxley-Lennon has had a presence on multiple social media channels, adapting the content to the different platforms. Though YouTube is the last mainstream channel still available to him after Facebook kicked him off its platform in February. Albeit, he was quickly able to work around Facebook’s ban simply by using a friend’s Facebook account to livestream himself harassing a journalist at his home late at night.

Police were called out twice in that instance. And in a vlog uploaded to YouTube after the incident Yaxley-Lennon threatened other journalists to “expect a knock at the door”.

Shortly afterwards the deputy leader of the official opposition raised his use of YouTube to livestream harassment in parliament, telling MPs then that: “Every major social media platform other than YouTube has taken down Stephen Yaxley-Lennon’s profile because of his hateful conduct.”

The secretary of state for digital, Jeremy Wright, responded by urging YouTube to “reconsider their judgement” — saying: “We all believe in freedom of speech. But we all believe too that that freedom of speech has limits. And we believe that those who seek to intimidate others, to potentially of course break the law… that is unacceptable. That is beyond the reach of the type of freedom of speech that we believe should be protected.”

YouTube claims it removes videos that violate its hate speech and violent content policies. But in previous instances involving Yaxley-Lennon it has told us that specific videos of his — including the livestreamed harassment that was raised in parliament — do not constitute a breach of its standards.

It’s now essentially admitting that those standards are too weak in instances of weaponized hate.

Yaxley-Lennon, a former member of the neo-Nazi British National Party and one of the founders of the far right, Islamophobic English Defence League, has used social media to amplify his message of hate while also soliciting donations to fund individual far right ‘activism’ — under the ‘Tommy Robinson’ moniker.

The new YouTube restrictions could reduce his ability to leverage the breadth of Google’s social platform to reach a wider and more mainstream audience than he otherwise would.

Albeit, it remains trivially easy for anyone who already knows the ‘Tommy Robinson’ ‘brand’ to work around the YouTube restrictions by using another mainstream Google-owned technology. A simple Google search for “Tommy Robinson YouTube channel” returns direct links to his channel and content at the top of search results.

Yaxley-Lennon’s followers will also continue to be able to find and share his YouTube content by sharing direct links to it — including on mainstream social platforms.

Though the livestream ban is a significant restriction — if it’s universally applied to the channel — which will make it harder for Yaxley-Lennon to communicate instantly at a distance with followers in his emotive vlogging medium of choice.

He has used the livestreaming medium skilfully to amplify and whip up hate while presenting himself to his followers as a family man afraid for his wife and children. (For the record: Yaxley-Lennon’s criminal record includes convictions for violence, public order offences, drug possession, financial and immigration frauds, among other convictions.)

If Google is hoping to please everyone by applying a ‘third route’ of tighter restrictions for a hate speech weaponizer yet no total ban, it will likely just end up pleasing no one and taking flak from both sides.

The company does point out it removes channels of proscribed groups and any individuals formally linked to such groups. And in this case the related far right groups have not been proscribed by the UK government. So the UK government could certainly do much more to check the rise of domestic far right hate.

But YouTube could also step up and take a leadership position by setting robust policies against individuals who seek to weaponize hate.

Instead it continues to fiddle around the edges — trying to fudge the issue by claiming it’s about ‘balancing’ speech and community safety.

In truth hate speech suppresses the speech of those it targets with harassment. So if social networks really want to maximize free speech across their communities they have to be prepared to weed out bad actors who would shrink the speech of minorities by weaponizing hate against them.

UK report blasts Huawei for network security incompetence

The latest report by a UK oversight body set up to evaluate Chinese networking giant Huawei’s approach to security has dialled up pressure on the company, giving a damning assessment of what it describes as “serious and systematic defects” in its software engineering and cyber security competence.

Although the report falls short of calling for an outright ban on Huawei equipment in domestic networks — an option U.S. president Trump continues dangling across the pond.

The report, prepared for the National Security Advisor of the UK by the Huawei Cyber Security Evaluation Centre (HCSEC) Oversight Board, also identifies new “significant technical issues” which it says lead to new risks for UK telecommunications networks using Huawei kit.

The HCSEC was set up by Huawei in 2010, under what the oversight board couches as “a set of arrangements with the UK government”, to provide information to state agencies on its products and strategies in order that security risks could be evaluated.

And last year, under pressure from UK security agencies concerned about technical deficiencies in its products, Huawei pledged to spend $2BN to try to address long-running concerns about its products in the country.

But the report throws doubt on its ability to address UK concerns — with the board writing that it has “not yet seen anything to give it confidence in Huawei’s capacity to successfully complete the elements of its transformation programme that it has proposed as a means of addressing these underlying defects”.

So it sounds like $2BN isn’t going to be nearly enough to fix Huawei’s security problem in just one European country.

The board also writes that it will require “sustained evidence” of better software engineering and cyber security “quality”, verified by HCSEC and the UK’s National Cyber Security Centre (NCSC), if there’s to be any possibility of it reaching a different assessment of the company’s ability to reboot its security credentials.

Another damning assessment contained in the report is that Huawei has made “no material progress” on issues raised by last year’s report.

All the issues identified by the security evaluation process relate to “basic engineering competence and cyber security hygiene”, which the board notes gives rise to vulnerabilities capable of being exploited by “a range of actors”.

It adds that the NCSC does not believe the defects found are a result of Chinese state interference.

This year’s report is the fifth the oversight board has produced since it was established in 2014, and it comes at a time of acute scrutiny for Huawei, as 5G network rollouts are ramping up globally — pushing governments to address head on suspicions attached to the Chinese giant and consider whether to trust it with critical next-gen infrastructure.

“The Oversight Board advises that it will be difficult to appropriately risk-manage future products in the context of UK deployments, until the underlying defects in Huawei’s software engineering and cyber security processes are remediated,” the report warns in one of several key conclusions that make very uncomfortable reading for Huawei.

“Overall, the Oversight Board can only provide limited assurance that all risks to UK national security from Huawei’s involvement in the UK’s critical networks can be sufficiently mitigated long-term,” it adds in summary.

Reached for its response to the report, a Huawei UK spokesperson sent us a statement in which it describes the $2BN earmarked for security improvements related to UK products as an “initial budget”.

It writes:

The 2019 OB [oversight board] report details some concerns about Huawei’s software engineering capabilities. We understand these concerns and take them very seriously. The issues identified in the OB report provide vital input for the ongoing transformation of our software engineering capabilities. In November last year Huawei’s Board of Directors issued a resolution to carry out a companywide transformation programme aimed at enhancing our software engineering capabilities, with an initial budget of US$2BN.

A high-level plan for the programme has been developed and we will continue to work with UK operators and the NCSC during its implementation to meet the requirements created as cloud, digitization, and software-defined everything become more prevalent. To ensure the ongoing security of global telecom networks, the industry, regulators, and governments need to work together on higher common standards for cyber security assurance and evaluation.

Seeking to find something positive to salvage from the report’s savaging, Huawei suggests it demonstrates the continued effectiveness of the HCSEC as a structure to evaluate and mitigate security risk — flagging the board’s description of the process as “arguably the toughest and most rigorous in the world”, and claiming this shows there has at least been no increase in the vulnerability of UK networks since the last report.

Though the report does identify new issues that open up fresh problems — albeit the underlying issues were presumably there last year too, just lying undiscovered.

The board’s withering assessment certainly amps up the pressure on Huawei which has been aggressively battling U.S.-led suspicion of its kit — claiming in a telecoms conference speech last month that “the U.S. security accusation of our 5G has no evidence”, for instance.

At the same time it has been appealing for the industry to work together to come up with collective processes for evaluating the security and trustworthiness of network kit.

And earlier this month it opened another cyber security transparency center — this time at the heart of Europe in Brussels, where the company has been lobbying policymakers to help establish security standards to foster collective trust. Though there’s little doubt that’s a long game.

Meanwhile, critics of Huawei can now point to impatience rising in the UK, despite comments by the head of the NCSC, Ciaran Martin, last month — who said then that security agencies believe the risk of using Huawei kit can be managed, suggesting the government won’t push for an outright ban.

The report does not literally overturn that view but it does blast out a very loud and alarming warning about the difficulty for UK operators to “appropriately” risk-manage what’s branded defective and vulnerable Huawei kit. Including flagging the risk of future products — which the board suggests will be increasingly complex to manage. All of which could well just push operators to seek alternatives.

On the mitigation front, the board writes that — “in extremis” — the NCSC could order Huawei to carry out specific fixes for equipment currently installed in the UK. Though it also warns that such a step would be difficult, and could for example require hardware replacement which may not mesh with operators’ “natural” asset management and upgrade cycles, emphasizing that it does not offer a sustainable solution to the underlying technical issues.

“Given both the shortfalls in good software engineering and cyber security practice and the currently unknown trajectory of Huawei’s R&D processes through their announced transformation plan, it is highly likely that security risk management of products that are new to the UK or new major releases of software for products currently in the UK will be more difficult,” the board writes in a concluding section discussing the UK national security risk.

“On the basis of the work already carried out by HCSEC, the NCSC considers it highly likely that there would be new software engineering and cyber security issues in products HCSEC has not yet examined.”

It also describes the number and severity of vulnerabilities plus architectural and build issues discovered by a relatively small team in the HCSEC as “a particular concern”.

“If an attacker has knowledge of these vulnerabilities and sufficient access to exploit them, they may be able to affect the operation of the network, in some cases causing it to cease operating correctly,” it warns. “Other impacts could include being able to access user traffic or reconfiguration of the network elements.”

In another section on mitigating risks of using Huawei kit, the board notes that “architectural controls” in place in most UK operators can limit the ability of attackers to exploit any vulnerable network elements not explicitly exposed to the public Internet — adding that such controls, combined with good opsec generally, will “remain critically important in the coming years to manage the residual risks caused by the engineering defects identified”.

In other highlights from the report the board does have some positive things to say, writing that an NCSC technical review of its capabilities showed improvements in 2018, while another independent audit of HCSEC’s ability to operate independently of Huawei HQ once again found “no high or medium priority findings”.

“The audit report identified one low-rated finding, relating to delivery of information and equipment within agreed Service Level Agreements. Ernst & Young concluded that there were no major concerns and the Oversight Board is satisfied that HCSEC is operating in line with the 2010 arrangements between HMG and the company,” it further notes.

Last month the European Commission said it was preparing to step in to ensure a “common approach” across the European Union where 5G network security is concerned — warning of the risk of fragmentation across the single market. Though it has so far steered clear of any bans.

Earlier this week it issued a set of recommendations for Member States, combining legislative and policy measures to assess 5G network security risks and help strengthen preventive measures.

Among the operational measures it suggests Member States take is to complete a national risk assessment of 5G network infrastructures by the end of June 2019, and follow that by updating existing security requirements for network providers — including conditions for ensuring the security of public networks.

“These measures should include reinforced obligations on suppliers and operators to ensure the security of the networks,” it recommends. “The national risk assessments and measures should consider various risk factors, such as technical risks and risks linked to the behaviour of suppliers or operators, including those from third countries. National risk assessments will be a central element towards building a coordinated EU risk assessment.”  

At an EU level the Commission said Member States should share information on network security, saying this “coordinated work should support Member States’ actions at national level and provide guidance to the Commission for possible further steps at EU level” — leaving the door open for further action.

While the EU’s executive body has not pushed for a pan-EU ban on any 5G vendors it did restate Member States’ right to exclude companies from their markets for national security reasons if they fail to comply with their own standards and legal framework.

Snap is under NDA with UK Home Office discussing how to centralize age checks online

Snap is under NDA with the UK’s Home Office as part of a working group tasked with coming up with more robust age verification technology that’s able to reliably identify children online.

The detail emerged during a parliamentary committee hearing, as MPs on the Digital, Culture, Media and Sport (DCMS) committee questioned Stephen Collins, Snap’s senior director for public policy international, and Will Scougal, director of creative strategy EMEA.

A spokesman in the Home Office press office hadn’t immediately heard of any discussions with the messaging company on the topic of age verification. But we’ll update this story with any additional context on the department’s plans if more info is forthcoming.

Under questioning by the committee, Snap conceded its current age verification systems are not able to prevent under-13s from signing up to use its messaging platform.

The DCMS committee’s interest here stems from the enquiry it’s running into immersive and addictive technologies.

Snap admitted that the most popular means of signing up to its app (i.e. on mobile) is where its age verification system is weakest, with Collins saying it had no ability to drop a cookie to keep track of mobile users to try to prevent repeat attempts to get around its age gate.
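To make the cookie point concrete, here’s a minimal sketch of how a web sign-up flow can use a cookie to spot repeat attempts at an age gate — the mechanism Collins says Snap can’t replicate on mobile. All names and logic here are illustrative assumptions, not Snap’s actual implementation:

```python
from datetime import date

# Illustrative only — not Snap's actual system.
AGE_GATE_COOKIE = "age_gate_failed"  # hypothetical cookie name
MIN_AGE = 13

def is_old_enough(birth_date: date, today: date) -> bool:
    """Return True if the supplied date of birth meets the minimum age."""
    years = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    return years >= MIN_AGE

def handle_signup(birth_date: date, cookies: dict) -> tuple[bool, dict]:
    """Reject a sign-up if the age check fails OR a prior failure was recorded
    in a cookie — catching the 'try again with a new birthday' retry that a
    mobile app flow can't detect without some equivalent persistent marker."""
    if cookies.get(AGE_GATE_COOKIE) == "1":
        return False, cookies  # a previous attempt on this browser already failed
    if not is_old_enough(birth_date, date.today()):
        cookies[AGE_GATE_COOKIE] = "1"  # remember the failure for future attempts
        return False, cookies
    return True, cookies
```

Without that persistent marker, a rejected under-age user can simply re-enter a different birth date — which is exactly the gap Collins described in the mobile sign-up flow.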

But he emphasized Snap does not want underage users on its platform.

“That brings us no advantage, that brings us no commercial benefit at all,” he said. “We want to make it an enjoyable place for everybody using the platform.”

He also said Snap analyzes patterns of user behavior to try to identify underage users — investigating accounts and banning those which are “clearly” determined not to be old enough to use the service.

But he conceded there’s currently “no foolproof way” to prevent under-13s from signing up.

Discussing alternative approaches to verifying kids’ age online the Snap policy staffer agreed parental consent approaches are trivially easy for children to circumvent — such as by setting up spoof email accounts or taking a photo of a parent’s passport or credit card to use for verification.

Facebook is one such company, relying on a ‘parental consent’ system to ‘verify’ the age of teen users — though, as we’ve previously reported, it’s trivially easy for kids to work around.

“I think the most sustainable solution will be some kind of central verification system,” Collins suggested, adding that such a system is “already being discussed” by government ministers.

“The home secretary has tasked the Home Office and related agencies to look into this — we’re part of that working group,” he continued.

“We actually met just yesterday. I can’t give you the details here because I’m under an NDA,” Collins added, suggesting Snap could send the committee details in writing.

“I think it’s a serious attempt to really come to a proper conclusion — a fitting conclusion to this kind of conundrum that’s been there, actually, for a long time.”

“There needs to be a robust age verification system that we can all get behind,” he added.

The UK government is expected to publish a White Paper setting out its policy ideas for regulating social media and safety before the end of the winter.

The detail of its policy plans remains under wraps, so it’s unclear whether the Home Office intends to include setting up a centralized system of online age verification for robustly identifying kids on social media platforms as part of its safety-focused regulation. But much of the debate driving the planned legislation has fixed on content risks for kids online.

Such a step would also not be the first time UK ministers have pushed the envelope around online age verification.

A controversial system of age checks for viewing adult content is due to come into force shortly in the UK under the Digital Economy Act — albeit, after a lengthy delay. (And ignoring all the hand-wringing about privacy and security risks; not to mention the fact age checks will likely be trivially easy to dodge by those who know how to use a VPN etc, or via accessing adult content on social media.)

But a centralized database of children for age verification purposes — if that is indeed the lines along which the Home Office is thinking — sounds rather closer to Chinese government Internet controls.

Given that, in recent years, the Chinese state has been pushing games companies to age verify users to enforce limits on play time for kids (also apparently in response to health concerns around video gaming addiction).

The UK has also pushed to create centralized databases of web browsers’ activity for law enforcement purposes, under the 2016 Investigatory Powers Act. (Parts of which it’s had to rethink following legal challenges, with other legal challenges ongoing.)

In recent years it has also emerged that UK spy agencies maintain bulk databases of citizens — known as ‘bulk personal datasets‘ — regardless of whether a particular individual is suspected of a crime.

So building yet another database to contain children’s ages isn’t perhaps as off-piste as you might imagine for the country.

Returning to the DCMS committee’s enquiry, other questions for Snap from MPs included several critical ones related to its ‘streaks’ feature — whereby users who have been messaging each other regularly are encouraged not to stop the back and forth.

The parliamentarians raised constituent and industry concerns about the risk of peer pressure being piled on kids to keep the virtual streaks going.

Snap’s reps told the committee the feature is intended to be a “celebration” of close friendship, rather than being intentionally designed to make the platform sticky and so encourage stress.

Though they conceded users have no way to opt out of streak emoji appearing.

They also noted they have previously reduced the size of the streak emoji to make it less prominent.

But they added they would take concerns back to product teams and re-examine the feature in light of the criticism.

You can watch the full committee hearing with Snap here.

Competition policy must change to help startups fight ‘winner takes all’ platforms, says UK report

An independent report commissioned by the UK government to examine how competition policy needs to adapt for the digital age has concluded that tech giants don’t face adequate competition and the law needs updating to address what it dubs the “novel” challenges of ‘winner takes all’ platforms.

The panel also recommends more policy interventions to actively support startups, including a code of conduct for “the most significant digital platforms”; and measures to foster data portability, open standards and interoperability to help generate competitive momentum for rival innovations.

UK chancellor Philip Hammond announced the competition market review last summer, saying the government was committed to asking “the big questions about how we ensure these new digital markets work for everyone”.

The culmination of the review — a 150-page report, published today, entitled Unlocking digital competition — is the work of the government’s digital competition expert panel, which is chaired by former U.S. president Barack Obama’s chief economic advisor, professor Jason Furman.

“The digital sector has created substantial benefits but these have come at the cost of increasing dominance of a few companies which is limiting competition and consumer choice and innovation. Some say this is inevitable or even desirable. I think the UK can do better,” Furman said today in a statement.

In the report the panel writes that it believes competition policy should be “given the tools to tackle new challenges, not radically shifted away from its established basis”.

“In particular, policy should remain based on careful weighing of economic evidence and models,” they suggest, arguing also that “consumer welfare” remains the “appropriate perspective to motivate competition policy” — and rejecting the idea that a completely new approach is needed.

But, crucially, their view of consumer welfare is a broad church, not a narrow price trench — with the report asserting that a consumer welfare basis to competition law is able to also take account of other things, including (but also not limited to) “choice, quality and innovation”. 

Furman said the panel, which was established in September 2018, has outlined “a balanced proposal to give people more control over their data, give small businesses more of a chance to enter and thrive, and create more predictability for the large digital companies”.

“These recommendations will deliver an economic boost driven by UK tech start-ups and innovation that will give consumers greater choice and protection,” he argues.

Commenting on the report’s publication, Hammond said: “Competition is fundamental to ensuring the market works in the interest of consumers, but we know some tech giants are still accumulating too much power, preventing smaller businesses from entering the market,” adding that: “The work of Jason Furman and the expert panel is invaluable in ensuring we’re at the forefront of delivering a competitive digital marketplace.”

The chancellor said that the government will “carefully examine” the proposals and respond later this year — with a plan for implementing changes he said are necessary “to ensure our digital markets are competitive and consumers get the level of choice they deserve”.

Pro-startup regulation required

The panel rejects the view — most loudly propounded by tech giants and their lobbying vehicles — that competition is thriving online, ergo no competition policy changes are needed.

It also rejects the argument that digital platforms are “natural monopolies” and competition is impossible — dismissing the idea of imposing utility-like regulation, such as in the energy sector.

Instead, the panel writes that it sees “greater competition among digital platforms as not only necessary but also possible — provided the right policies are in place”. The biggest “missing set of policies” are ones that would “actively help foster competition”, it argues in the report’s introduction.

“Instead of just relying on traditional competition tools, the UK should take a forward-looking approach that creates and enforces a clear set of rules to limit anti-competitive actions by the most significant digital platforms while also reducing structural barriers that currently hinder effective competition,” the panel goes on to say, calling for new rules to tackle ‘winner take all’ tech platforms that are based on “generally agreed principles and developed into more specific codes of conduct with the participation of a wide range of stakeholders”. 

Coupled with active policy efforts to support startups and scale-ups — by making it easier for consumers to move their data across digital services; pushing for systems to be built around open standards; and for data held by tech giants to be made available for competitors — the suggested reforms would support a system that’s “more flexible, predictable and timely” than the current regime, they assert.

Among the panel’s specific recommendations is a call to set up a new digital markets unit with expertise in technology, economics and behavioural science, plus the legal powers to back it up.

The panel envisages this unit focusing on giving users more control over their data — to foster platform switching — as well as developing a code of competitive conduct that would apply to the largest platforms. “This would be applied only to particularly powerful companies, those deemed to have ‘strategic market status’, in order to avoid creating new burdens or barriers for smaller firms,” they write.

Another recommendation is to beef up regulators’ existing powers for tackling illegal anti-competitive practices — to make it quicker and simpler to prosecute breaches, with the report highlighting bullying tactics by market leaders as a current problem.

“There is nothing inherently wrong about being a large company or a monopoly and, in fact, in many cases this may reflect efficiencies and benefits for consumers or businesses. But dominant companies have a particular responsibility not to abuse their position by unfairly protecting, extending or exploiting it,” they write. “Existing antitrust enforcement, however, can often be slow, cumbersome, and unpredictable. This can be especially problematic in the fast-moving digital sector.

“That is why we are recommending changes that would enable more use of interim measures to prevent damage to competition while a case is ongoing, and adjusting appeal standards to balance protecting parties’ interests with the need for the competition authority to have usable tools and an appropriate margin of judgement. The goal is to place less reliance on large fines and drawn-out procedures, instead enabling faster action that more directly targets and remedies the problematic behavior.”

The expert panel also says changes to merger rules are required to enable the UK’s Competition and Markets Authority (CMA) to intervene to stop digital mergers that are likely to damage future competition, innovation and consumer choice — saying current decisions are too focused on short-term impacts.

“Over the last 10 years the 5 largest firms have made over 400 acquisitions globally. None has been blocked and very few have had conditions attached to approval, in the UK or elsewhere, or even been scrutinised by competition authorities,” they note.

More priority should be given to reviewing the potential implications of digital mergers, in their view.

Decisions on whether to approve mergers, by the CMA and other authorities, have often focused on short-term impacts. In dynamic digital markets, long-run effects are key to whether a merger will harm competition and consumers. Could the company that is being bought grow into a competitor to the platform? Is the source of its value an innovation that, under alternative ownership, could make the market less concentrated? Is it being bought for access to consumer data that will make the platform harder to challenge? In principle, all of these questions can inform merger decisions within the current, mainstream framework for competition, centred on consumer welfare. There is no need to shift away from this, or implement a blanket presumption against digital mergers, many of which may benefit consumers. Instead, these issues need to be considered more consistently and effectively in practice.

In part the CMA can achieve this through giving a higher priority to merger decisions in digital markets. These cases can be complex, but they affect markets that are critically important to consumers, providing services that shape the digital economy.

In another recommendation, which targets the Google-Facebook adtech duopoly, the report also calls for the CMA to launch a formal market study into the digital advertising market — which it notes suffers from a lack of transparency.

The panel also notes similar concerns raised by other recent reviews.

Digital advertising is increasingly driven by the use of consumers’ personal data for targeting. This in turn drives the competitive advantage for platforms able to learn more about more users’ identity, location and preferences. The market operates through a complex chain of advertising technology layers, where subsidiaries of the major platforms compete on opaque terms with third party businesses. This report joins the Cairncross Review and Digital, Culture, Media and Sport Committee in calling for the CMA to use its investigatory capabilities and powers to examine whether actors in these markets are operating appropriately to deliver effective competition and consumer benefit.

The report also calls for new powers to force the largest tech companies to open up to smaller firms by providing access to key data sets, albeit without infringing on individual privacy — citing Open Banking as a “notable” data mobility model that’s up and running.

“Open Banking provides an instructive example of how policy intervention can overcome technical and co-ordination challenges and misaligned incentives by creating an adequately funded body with the teeth to drive development and implementation by the nine largest financial institutions,” it suggests.
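For illustration only, here’s a rough sketch of the consent-gated, open-schema data-access pattern the panel is gesturing at — loosely modelled on Open Banking’s idea of token-scoped access to user data, with every name, field and schema invented for the example rather than drawn from any real platform’s API:

```python
# Purely hypothetical sketch of a standardised data-portability export.
import json
from dataclasses import dataclass, asdict

@dataclass
class PortableRecord:
    category: str  # e.g. "contacts", "posts", "ad_interests" (invented schema)
    payload: dict  # the user's data, expressed in an agreed open format

def export_user_data(user_id: str, consent_token: str | None,
                     records: list[PortableRecord]) -> str:
    """Serialise a user's data for transfer to a rival service,
    gated on an explicit consent token granted by the user."""
    if not consent_token:  # a real system would verify a signed, scoped token
        raise PermissionError("user consent required for data portability")
    return json.dumps({"user": user_id,
                       "records": [asdict(r) for r in records]})

# Example: a user moving their contacts to a competing platform.
export = export_user_data(
    "user-123", "consent-token-abc",
    [PortableRecord("contacts", {"handles": ["@alice", "@bob"]})],
)
```

The design point the panel stresses is less the plumbing than the governance: an adequately funded body, with teeth, to force the largest platforms to implement such open interfaces consistently.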

The panel urges the UK to engage internationally on the issue of digital regulation, writing that: “Many countries are considering policy changes in this area. The United Kingdom has the opportunity to lead by example, by helping to stimulate a global discussion that is based on the shared premise that competition is beneficial, competition is possible, but that we need to update our policies to protect and expand this competition for the sake of consumers and vibrant, dynamic economies.”

And in just one current example of the considerable chatter now going on around tech + competition, a House of Lords committee this week also recommended public interest tests for proposed tech mergers, and suggested an overarching digital regulator is needed to help plug legislative gaps and work through regulatory overlap.

Discussing the pros and cons of concentration in digital markets, the expert competition panel notes the efficiency and convenience that this dynamic can offer consumers and businesses, as well as potential gains via product innovation.

However the panel also points to what it says can be “substantial downsides” from digital market concentration, including erosion of consumer privacy; barriers to entry and scale for startups; and blocks to wider innovation, which it asserts can “outweigh any static benefits” — writing:

It can raise effective prices for consumers, reduce choice, or impact quality. Even when consumers do not have to pay anything for the service, it might have been that with more competition consumers would have given up less in terms of privacy or might even have been paid for their data. It can be harder for new companies to enter or scale up. Most concerning, it could impede innovation as larger companies have less to fear from new entrants and new entrants have a harder time bringing their products to market — creating a trade-off where the potential dynamic costs of concentration outweigh any static benefits.

The panel takes a clear view that “competition for the market cannot be counted on, by itself, to solve the problems associated with market tipping and ‘winner-takes-most’” — arguing that past regulatory interventions have helped shift market conditions, i.e. by facilitating the technology changes that created new markets and companies which led to dominant tech giants of old being unseated.

So, in other words, the panel believes government action can unlock market disruption — hence the report’s title — and that it’s too simplistic a narrative to claim technological change alone will reset markets.

For example, IBM’s dominance of hardware in the 1960s and early 1970s was rendered less important by the emergence of the PC and software. Microsoft’s dominance of operating systems and browsers gave way to a shift to the internet and an expansion of choice. But these changes were facilitated, in part, by government policy — in particular antitrust cases against these companies, without which the changes may never have happened.

The panel also argues there’s an acceleration of market dominance in the modern digital economy that makes it even more necessary for governments to respond, writing that “network effects and returns to scale of data appear to be even more entrenched and the market seems to have stabilised quickly compared to the much larger degree of churn in the early days of the World Wide Web”.

They also point to the risk of AI and machine learning technology leading to further market concentration, warning that “the companies most able to take advantage of [the next technological revolution] may well be the existing large companies because of the importance of data for the successful use of these tools”.

And while they suggest AI startups might offer a route to a competitive reset, via a substantial technology shift, there’s still currently no relief to be had from entrepreneurial efforts because of “the degree that entrants are acquired by the largest companies – with little or no scrutiny”.

Discussing other difficulties related to regulating big tech, the panel warns of the risk of regulators being “captured by the companies they are regulating”, as well as pointing out that they are generally at a disadvantage vs the high tech innovators they are seeking to rule.

In a concluding chapter considering the possible impacts of their policy recommendations, the panel argues that successful execution of their approach could help foster startup innovation across a range of sectors and services.

“Across digital markets, implementing the recommendations will enable more new companies to turn innovative ideas into great new services and profitable businesses,” they suggest. “Some will continue to be acquired by large platforms, where that is the best route to bring new technology to a large group of users. Others will grow and operate alongside the large platforms. Digital services will be more diverse, more dynamic, with more specialisation and choice available for consumers wanting it. This could drive a flourishing of investment in these UK businesses.”

Citing some “potential examples” of services that could evolve in this more supportively competitive environment they suggest social content aggregators might arise that “bring together the best material from people’s friends across different platforms and sites”; “privacy services could give consumers a single simple place to manage the information they share across different platforms”; and also envisage independent ad tech businesses and changed market dynamics that can “rebalance the share of advertising revenue back towards publishers”.

The main envisaged benefits for consumers boil down to greater service and feature choice; enhanced privacy and transparency; and genuine control over the services they use and how they want to use them.

While for startups and scale-ups the panel sees open standards and access to data — and indeed effective enforcement, by the new digital markets unit — creating “a wide range of opportunities to develop and serve new markets adjacent to or interconnected with existing digital platforms”.

The combined impact should be to strengthen and deepen the competitive digital ecosystem, they believe.

Another envisaged benefit for startups is “trust in the framework and recognition that promising, innovative digital businesses will be protected from foreclosure or exclusion” — which they argue “should catalyse investment in UK digital businesses, driving the sector’s growth”.

“The changes to competition law… mean that where a business can grow into a successful competitor, that route to further growth is protected and companies will not in the future see being subsumed into a dominant platform as the only realistic business model,” they add.