All posts in “age verification”

Snap is under NDA with UK Home Office discussing how to centralize age checks online

Snap is under NDA with the UK’s Home Office as part of a working group tasked with developing more robust age verification technology capable of reliably identifying children online.

The detail emerged during a parliamentary committee hearing as MPs on the Digital, Culture, Media and Sport (DCMS) committee questioned Stephen Collins, Snap’s senior director for public policy international, and Will Scougal, director of creative strategy EMEA.

A spokesman at the Home Office press office said he had not immediately heard of any discussions with the messaging company on the topic of age verification. But we’ll update this story with any additional context on the department’s plans if more information is forthcoming.

Under questioning by the committee, Snap conceded its current age verification systems are not able to prevent children under 13 from signing up to use its messaging platform.

The DCMS committee’s interest stems from an enquiry it is running into immersive and addictive technologies.

Snap admitted that the most popular means of signing up to its app (i.e. on mobile) is where its age verification system is weakest, with Collins saying it had no ability to drop a cookie to keep track of mobile users to try to prevent repeat attempts to get around its age gate.

But he emphasized Snap does not want underage users on its platform.

“That brings us no advantage, that brings us no commercial benefit at all,” he said. “We want to make it an enjoyable place for everybody using the platform.”

He also said Snap analyzes patterns of user behavior to try to identify underage users — investigating accounts and banning those which are “clearly” determined not to be old enough to use the service.

But he conceded there’s currently “no foolproof way” to prevent under 13s from signing up.

Discussing alternative approaches to verifying kids’ age online the Snap policy staffer agreed parental consent approaches are trivially easy for children to circumvent — such as by setting up spoof email accounts or taking a photo of a parent’s passport or credit card to use for verification.

Facebook is one company that relies on a ‘parental consent’ system to ‘verify’ the age of teen users — though, as we’ve previously reported, it’s trivially easy for kids to work around.

“I think the most sustainable solution will be some kind of central verification system,” Collins suggested, adding that such a system is “already being discussed” by government ministers.

“The home secretary has tasked the Home Office and related agencies to look into this — we’re part of that working group,” he continued.

“We actually met just yesterday. I can’t give you the details here because I’m under an NDA,” Collins added, suggesting Snap could send the committee details in writing.

“I think it’s a serious attempt to really come to a proper conclusion — a fitting conclusion to this kind of conundrum that’s been there, actually, for a long time.”

“There needs to be a robust age verification system that we can all get behind,” he added.

The UK government is expected to publish a White Paper setting out its policy ideas for regulating social media and safety before the end of the winter.

The detail of its policy plans remains under wraps, so it’s unclear whether the Home Office intends to include a centralized system of online age verification for robustly identifying kids on social media platforms as part of its safety-focused regulation. But much of the debate driving the planned legislation has fixed on content risks for kids online.

Such a step would also not be the first time UK ministers have pushed the envelope around online age verification.

A controversial system of age checks for viewing adult content is due to come into force shortly in the UK under the Digital Economy Act — albeit after a lengthy delay. (And notwithstanding all the hand-wringing about privacy and security risks; not to mention the fact that age checks will likely be trivially easy to dodge by those who know how to use a VPN etc, or via accessing adult content on social media.)

But a centralized database of children for age verification purposes — if that is indeed the lines along which the Home Office is thinking — sounds rather closer to Chinese government Internet controls.

In recent years the Chinese state has been pushing games companies to age verify users in order to enforce limits on play time for kids (also apparently in response to health concerns around video gaming addiction).

The UK has also pushed to create centralized databases of web browsers’ activity for law enforcement purposes, under the 2016 Investigatory Powers Act. (Parts of which it’s had to rethink following legal challenges, with other legal challenges ongoing.)

In recent years it has also emerged that UK spy agencies maintain bulk databases of citizens — known as ‘bulk personal datasets’ — regardless of whether a particular individual is suspected of a crime.

So building yet another database containing children’s ages isn’t perhaps as off-piste as you might imagine for the country.

Returning to the DCMS committee’s enquiry, other questions for Snap from MPs included several critical ones related to its ‘streaks’ feature — whereby users who have been messaging each other regularly are encouraged not to stop the back and forth.

The parliamentarians raised constituent and industry concerns about the risk of peer pressure being piled on kids to keep the virtual streaks going.

Snap’s reps told the committee the feature is intended to be a “celebration” of close friendship, rather than being intentionally designed to make the platform sticky and thereby encourage stress.

Though they conceded users have no way to opt out of streak emoji appearing.

They also noted they have previously reduced the size of the streak emoji to make it less prominent.

But they added they would take concerns back to product teams and re-examine the feature in light of the criticism.

You can watch the full committee hearing with Snap here.

Dating apps face questions over age checks after report exposes child abuse

The UK government has said it could legislate to require age verification checks on users of dating apps, following an investigation into underage use of dating apps published by the Sunday Times yesterday.

The newspaper found more than 30 cases of child rape have been investigated by police related to use of dating apps including Grindr and Tinder since 2015. It reports that one 13-year-old boy with a profile on the Grindr app was raped or abused by at least 21 men. 

The Sunday Times also found 60 further instances of child sex offences related to the use of online dating services — including grooming, kidnapping and violent assault, according to the BBC, which covered the report.

The youngest victim is reported to have been just eight years old. The newspaper obtained the data via freedom of information requests to UK police forces.

Responding to the Sunday Times’ investigation, a Tinder spokesperson told the BBC it uses automated and manual tools, and spends “millions of dollars annually”, to prevent and remove underage users and other inappropriate behaviour, saying it does not want minors on the platform.

Grindr also reacted to the report, providing the Times with a statement saying: “Any account of sexual abuse or other illegal behaviour is troubling to us as well as a clear violation of our terms of service. Our team is constantly working to improve our digital and human screening tools to prevent and remove improper underage use of our app.”

We’ve also reached out to the companies with additional questions.

The UK’s secretary of state for digital, culture, media and sport (DCMS), Jeremy Wright, dubbed the newspaper’s investigation “truly shocking”, describing it as further evidence that “online tech firms must do more to protect children”.

He also suggested the government could expand forthcoming age verification checks for accessing pornography to include dating apps — saying he would write to the dating app companies to ask “what measures they have in place to keep children safe from harm, including verifying their age”.

“If I’m not satisfied with their response, I reserve the right to take further action,” he added.

Age verification checks for viewing online porn are due to come into force in the UK in April, as part of the Digital Economy Act.

Those age checks, which are clearly not without controversy given the huge privacy considerations of creating a database of adult identities linked to porn viewing habits, have also been driven by concern about children’s exposure to graphic content online.

Last year the UK government committed to legislating on social media safety too, although it has yet to set out the detail of its policy plans. But a white paper is due imminently.

A parliamentary committee which reported last week urged the government to put a legal ‘duty of care’ on platforms to protect minors.

It also called for more robust systems for age verification. So it remains at least a possibility that some types of social media content could be age-gated in the country in future.

Last month the BBC reported on the death of a 14-year-old schoolgirl who killed herself in 2017 after being exposed to self-harm imagery on Instagram.

Following the report, Instagram’s boss met with Wright and the UK’s health secretary, Matt Hancock, to discuss concerns about the impact of suicide-related content circulating on the platform.

Last week, following that meeting, Instagram announced it would ban graphic images of self-harm.

Earlier the same week the company responded to the public outcry over the story by saying it would no longer allow suicide-related content to be promoted via its recommendation algorithms or surfaced via hashtags.

Also last week, the government’s chief medical advisors called for a code of conduct for social media platforms to protect vulnerable users.

The medical experts also called for greater transparency from platform giants to support public interest-based research into the potential mental health impacts of their platforms.