
Facebook agrees to clearer T&Cs in Europe

Facebook has agreed to amend its terms and conditions under pressure from EU lawmakers.

The new terms will make it plain that free access to its service is contingent on users’ data being used to profile them for ad targeting, the European Commission said today.

“The new terms detail what services, Facebook sells to third parties that are based on the use of their user’s data, how consumers can close their accounts and under what reasons accounts can be disabled,” it writes.

The exact wording of the new terms has not yet been published, and the company has until the end of June 2019 to comply — so it remains to be seen how clear ‘clear’ will be.

Nonetheless the Commission is casting the concession as a win for consumers, trumpeting the forthcoming changes to Facebook’s T&Cs in a press release in which Vera Jourová, commissioner for justice, consumers and gender equality, writes:

Today Facebook finally shows commitment to more transparency and straight forward language in its terms of use. A company that wants to restore consumers trust after the Facebook/ Cambridge Analytica scandal should not hide behind complicated, legalistic jargon on how it is making billions on people’s data. Now, users will clearly understand that their data is used by the social network to sell targeted ads. By joining forces, the consumer authorities and the European Commission, stand up for the rights of EU consumers.

The change to Facebook’s T&Cs follows pressure applied to it in the wake of the Cambridge Analytica data misuse scandal, according to the Commission.

Along with national consumer protection authorities, it says it asked Facebook to clearly inform consumers how the service gets financed and what revenues are derived from the use of consumer data, as part of its response to the data-for-political-ads scandal.

“Facebook will introduce new text in its Terms and Services explaining that it does not charge users for its services in return for users’ agreement to share their data and to be exposed to commercial advertisements,” it writes. “Facebook’s terms will now clearly explain that their business model relies on selling targeted advertising services to traders by using the data from the profiles of its users.”

We reached out to Facebook with questions — including asking to see the wording of the new terms — but at the time of writing the company had declined to provide any response.

It’s also not clear whether the amended T&Cs will apply universally or only to Facebook users in Europe.

European commissioners have been squeezing social media platforms including Facebook over consumer rights issues since 2017 — when Facebook, Twitter and Google were warned the Commission was losing patience with their failure to comply with various consumer protection standards.

Aside from unclear language in their T&Cs, specific issues of concern for the Commission include terms that deprive consumers of their right to take a company to court in their own country or require consumers to waive mandatory rights (such as their right to withdraw from an online purchase).

Facebook has now agreed to several other changes to its T&Cs under pressure from the Commission — in addition, that is, to making it plainer that ‘if it’s free, you’re the product’.

Namely, the Commission says Facebook has agreed to:

1) amend its policy on limitation of liability — saying Facebook’s new T&Cs “acknowledges its responsibility in case of negligence, for instance in case data has been mishandled by third parties”;

2) amend its power to unilaterally change terms and conditions, by “limiting it to cases where the changes are reasonable also taking into account the interest of the consumer”;

3) amend the rules concerning the temporary retention of content which has been deleted by consumers — with content only able to be retained in “specific cases” (such as to comply with an enforcement request by an authority), and only for a maximum of 90 days when retained for “technical reasons”; and

4) amend the language clarifying users’ right of appeal when their content has been removed.

The Commission says it expects Facebook to make all the changes by the end of June at the latest — warning that the implementation will be closely monitored.

“If Facebook does not fulfil its commitments, national consumer authorities could decide to resort to enforcement measures, including sanctions,” it adds.

UK sets out safety-focused plan to regulate Internet firms

The UK government has laid out proposals to regulate online and social media platforms, setting out the substance of its long-awaited White Paper on online harms today — and kicking off a public consultation.

The Online Harms White Paper is a joint proposal from the Department for Digital, Culture, Media and Sport (DCMS) and Home Office.

It follows the government announcement of a policy intent last May, and a string of domestic calls for greater regulation of the Internet as politicians have responded to rising concern about the mental health impacts of online content.

The government is now proposing to put a mandatory duty of care on platforms to take reasonable steps to protect their users from a range of harms — including but not limited to illegal material, such as terrorist content and child sexual exploitation and abuse, which will be covered by further, more stringent requirements under the plan.

The approach is also intended to address a range of content and activity that’s deemed harmful.

Examples provided by the government of the sorts of broader harms it’s targeting include inciting violence and violent content; encouraging suicide; disinformation; cyberbullying; and inappropriate material being accessed by children.

Content promoting suicide has been thrown into the public spotlight in the UK in recent months, following media reports about a schoolgirl whose family found out she had been viewing pro-suicide content on Instagram after she killed herself.

The Facebook-owned platform subsequently agreed to change its policies towards suicide content, saying it would start censoring graphic images of self-harm, after pressure from ministers.

Commenting on the publication of the White Paper today, digital secretary Jeremy Wright said: “The era of self-regulation for online companies is over. Voluntary actions from industry to tackle online harms have not been applied consistently or gone far enough. Tech can be an incredible force for good and we want the sector to be part of the solution in protecting their users. However those that fail to do this will face tough action.

“We want the UK to be the safest place in the world to go online, and the best place to start and grow a digital business, and our proposals for new laws will help make sure everyone in our country can enjoy the Internet safely.”

In another supporting statement Home Secretary Sajid Javid added: “The tech giants and social media companies have a moral duty to protect the young people they profit from. Despite our repeated calls to action, harmful and illegal content – including child abuse and terrorism – is still too readily available online.

“That is why we are forcing these firms to clean up their act once and for all. I made it my mission to protect our young people – and we are now delivering on that promise.”

Children’s charity, the NSPCC, was among the sector bodies welcoming the proposal.

“This is a hugely significant commitment by the Government that once enacted, can make the UK a world pioneer in protecting children online,” wrote CEO Peter Wanless in a statement.

“For too long social networks have failed to prioritise children’s safety and left them exposed to grooming, abuse, and harmful content. So it’s high time they were forced to act through this legally binding duty to protect children, backed up with hefty punishments if they fail to do so.”

The Internet Watch Foundation, which works to stop the spread of child exploitation imagery online, meanwhile warned against unintended consequences from badly planned legislation — and urged the government to take a “balanced approach”.

The proposed laws would apply to any company that allows users to share or discover user generated content or interact with each other online — meaning companies both big and small.

Nor is it just social media platforms: file hosting sites, public discussion forums, messaging services and search engines are among those falling under the planned law’s remit.

The government says a new independent regulator will be introduced to ensure Internet companies meet their responsibilities, with ministers consulting on whether this should be a new or existing body.

Telecoms regulator Ofcom has been rumored to be one possible contender, though the UK’s data watchdog, the ICO, has previously suggested it should be involved in any Internet oversight, given its responsibility for data protection and privacy. (According to the FT, a hybrid entity combining the two is another possibility — although it reports that the government remains genuinely undecided on who the regulator will be.)

The future Internet watchdog will be funded by industry in the medium term, with the government saying it’s exploring options such as an industry levy to put it on a sustainable footing.

On the enforcement front, the watchdog will be armed with a range of tools — with the government consulting on powers for it to issue substantial fines; block access to sites; and potentially to impose liability on individual members of senior management.

So there’s at least the prospect of a high profile social media CEO being threatened with UK jail time in future if they don’t do enough to remove harmful content.

On the financial penalties front, speaking during an interview on Sky News, Wright suggested the government is entertaining GDPR-level fines of as much as 4% of a company’s annual global turnover.

Other elements of the proposed framework include giving the regulator the power to force tech companies to publish annual transparency reports on the amount of harmful content on their platforms and what they are doing to address it; to compel companies to respond to users’ complaints and act to address them quickly; and to comply with codes of practice issued by the regulator, such as requirements to minimise the spread of misleading and harmful disinformation with dedicated fact checkers, particularly during election periods.

A long-running enquiry by a DCMS parliamentary committee into online disinformation last year, which was continuously frustrated in its attempts to get Facebook founder Mark Zuckerberg to testify before it, concluded with a laundry list of recommendations for tightening regulations around digital campaigning.

The committee also recommended clear legal liabilities for tech companies to act against “harmful or illegal content”, and suggested a levy on tech firms to support enhanced regulation.

Responding to the government’s White Paper in a statement today, DCMS committee chair Damian Collins broadly welcomed the government’s proposals — though he also pressed for the future regulator to have the power to conduct its own investigations, rather than relying on self-reporting by tech firms.

“We need a clear definition of how quickly social media companies should be required to take down harmful content, and this should include not only when it is referred to them by users, but also when it is easily within their power to discover this content for themselves,” Collins wrote.

“The regulator should also give guidance on the responsibilities of social media companies to ensure that their algorithms are not consistently directing users to harmful content.”

Another element of the government’s proposal is a “Safety by Design” framework that’s intended to help companies incorporate online safety features in new apps and platforms from the start.

The government also wants the regulator to head up a media literacy strategy that’s intended to equip people with the knowledge to recognise and deal with a range of deceptive and malicious behaviours online, such as catfishing, grooming and extremism.

It writes that the UK is committed to a free, open and secure Internet — and makes a point of noting that the watchdog will have a legal duty to pay “due regard” to innovation, and also to protect users’ rights online, being particularly mindful not to infringe privacy and freedom of expression.

It also suggests technology will be an integral part of any solution, saying the proposals are designed to promote a culture of continuous improvement among companies — and highlighting technologies such as Google’s “Family Link” and Apple’s Screen Time app as examples of the sorts of developments it wants the policy framework to encourage.

Such caveats are unlikely to do much to reassure those concerned that the approach will chill online speech, and/or place an impossible burden on smaller firms with fewer resources to monitor what their users are doing.

“The government’s proposals would create state regulation of the speech of millions of British citizens,” warns digital and civil rights group the Open Rights Group in a statement by its executive director, Jim Killock. “We have to expect that the duty of care will end up widely drawn with serious implications for legal content that is deemed potentially risky, whether it really is or not.

“The government refused to create a state regulator for the press because it didn’t want to be seen to be controlling free expression. We are skeptical that state regulation is the right approach.”

UK startup policy advocacy group Coadec was also quick to voice concerns — warning that the government’s plans will “entrench the tech giants, not punish them”.

“The vast scope of the proposals means they cover not just social media but virtually the entire internet – from file sharing to newspaper comment sections. Those most impacted will not be the tech giants the Government claims they are targeting, but everyone else. It will benefit the largest platforms with the resources and legal might to comply – and restrict the ability of British startups to compete fairly,” said Coadec executive director Dom Hallas in a statement. 

“There is a reason that Mark Zuckerberg has called for more regulation. It is in Facebook’s business interest.”

UK tech industry trade association techUK also put out a response statement that warns about the need to avoid disproportionate impacts.

“Some of the key pillars of the Government’s approach remain too vague,” said Vinous Ali, head of policy at techUK. “It is vital that the new framework is effective, proportionate and predictable. Clear legal definitions that allow companies in scope to understand the law and therefore act quickly and with confidence will be key to the success of the new system.

“Not all of the legitimate concerns about online harms can be addressed through regulation. The new framework must be complemented by renewed efforts to ensure children, young people and adults alike have the skills and awareness to navigate the digital world safely and securely.”

The government has launched a 12-week consultation on the proposals, after which it says it will set out the action it will take in developing its final proposals for legislation.

Last month a House of Lords committee recommended an overarching super regulator be established to plug any gaps and/or handle overlaps in rules on Internet platforms, arguing that “a new framework for regulatory action” is needed to handle the digital world.

Though the government appears confident at this stage that an Internet regulator will be able to navigate any legislative patchwork and keep tech firms in line on its own.

The House of Lords committee was another that came down in support of a statutory duty of care for online services hosting user-generated content, suggesting it should have a special focus on children and “the vulnerable in society”. And there’s no doubt the concept of regulating Internet platforms has broad consensus among UK politicians — on both sides of the aisle.

But how to do that effectively and proportionately is another matter.

We reached out to Facebook and Google for a response to the White Paper.

Commenting on the Online Harms White Paper in a statement, Rebecca Stimson, Facebook’s head of UK public policy, said: “New rules for the internet should protect society from harm while also supporting innovation, the digital economy and freedom of speech. These are complex issues to get right and we look forward to working with the Government and Parliament to ensure new regulations are effective.”

Stimson also reiterated how Facebook has expanded the number of staff it has working on trust and safety issues to 30,000 in recent years, as well as claiming it’s invested heavily in technology to help prevent abuse — while conceding that “we know there is much more to do”.

Last month the company revealed shortcomings with its safety measures around livestreaming, after it emerged that a massacre in Christchurch, New Zealand, which was livestreamed to Facebook’s platform, had not been flagged for accelerated review by moderators because it was not tagged as suicide-related content.

Facebook said it would be “learning” from the incident and “re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review”.

In its response to the UK government White Paper today, Stimson added: “The internet has transformed how billions of people live, work and connect with each other, but new forms of communication also bring huge challenges. We have responsibilities to keep people safe on our services and we share the government’s commitment to tackling harmful content online. As Mark Zuckerberg said last month, new regulations are needed so that we have a standardised approach across platforms and private companies aren’t making so many important decisions alone.”

WhatsApp adds a tip-line for checking fakes in India ahead of elections

Facebook-owned messaging platform WhatsApp has launched a fact-checking tipline for users in India ahead of elections in the country.

The fact-checking service consists of a phone number (+91-9643-000-888) to which users can send messages they suspect are false or otherwise want verified.

The messaging giant is working with a local media skilling startup, Proto, to run the fact-checking service — in conjunction with digital strategy consultancy Dig Deeper Media and San Francisco-based Meedan, which builds tools for journalists, to provide the platform for verifying submitted content, per TNW.

We’ve reached out to Proto and WhatsApp with questions.

The Economic Times of India reports that the startup intends to use the submitted messages to build a database to help study misinformation during elections for a research project commissioned and supported by WhatsApp.

“The goal of this project is to study the misinformation phenomenon at scale. As more data flows in, we will be able to identify the most susceptible or affected issues, locations, languages, regions, and more,” said Proto’s co-founders Ritvvij Parrikh and Nasr ul Hadi in a statement quoted by Reuters.

WhatsApp also told the news agency: “The challenge of viral misinformation requires more collaborative efforts and cannot be solved by any one organisation alone.”

According to local press reports, suspicious messages can be shared to the WhatsApp tipline in four regional languages, with the fact-checking service covering videos and pictures as well as text. Submitters must also confirm they want a fact-check and, on doing so, will get a subsequent response indicating whether the shared message is classified as true, false, misleading, disputed or out of scope.
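Taken together, the reported flow amounts to a simple opt-in review pipeline. Here is a minimal sketch of that flow in Python; the type names, language list and triage logic are hypothetical illustrations of the reports above, not a published WhatsApp or Proto API:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Verdict(Enum):
    """The five response labels the tipline reportedly returns."""
    TRUE = "true"
    FALSE = "false"
    MISLEADING = "misleading"
    DISPUTED = "disputed"
    OUT_OF_SCOPE = "out of scope"

@dataclass
class Submission:
    sender: str
    content: str               # forwarded text, or a reference to an image/video
    language: str
    wants_check: bool = False  # the submitter must explicitly confirm they want a check

# Hypothetical language set; reports say four regional languages are covered.
COVERED_LANGUAGES = {"english", "hindi", "telugu", "bengali", "malayalam"}

def triage(sub: Submission) -> Optional[Verdict]:
    """Route a submission: only confirmed requests in covered languages get reviewed."""
    if not sub.wants_check:
        return None  # logged for the research dataset, but no verdict is sent back
    if sub.language.lower() not in COVERED_LANGUAGES:
        return Verdict.OUT_OF_SCOPE
    return human_review(sub.content)

def human_review(content: str) -> Verdict:
    # Stand-in for Proto's human fact-checkers; nothing in the reports
    # suggests the classification itself is automated.
    raise NotImplementedError
```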

Other related information may also be provided, the Economic Times reports.

WhatsApp has faced major issues with fakes being spread on its end-to-end encrypted platform — a robust security technology that makes bogus and/or maliciously misleading content harder to spot and harder to manage, since the platform itself cannot access message content.

The spread of fakes has become a huge problem for social media platforms generally. One that’s arguably most acute in markets where literacy (and digital literacy) rates can vary substantially. And in India WhatsApp fakes have led to some truly tragic outcomes — with multiple reports in recent years detailing how fast-spreading digital rumors sparked or fuelled mob violence that’s led to death and injury.

India’s general election, which is due to take place in several phases starting later this month and running until the middle of next month, presents a more clearly defined threat — the risk of a democratic process and outcome being manipulated by weaponized political disinformation.

WhatsApp’s platform is squarely in the frame given the app’s popularity in India.

It has also been accused of fuelling damaging political fakes during elections in Brazil last year, with Reuters reporting that the platform was flooded with falsehoods and conspiracy theories.

An outsized presence on social media appears to have aided the election of rightwinger Jair Bolsonaro, while the leftwing candidate he beat in a presidential runoff later claimed businessmen backing Bolsonaro had paid to flood WhatsApp with misleading propaganda.

In India, local press reports that politicians across the spectrum are being accused of seeking to manipulate the forthcoming elections by seeding fakes on the popular encrypted messaging platform.

It’s clear that WhatsApp offers a conduit for spreading unregulated and unaccountable propaganda at scale with even limited resources. So whether a tipline can offer a robust check against weaponized political disinformation very much remains to be seen.

There certainly look to be limitations to this approach. Though it could also be developed and enhanced — such as if it gets more fully baked into the platform.

For now it looks like WhatsApp is testing the water and trying to gather more data to shape a more robust response.

The most obvious issue with the tipline is that it requires a message recipient to request a check — an active step that means the person must know about the fact-checking service, have its number in their contacts, and trust the judgements of those running it.

Many WhatsApp users will fall outside those opt-in bounds.

It also doesn’t take much effort to imagine purveyors of malicious rumors spreading fresh fakes claiming the fact-checks/checkers are biased or manipulated to try to turn WhatsApp users against it.

This is likely why local grassroots political organizations are also being encouraged to submit any rumors they see circulating across the different regions during the election period. And why WhatsApp is talking about the need for collective action to combat the disinformation problem.

It will certainly need engagement across the political spectrum to counter any bias charges and plug gaps resulting from limited participation by WhatsApp users themselves.

The really key question, though, is how information on debunked fakes can be credibly fed back to Indian voters in a way that broadly reaches the electorate.

There’s no suggestion, here and now, that’s going to happen via WhatsApp itself — only those who request a check are set to get a response.

That could change in future. Equally, though, the company may be wary of being seen to accept a role in the centralized distribution of (even fake) political propaganda. That way more accusations of bias likely lie.

In recent years Facebook has taken out adverts in traditional Indian media to warn about fakes. It has also experimented with other tactics to try to combat damaging WhatsApp rumors — such as using actors to role-play fakes in public to warn against false messages.

So the company looks to be hoping to develop a multi-stakeholder, multi-format information network off of its own platform to help get the message out about fakes spreading on WhatsApp.

Albeit, that’s clearly going to take time and effort. It’s also still not clear whether it will be effective against an app that’s always on hand and capable of feeding in fresh fakes.

The tipline also, inevitably, looks slow and painstaking beside the wildfire spread of digital fakes. And it’s not clear how much of a check on spread and amplification it can offer in this form — certainly initially, given that the fact-checking process itself necessarily takes time.

A startup, even one that’s being actively supported by WhatsApp, is also unlikely to have the resources to speedily fact-check the volume of fakes that will be distributed across such a large market, fuelled by election interests. Yet timely intervention is critical to prevent fakes going viral.

So, again, this initiative looks unlikely to stop the majority of bogus WhatsApp messages from being swallowed and shared. But the data-set derived from the research project which underpins the tipline may help the company fashion a more responsive and proactive approach to contextualizing and debunking malicious rumors in future.

Proto says it plans to submit its learnings to the International Center for Journalists to help other organizations learn from its efforts.

The Economic Times also quotes Fergus Bell, founder and CEO of Dig Deeper Media, suggesting the research will help create “global benchmarks” for those wishing to tackle misinformation in their own markets.

In the meantime, though, the votes go on.

Twitter took over a user’s account and joked about reading their DMs

At a time when tech giants have come under fire for failing to protect the private data of their users, Twitter took over a user’s account for fun and then tweeted jokes about reading the account’s private messages. The account, to be clear, was willingly volunteered for this prank by social media consultant Matt Navarra, who’s well-known in some Twitter circles for being among the first to spot new features on social media platforms like Twitter and Facebook.

In fact, TechCrunch itself has credited Navarra on a number of occasions for his tweets about features like Twitter’s new camera, Facebook’s “time spent” dashboard, Facebook’s “Explore” feed, Instagram’s “Do Not Disturb” setting, and more. Several other tech news sites have done the same, which means Navarra’s private messages (direct messages, aka DMs) probably included a lot of conversations between himself and various reporters.

He’s also regularly tipped off about upcoming features or those in testing on sites like Twitter. One could assume he has regular conversations with his network of tipsters through DMs, as well.

Initially, we believed the whole “account takeover” was just a joke – perhaps a case of Navarra poking fun at himself and his own obsession with social media. After all, “takeovers” are a common social media stunt these days, particularly on Instagram Stories. But they usually involve an individual tweeting for a brand – not a brand tweeting for an individual.

Navarra had the idea on Monday, and tweeted out a call for someone to run his account for a day.

He tells TechCrunch he had a tragic incident in his family, and offered the chance for someone else to tweet as him for the day so he could take a day away from Twitter. He also thought it could be fun. (Twitter tells us he remained logged in while the company was tweeting from his account, however.)

Navarra says he was surprised that Twitter volunteered for the job, and he agreed to give them control. Most of his followers – fellow social media enthusiasts – were excited and amused about the plan, which they touted as “epic,” “gold,” and a “great idea!”

Navarra on Tuesday tweeted out photos of himself handing over his account key to Twitter in a DM thread.

On Tuesday, Twitter began tweeting as Navarra. This mostly involved some gentle roasting – like tweets about muting people asking for an “edit” button, and other nonsense. Twitter then said it was going to tweet out some of Navarra’s drafts, and posted things like “who has a Google Wave code?” and something about BBM, among other things.

But other jokes were less funny. Twitter said it was reading Navarra’s DMs, for example.

(At the time of posting, these embedded tweets were posted from “Tweet Navarra,” as Twitter temporarily changed the account name while it was tweeting as Matt. But it’s since been changed back, so these embeds show the current account name, “Matt Navarra.”)

In one incident, the company posted a screenshot of his Direct Message inbox to poke fun at the fact that he had DM’d with an account called “Satan.”

Navarra played along, joking from his new account for the day, @realmattnavarra, telling Twitter to “ignore that DM from Zuck.”

While I personally had not DM’d Navarra anything compromising, I can’t speak for everyone who had ever messaged him. Even if Navarra had signed up to have his account taken over, those he messaged with had not volunteered to have their privacy violated. And though my conversations with him were innocuous, it was disconcerting to know that my message history with a private individual was accessible by someone at Twitter.

Reached for comment, Navarra claims his “DMs were all deleted” before Twitter entered his account. Unfortunately, there’s no way to verify this as DM deletion on Twitter is one-sided. That means that even if he deleted the DMs, the person who sent them could still view them in their own inbox.

It also appears from the screenshot Twitter posted (above) that the entire inbox hadn’t been wiped.

At the end of the day, Navarra may have been misguided with this stunt – perhaps he should have first demonstrated that he had cleaned out his inbox by posting a tweet of it being empty – but he is not a public social media company. It’s completely nuts that Twitter thought this was a funny idea.

Whether or not Twitter actually saw private conversations, it’s bad optics for the company to take over a user’s account for a lark and then joke about violating users’ privacy at a time when tech giants like Facebook and Google are under threat of increased regulation for not taking care of users’ private data.

Twitter did not provide a comment, but confirmed it logged into Navarra’s account for a few hours for the takeover in the hopes of starting fun conversations with his followers.

Snap is under NDA with UK Home Office discussing how to centralize age checks online

Snap is under NDA with the UK’s Home Office as part of a working group tasked with coming up with more robust age verification technology that’s able to reliably identify children online.

The detail emerged during a parliamentary committee hearing, as MPs on the Digital, Culture, Media and Sport (DCMS) committee questioned Stephen Collins, Snap’s senior director for public policy international, and Will Scougal, its director of creative strategy EMEA.

A spokesman in the Home Office press office hadn’t immediately heard of any discussions with the messaging company on the topic of age verification. But we’ll update this story with any additional context on the department’s plans if more info is forthcoming.

Under questioning by the committee, Snap conceded its current age verification systems are not able to prevent under-13s from signing up to use its messaging platform.

The DCMS committee’s interest here stems from the enquiry it’s running into immersive and addictive technologies.

Snap admitted that the most popular means of signing up to its app (i.e. on mobile) is where its age verification system is weakest, with Collins saying it had no ability to drop a cookie to keep track of mobile users to try to prevent repeat attempts to get around its age gate.
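For context, the web-side mechanism Collins was alluding to is simple: a signup age gate can set a cookie after a failed attempt, so the same browser cannot just retry with a different birth date. Below is a minimal sketch of that pattern (Flask, the route and the cookie name are purely illustrative, not Snap's implementation), plus a note on why it breaks down in a native mobile app:

```python
from flask import Flask, request, make_response

app = Flask(__name__)
MIN_AGE = 13

@app.route("/signup", methods=["POST"])
def signup():
    # A prior failed attempt left a marker cookie, so block quick retries
    # with a different birth date.
    if request.cookies.get("age_gate_failed") == "1":
        return "Signup unavailable", 403

    age = int(request.form.get("age", "0"))
    if age < MIN_AGE:
        resp = make_response("You must be 13 or older to sign up", 403)
        # Remember the failure in this browser for a year.
        resp.set_cookie("age_gate_failed", "1", max_age=60 * 60 * 24 * 365)
        return resp

    return "Account created", 201

# A native mobile app has no browser cookie jar to write this marker to,
# so each fresh signup attempt starts with a clean slate; that is the gap
# Collins described.
```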

But he emphasized Snap does not want underage users on its platform.

“That brings us no advantage, that brings us no commercial benefit at all,” he said. “We want to make it an enjoyable place for everybody using the platform.”

He also said Snap analyzes patterns of user behavior to try to identify underage users — investigating accounts and banning those which are “clearly” determined not to be old enough to use the service.

But he conceded there’s currently “no foolproof way” to prevent under 13s from signing up.

Discussing alternative approaches to verifying kids’ age online the Snap policy staffer agreed parental consent approaches are trivially easy for children to circumvent — such as by setting up spoof email accounts or taking a photo of a parent’s passport or credit card to use for verification.

Facebook is one company that relies on a ‘parental consent’ system to ‘verify’ the age of teen users — though, as we’ve previously reported, it’s trivially easy for kids to work around.

“I think the most sustainable solution will be some kind of central verification system,” Collins suggested, adding that such a system is “already being discussed” by government ministers.

“The home secretary has tasked the Home Office and related agencies to look into this — we’re part of that working group,” he continued.

“We actually met just yesterday. I can’t give you the details here because I’m under an NDA,” Collins added, suggesting Snap could send the committee details in writing.

“I think it’s a serious attempt to really come to a proper conclusion — a fitting conclusion to this kind of conundrum that’s been there, actually, for a long time.”

“There needs to be a robust age verification system that we can all get behind,” he added.

The UK government is expected to publish a White Paper setting out its policy ideas for regulating social media and safety before the end of the winter.

The details of its policy plans remain under wraps, so it’s unclear whether the Home Office intends to include a centralized system of online age verification for robustly identifying kids on social media platforms as part of its safety-focused regulation. But much of the debate driving the planned legislation has fixed on content risks for kids online.

Such a step would also not be the first time UK ministers have pushed the envelope around online age verification.

A controversial system of age checks for viewing adult content is due to come into force shortly in the UK under the Digital Economy Act — albeit after a lengthy delay. (And that’s ignoring all the hand-wringing about privacy and security risks — not to mention the fact that the checks will likely be trivially easy to dodge for anyone who knows how to use a VPN, or by accessing adult content on social media.)

But a centralized database of children for age verification purposes — if that is indeed the lines along which the Home Office is thinking — sounds rather closer to Chinese government Internet controls.

In recent years, for instance, the Chinese state has been pushing games companies to age-verify users in order to enforce limits on play time for kids (apparently also in response to health concerns around video gaming addiction).

The UK has also pushed to create centralized databases of web browsers’ activity for law enforcement purposes, under the 2016 Investigatory Powers Act. (Parts of which it’s had to rethink following legal challenges, with other legal challenges ongoing.)

In recent years it has also emerged that UK spy agencies maintain bulk databases of citizens — known as ‘bulk personal datasets’ — regardless of whether a particular individual is suspected of a crime.

So building yet another database to contain children’s ages isn’t perhaps as off piste as you might imagine for the country.

Returning to the DCMS committee’s enquiry, other questions for Snap from MPs included several critical ones related to its ‘streaks’ feature — whereby users who have been messaging each other regularly are encouraged not to stop the back and forth.

The parliamentarians raised constituent and industry concerns about the risk of peer pressure being piled on kids to keep the virtual streaks going.

Snap’s reps told the committee the feature is intended to be a “celebration” of close friendship, rather than being intentionally designed to make the platform sticky and so encourage stress.

Though they conceded users have no way to opt out of streak emoji appearing.

They also noted they have previously reduced the size of the streak emoji to make it less prominent.

But they added they would take concerns back to product teams and re-examine the feature in light of the criticism.

You can watch the full committee hearing with Snap here.