All posts in “Matt Hancock”

Dating apps face questions over age checks after report exposes child abuse

The UK government has said it could legislate to require age verification checks on users of dating apps, following a Sunday Times investigation, published yesterday, into underage use of such services.

The newspaper found that more than 30 cases of child rape related to the use of dating apps, including Grindr and Tinder, have been investigated by police since 2015. It reports that one 13-year-old boy with a profile on the Grindr app was raped or abused by at least 21 men.

The Sunday Times also found 60 further instances of child sex offences related to the use of online dating services — including grooming, kidnapping and violent assault, according to the BBC, which covered the report.

The youngest victim is reported to have been just eight years old. The newspaper obtained the data via freedom of information requests to UK police forces.

Responding to the Sunday Times’ investigation, a Tinder spokesperson told the BBC it uses automated and manual tools, and spends “millions of dollars annually”, to prevent and remove underage users and to police other inappropriate behaviour, saying it does not want minors on the platform.

Grindr also reacted to the report, providing the Times with a statement saying: “Any account of sexual abuse or other illegal behaviour is troubling to us as well as a clear violation of our terms of service. Our team is constantly working to improve our digital and human screening tools to prevent and remove improper underage use of our app.”

We’ve also reached out to the companies with additional questions.

The UK’s secretary of state for digital, culture, media and sport (DCMS), Jeremy Wright, dubbed the newspaper’s investigation “truly shocking”, describing it as further evidence that “online tech firms must do more to protect children”.

He also suggested the government could expand forthcoming age verification checks for accessing pornography to include dating apps — saying he would write to the dating app companies to ask “what measures they have in place to keep children safe from harm, including verifying their age”.

“If I’m not satisfied with their response, I reserve the right to take further action,” he added.

Age verification checks for viewing online porn are due to come into force in the UK in April, as part of the Digital Economy Act.

Those age checks, which are clearly not without controversy given the huge privacy considerations of creating a database of adult identities linked to porn viewing habits, have also been driven by concern about children’s exposure to graphic content online.

Last year the UK government committed to legislating on social media safety too, although it has yet to set out the detail of its policy plans. But a white paper is due imminently.

A parliamentary committee which reported last week urged the government to put a legal ‘duty of care’ on platforms to protect minors.

It also called for more robust systems for age verification. So it remains at least a possibility that some types of social media content could be age-gated in the country in future.

Last month the BBC reported on the death of a 14-year-old schoolgirl who killed herself in 2017 after being exposed to self-harm imagery on Instagram.

Following the report, Instagram’s boss met with Wright and the UK’s health secretary, Matt Hancock, to discuss concerns about the impact of suicide-related content circulating on the platform.

After the meeting, Instagram announced last week that it would ban graphic images of self-harm.

Earlier the same week the company responded to the public outcry over the story by saying it would no longer allow suicide-related content to be promoted via its recommendation algorithms or surfaced via hashtags.

Also last week, the government’s chief medical advisors called for a code of conduct for social media platforms to protect vulnerable users.

The medical experts also called for greater transparency from platform giants to support public interest-based research into the potential mental health impacts of their platforms.

Instagram’s Adam Mosseri to meet UK health secretary over suicide content concerns

The still fresh-in-post boss of Instagram, Adam Mosseri, has been asked to meet the UK’s health secretary, Matt Hancock, to discuss the social media platform’s handling of content that promotes suicide and self-harm, the BBC reports.

Mosseri’s summons follows an outcry in the UK over disturbing content being recommended to vulnerable users of Instagram, after the suicide of 14-year-old schoolgirl Molly Russell in 2017.

After her death, Molly’s family discovered she had been following a number of Instagram accounts that encouraged self-harm. Speaking to the BBC last month Molly’s father said he did not doubt the platform had played a role in her decision to kill herself.

Writing in the Telegraph newspaper today, Mosseri makes direct reference to Molly’s tragedy, saying he has been “deeply moved” by her story and those of other families affected by self-harm and suicide, before going on to admit that Instagram is “not yet where we need to be on the issues”.

“We rely heavily on our community to report this content, and remove it as soon as it’s found,” he writes, conceding that the platform has offloaded the lion’s share of responsibility for content policing onto users thus far. “The bottom line is we do not yet find enough of these images before they’re seen by other people,” he admits.

Mosseri then uses the article to announce a couple of policy changes in response to the public outcry over suicide content.

Starting this week, he says, Instagram will add “sensitivity screens” to all content it reviews which “contains cutting”. “These images will not be immediately visible, which will make it more difficult for people to see them,” he suggests.

Though that clearly won’t stop fresh uploads from being distributed unscreened. (Nor prevent young and vulnerable users clicking to view disturbing content regardless.)

Mosseri justifies Instagram’s decision not to blanket-delete all content related to self-harm and/or suicide by saying its policy is to “allow people to share that they are struggling even if that content no longer shows up in search, hashtags or account recommendations”.

“We’ve taken a hard look at our work and though we have been focused on the individual who is vulnerable to self-harm, we need to do more to consider the effect of self-harm images on those who may be inclined to follow suit,” he continues. “This is a difficult but important balance to get right. These issues will take time, but it’s critical we take big steps forward now. To that end we have started to make changes.”

Another policy change he reveals is that Instagram will stop its algorithms actively recommending additional self-harm content to vulnerable users. “[F]or images that don’t promote self-harm, we let them stay on the platform, but moving forward we won’t recommend them in search, hashtags or the Explore tab,” he writes.

Unchecked recommendations have opened Instagram up to accusations that it essentially encourages depressed users to self-harm (or even suicide) by pushing more disturbing content into their feeds once they start to show an interest.

So putting limits on how algorithms distribute and amplify sensitive content is an obvious and overdue step — but one that’s taken significant public and political attention for the Facebook-owned company to make.

Last year the UK government announced plans to legislate on social media safety, though it has yet to publish details of its plans (a white paper setting out platforms’ responsibilities is expected in the next few months). But just last week a UK parliamentary committee also urged the government to place a legal ‘duty of care’ on platforms to protect minors.

In a statement given to the BBC, the Department for Digital, Culture, Media and Sport confirmed such a legal duty remains on the table. “We have heard calls for an internet regulator and to place a statutory ‘duty of care’ on platforms, and are seriously considering all options,” it said.

There’s little doubt that the prospect of safety-related legislation incoming in a major market for the platform — combined with public attention on Molly’s tragedy — has propelled the issue to the top of the Instagram chief’s inbox.

Mosseri writes now that Instagram began “a comprehensive review last week” with a focus on “supporting young people”, adding that the revised approach entails reviewing content policies, investing in technology to “better identify sensitive images at scale” and applying measures to make such content “less discoverable”.

He also says it’s “working on more ways” to link vulnerable users to third party resources, such as by connecting them with organisations it already works with on user support, including Papyrus and Samaritans. But he concedes the platform needs to “do more to consider the effect of self-harm images on those who may be inclined to follow suit” — not just on the poster themselves.

“This week we are meeting experts and academics, including Samaritans, Papyrus and Save.org, to talk through how we answer these questions,” he adds. “We are committed to publicly sharing what we learn. We deeply want to get this right and we will do everything we can to make that happen.”

We’ve reached out to Facebook, Instagram’s parent, for further comment.

One way user-generated content platforms could support the goal of better understanding impacts of their own distribution and amplification algorithms is to provide high quality data to third party researchers so they can interrogate platform impacts.

That was another of the recommendations from the UK’s science and technology committee last week. But it’s not yet clear whether Mosseri’s commitment to sharing what Instagram learns from meetings with academics and experts will also result in data flowing the other way — i.e. with the proprietary platform sharing its secrets with experts so they can robustly and independently study social media’s antisocial impacts.

Recommendation algorithms lie at the center of many of social media’s perceived ills — and the problem scales far beyond any one platform. YouTube’s recommendation engines have, for example, also long been criticized for having a similar ‘radicalizing’ impact — such as by pushing viewers of conservative content towards far more extreme/far right and/or conspiracy theorist views.

With the huge platform power of tech giants in the spotlight, it’s clear that calls for increased transparency will only grow — unless or until regulators make access to and oversight of platforms’ data and algorithms a legal requirement.

Fake news ‘threat to democracy’ report gets back-burner response from UK gov’t

The UK government has rejected a parliamentary committee’s call for a levy on social media firms to fund digital literacy lessons to combat the impact of disinformation online.

The recommendation of a levy on social media platforms was made by the Digital, Culture, Media and Sport committee three months ago, in a preliminary report following a multi-month investigation into the impact of so-called ‘fake news’ on democratic processes.

Though the committee has suggested the terms ‘misinformation’ and ‘disinformation’ be used instead, to better pin down exact types of problematic inauthentic content — and on that at least the government agrees. But just not on very much else. At least not yet.

Among around 50 policy suggestions in the interim report — which the committee rushed out precisely to call for “urgent action” to ‘defend democracy’ — it urged the government to put forward proposals for an education levy on social media.

But in its response, released by the committee today, the government writes that it is “continuing to build the evidence base on a social media levy to inform our approach in this area”.

“We are aware that companies and charities are undertaking a wide range of work to tackle online harms and would want to ensure we do not negatively impact existing work,” it adds, suggesting it’s most keen not to be accused of making a tricky problem worse.

Earlier this year the government did announce plans to set up a dedicated national security unit to combat state-led disinformation campaigns, with the unit expected to monitor social media platforms to support faster debunking of online fakes — by being able to react more quickly to co-ordinated interference efforts by foreign states.

But going a step further and requiring social media platforms themselves to pay a levy to fund domestic education programs — to arm citizens with critical thinking capabilities so people can more intelligently parse content being algorithmically pushed at them — is not, apparently, forming part of government’s current thinking.

Though it is not taking the idea of some form of future social media tax off the table entirely, as it continues seeking ways to make big tech pay a fairer share of earnings into the public purse, also noting in its response: “We will be considering any levy in the context of existing work being led by HM Treasury in relation to corporate tax and the digital economy.”

As a whole, the government’s response to the DCMS committee’s laundry list of policy recommendations around the democratic risks of online disinformation can be summed up in a word as ‘cautious’ — with only three of the report’s forty-two recommendations being accepted outright, as the committee tells it, and four fully rejected.

Most of the rest are being filed under ‘come back later — we’re still looking into it’.

So if you take the view that ‘fake news’ online has already had a tangible and worrying impact on democratic debate, the government’s response will come across as underwhelming and lacking in critical urgency. (Though it’s hardly alone on that front.)

The committee has reacted with disappointment — with chair Damian Collins dubbing the government response “disappointing and a missed opportunity”, and also accusing ministers of hiding behind ‘ongoing investigations’ to avoid commenting on the committee’s call that the UK’s National Crime Agency urgently carry out its own investigation into “allegations involving a number of companies”.

Earlier this month Collins also called for the Met Police to explain why they had not opened an investigation into Brexit-related campaign spending breaches.

It has also this month emerged that the force will not examine claims of Russian meddling in the referendum.

Meanwhile the political circus and business uncertainty triggered by the Brexit vote goes on.

Holding pattern

The bulk of the government’s response to the DCMS interim report entails flagging a number of existing and/or ongoing consultations and reviews — such as the ‘Protecting the Debate: Intimidation, Influence and Information‘ consultation, which it launched this summer.

But by saying it’s continuing to gather evidence on a number of fronts the government is also saying it does not feel it’s necessary to rush through any regulatory responses to technology-accelerated, socially divisive/politically sensitive viral nonsense — claiming also that it hasn’t seen any evidence that malicious misinformation has been able to skew genuine democratic debate on the domestic front.

It’ll be music to Facebook’s ears given the awkward scrutiny the company has faced from lawmakers at home and, indeed, elsewhere in Europe — in the wake of a major data misuse scandal with a deeply political angle.

The government also points multiple times to a forthcoming oversight body which is in the process of being established — aka the Centre for Data Ethics and Innovation — saying it expects this to grapple with a number of the issues of concern raised by the committee, such as ad transparency and targeting; and to work towards agreeing best practices in areas such as “targeting, fairness, transparency and liability around the use of algorithms and data-driven technologies”.

Identifying “potential new regulations” is another stated role for the future body. Though given it’s not yet actively grappling with any of these issues, the UK’s democratically concerned citizens are simply being told to wait.

“The government recognises that as technological advancements are made, and the use of data and AI becomes more complex, our existing governance frameworks may need to be strengthened and updated. That is why we are setting up the Centre,” the government writes, still apparently questioning whether legislative updates are needed — this in a response to the committee’s call, informed by its close questioning of tech firms and data experts, for an oversight body to be able to audit “non-financial” aspects of technology companies (including security mechanisms and algorithms) to “ensure they are operating responsibly”.

“As set out in the recent consultation on the Centre, we expect it to look closely at issues around the use of algorithms, such as fairness, transparency, and targeting,” the government continues, noting that details of the body’s initial work program will be published in the fall — when it says it will also put out its response to the aforementioned consultation.

It does not specify when the ethics body will be in any kind of position to hit this shifting ground running. So again there’s zero sense the government intends to act at a pace commensurate with the fast-changing technologies in question.

Then, where the committee’s recommendations touch on the work of existing UK oversight bodies, such as the Competition and Markets Authority, the ICO data watchdog, the Electoral Commission and the National Crime Agency, the government dodges specific concerns by suggesting it’s not appropriate for it to comment “on independent bodies or ongoing investigations”.

Also notable: It continues to reject entirely the idea that Russian-backed disinformation campaigns have had any impact on domestic democratic processes at all — despite public remarks by prime minister Theresa May last year generally attacking Putin for weaponizing disinformation for election interference purposes.

Instead it writes:

We want to reiterate, however, that the Government has not seen evidence of successful use of disinformation by foreign actors, including Russia, to influence UK democratic processes. But we are not being complacent and the Government is actively engaging with partners to develop robust policies to tackle this issue.

Its response on this point also makes no reference to the extensive use of social media platforms to run political ads targeting the 2016 Brexit referendum.

Nor does it make any note of the historic lack of transparency of such ad platforms. Which means that it’s simply not possible to determine where all the ad money came from to fund digital campaigning on domestic issues — with Facebook only just launching a public repository of who is paying for political ads and badging them as such in the UK, for example.

The elephant in the room is of course that ‘lack of evidence’ is not necessarily evidence of a lack of success, especially when it’s so hard to extract data from opaque adtech platforms in the first place.

Moreover, just this week fresh concerns have been raised about how platforms like Facebook are still enabling dark ads to target political messages at citizens — without it being transparently clear who is actually behind and paying for such campaigns…

In turn triggering calls from opposition MPs for updates to UK election law…

Yet the government, busily embroiled as it still is with trying to deliver some kind of Brexit outcome, is seemingly unconcerned by all this unregulated political advertising rumbling on in the background.

It also directly brushes off the committee’s call for it to state how many investigations are currently being carried out into Russian interference in UK politics, saying only that it has taken steps to ensure there is a “coordinated structure across all relevant UK authorities to defend against hostile foreign interference in British politics, whether from Russia or any other State”, before reiterating: “There has, however, been no evidence to date of any successful foreign interference.”

This summer the Electoral Commission found that the official Vote Leave campaign in the UK’s in/out EU referendum had broken campaign spending rules — with social media platforms being repurposed as the unregulated playing field where election law could be diddled at such scale. That much is clear.

The DCMS committee had backed the Commission’s call for digital imprint requirements for electronic campaigns to level the playing field between digital and print ads.

However the government has failed to back even that pretty uncontroversial call, merely pointing again to a public consultation (which ends today) on proposed changes to electoral law. So it’s yet more wait and see.

The committee is also disappointed about the lack of government response to its call for the Commission to establish a code for advertising through social media during election periods; and its recommendation that “Facebook and other platforms take responsibility for the way their platforms are used” — noting also the government made “no response to Facebook’s failure to respond adequately to the Committee’s inquiry and Mark Zuckerberg’s reluctance to appear as a witness“. (A reluctance that really enraged the committee.)

In a statement on the government’s response, committee chair Damian Collins writes: “The government’s response to our interim report on disinformation and ‘fake news’ is disappointing and a missed opportunity. It uses other ongoing investigations to further delay desperately needed announcements on the ongoing issues of harmful and misleading content being spread through social media.

“We need to see a more coordinated approach across government to combat campaigns of disinformation being organised by Russian agencies seeking to disrupt and undermine our democracy. The government’s response gives us no real indication of what action is being taken on this important issue.”

Collins finds one slender crumb of comfort, though, that the government might have some appetite to rule big tech.

After the committee had called for government to “demonstrate how seriously it takes Facebook’s apparent collusion in spreading disinformation in Burma, at the earliest opportunity”, the government writes that it “has made it clear to Facebook, and other social media companies, that they must do more to remove illegal and harmful content”, also noting that its forthcoming Online Harms White Paper will include “a range of policies to tackle harmful content”.

“We welcome though the strong words from the Government in its demand for action by Facebook to tackle the hate speech that has contributed to the ethnic cleansing of the Rohingya in Burma,” notes Collins, adding: “We will be looking for the government to make progress on these and other areas in response to our final report which will be published in December.

“We will also be raising these issues with the Secretary of State for DCMS, Jeremy Wright, when he gives evidence to the Committee on Wednesday this week.”

(Wright being the new minister in charge of the UK’s digital brief, after Matt Hancock moved over to health.)

We’ve reached out to Facebook for comment on the government’s call for a more robust approach to illegal hate speech.

Last week the company announced it had hired former UK deputy prime minister, Nick Clegg, to be its new head of global policy and comms — apparently signalling a willingness to pay a bit more attention to European regulators.

Digital minister’s app lands on data watchdog’s radar after privacy cock-up


UK digital minister Matt Hancock, who’s currently busy with legislative updates to the national data protection framework, including to bring it in line with the EU’s strict new privacy regime, nonetheless found time to launch an own-brand social networking app this week.

The eponymously titled: Matt Hancock MP App.

To cut a long story short, the Matt app quickly ran into a storm of criticism for displaying an unfortunately lax attitude to privacy and data protection. Such as pasting in what appeared to be a very commercially minded privacy policy — which iOS users couldn’t even see prior to agreeing to it in order to download the app… [Insert facepalm emoji of choice]

In the words of one privacy consultant, who quickly raised concerns via Twitter: “You’d think the Digital Minister and one responsible for data protection package would get privacy right.”

Well — news just in! — the UK’s data protection watchdog isn’t entirely sure about that latter point, because it’s now looking into the app’s operation after privacy concerns were raised.

“We are checking reports about the operation of this app and have seen other similar examples of such concerns in apps as they are developed. So to help developers, we produced specific guidance on privacy in mobile apps,” an ICO spokesperson told TechCrunch in response to questions about the Matt app.

“The Data Protection Act exists to protect individuals’ privacy. Anyone developing an app needs to comply with data protection laws, ensuring privacy is at the forefront of their design,” the spokesperson added, pointing to the agency’s contact page as a handy resource for “anybody with concerns about how their personal data has been handled”.

(For the full lowdown on the Matt Hancock privacy snafu, I suggest reading The Register‘s gloriously titled report: What a Hancock-up: MP’s social network app is a privacy disaster.

This forensic Twitter thread, by the aforementioned consultant, @PrivacyMatters, is also a great exploration of the myriad areas where Matt Hancock’s app appears to be messing up in data protection T&C terms.)

Of course the minister didn’t intend to generate his own personal privacy snafu.

He intended the Matt Hancock App to be a place for people in his West Suffolk constituency to keep up on news about Matt Hancock, MP.

Among the touted “Core benefits for Constituents” are:

  • Never miss out on local matters via private networks
  • A safe, trusted, environment where abuse is not tolerated and user data is not exploited

But Hancock outsourced the app’s development to a UK company called Disciple Media, which builds so-called “mobile-first community platforms” for third parties — including musicians and social media influencers.

And whose privacy policy is replete with circumspect words like “may” and “including” — making it about as clear as mud what exactly the company (and indeed what Matt Hancock MP) will be doing with Matt Hancock App users’ personal data.

Here’s a sample problematic para from the app’s privacy policy (emphasis ours):

when you sign up [to?] the App you provide consent so that we may disclose your personal information to the Publisher, the Publisher’s management company, agent, rights image company, the Publisher’s record label or publisher (as applicable) and any other third parties, for use in conjunction with additional user promotions or offers they may run from time to time or in relation to the sale of other goods and services. You may unsubscribe from such promotions or offers or communications at any time by following the instructions set out in such promotion or offer or communication;

If you’re wondering whether Hancock has also started his own rock band or record label; spoiler — as far as we’re aware he hasn’t. Rather, as we understand it, the policy issued with the app was originally created for musician clients which Disciple more often works with (one example on that front: The Rolling Stones).

We also understand the privacy policy was uploaded in error to the Matt app, according to sources familiar with the matter, and it is in the process of being reviewed for possible amendments.

Tapping around in the app itself, other aspects also point to it having been rushed out — for example, expanding comments didn’t seem to work for some of the posts we tried. And tapping the three dots in the upper corner of photos sometimes does nothing; sometimes asks if you want to ‘turn off notifications’; and sometimes offers both choices, plus a third option asking if you want to report a post.

Meanwhile, as others have pointed out, because the app is named after the man himself, users who choose to upload an image get the unfortunate notification that “Matt Hancock would like to access your photos”. Awkward, to say the least.

It’s less clear whether reports that the app might also be breaching iOS rules — by accessing users’ photos even after they’ve denied camera roll access — stand up to scrutiny, since iOS 11 does let users grant one-time access to a photo.
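For context, here is a minimal, generic sketch of the two standard iOS photo-access routes in play — blanket library authorization versus the out-of-process picker — using Apple’s UIKit and Photos frameworks. Nothing below is taken from the Matt Hancock app itself; the class and method names are purely illustrative.

```swift
import UIKit
import Photos

// The system permission dialog takes its wording from the app's display name,
// which is why an app named after its author prompts "Matt Hancock would like
// to access your photos". The app's Info.plist must also carry an
// NSPhotoLibraryUsageDescription string explaining why access is needed.
final class PhotoUploadViewController: UIViewController,
    UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    // Route 1: ask for blanket photo-library authorization up front.
    func requestFullLibraryAccess() {
        PHPhotoLibrary.requestAuthorization { status in
            // .authorized, .denied, etc. — if granted, the app can read the whole library.
            print("Photo library authorization status: \(status.rawValue)")
        }
    }

    // Route 2: present UIImagePickerController. Since iOS 11 it runs out of
    // process and hands back only the single image the user picks, so no
    // blanket library permission is required — the "one-time access"
    // behaviour referred to above.
    func pickSinglePhoto() {
        let picker = UIImagePickerController()
        picker.sourceType = .photoLibrary
        picker.delegate = self
        present(picker, animated: true)
    }

    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        let image = info[.originalImage] as? UIImage
        picker.dismiss(animated: true)
        // Hand `image` off to the app's upload flow here.
        _ = image
    }
}
```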

Hancock’s parliamentary office is deferring all awkward questions about the Matt Hancock App to Disciple. We know because we rang and they redirected us to the company’s contact details.

We wanted to ask Hancock’s people what user data his office is harvesting, via his own-brand app, and what the data will be used for. And why Hancock decided to build the app with Disciple (which the app’s press release specifies hasn’t been paid; the company is seemingly providing the service as a donation in kind — presumably in the hope of associated publicity, so, er, careful what you wish for).

We also wanted to know what Hancock thought he could achieve by launching an own-brand app which isn’t already possible to do with pre-existing communication tools (and via constituency surgeries).

And whether the app was vetted by any government agencies prior to launch — given Hancock’s position as a sitting minister, and the potential for some wider reputational damage on account of the unfortunate juxtaposition with his ministerial portfolio.

Eventually a different Hancock staffer sent us this statement: “This app is ICO registered and GDPR compliant. It is consistent with measures in the Data Protection Bill currently before Parliament. And is App Store certified by Apple, using standard Apple technology.”

Re: GDPR, we suggest the minister reads our primer because we’re rather less confident than he apparently is that his app, as is, under this current privacy policy and structure, would pass muster under the new EU-wide standard (which comes into force in May).

As regards the why of the Matt app, the staffer sent us a line from Matt’s weekly newsletter — where he writes: “Working with a brilliant British startup called Disciple Media, I’ve launched this app to build a safe, moderated, digital community where my West Suffolk constituents and I can discuss the issues that matter to them.”

Hancock’s office did not respond to our questions about the exact data they are collecting and for what specific purposes (pro tip: That’s basically a GDPR requirement guys!).

But we’ll update this post if the minister delivers any further insights on the digital activity being done under (and in) his name. (As an aside, an email we sent to his constituency email address also bounced back with a fatal delivery error. Digital credibility score at this point: distressingly low.)

Meanwhile, Disciple Media has so far declined to provide a public response to our questions — though they have promised a statement. Which we’ll drop in here when/if it lands.

The company is in the process of pivoting its business model from a revenue share arrangement to a SaaS monthly subscription — which a spokesman describes as “more ‘easy Squarespace for mobile/mobile web communities’ than ‘social media’”.

So — in theory at least — the business should be heading away from the need to lean on the data slurping of app users’ personal information to power marketing-generated revenues to keep the money rolling in. At least if it gets enough paying monthly customers (Hancock not being one of them).

We’re told it has relied on private investment thus far but is also actively seeking to raise VC.