All posts in “Matt Hancock”

Social media firms agree to work with UK charities to set online harm boundaries

Social media giants, including Facebook-owned Instagram, have agreed to financially contribute to UK charities, funding them to make recommendations that the government hopes will speed up decisions about removing content that promotes suicide, self-harm or eating disorders on their platforms.

The development follows the latest intervention by health secretary Matt Hancock, who met with representatives from Facebook, Instagram, Twitter, Pinterest, Google and others yesterday to discuss what they’re doing to tackle a range of online harms.

“Social media companies have a duty of care to people on their sites. Just because they’re global doesn’t mean they can be irresponsible,” he said today.

“We must do everything we can to keep our children safe online so I’m pleased to update the House that as a result of yesterday’s summit, the leading global social media companies have agreed to work with experts… to speed up the identification and removal of suicide and self-harm content and create greater protections online.”

However, he failed to get any new commitments from the companies to do more to tackle anti-vaccination misinformation — despite saying last week that he would lean heavily on the tech giants to remove such content, warning it posed a serious risk to public health.

Giving an update on his latest social media summit in parliament this afternoon, Hancock said the companies had agreed to do more to address a range of online harms — while emphasizing there’s more for them to do, including addressing anti-vaccination misinformation.

“The rise of social media now makes it easier to spread lies about vaccination so there is a special responsibility on the social media companies to act,” he said, noting that coverage for the measles, mumps and rubella vaccination in England decreased for the fourth year in a row last year — dropping to 91%.

There has been a rise in confirmed measles cases from 259 to 966 over the same period, he added.

With no sign of an agreement from the companies to take tougher action on anti-vaccination misinformation, Hancock was left to repeat their preferred talking point to MPs, segueing into suggesting social media has the potential to be a “great force for good” on the vaccination front — i.e. if it “can help us to promote positive messages” about the public health value of vaccines.

For the two other online harm areas of focus, suicide/self-harm content and eating disorders, suicide support charity Samaritans and eating disorder charity Beat were named as the two U.K. organizations that would be working with the social media platforms to make recommendations for when content should and should not be taken down.

“[Social media firms will] not only financially support the Samaritans to do the work but crucially Samaritans’ suicide prevention experts will determine what is harmful and dangerous content, and the social media platforms committed to either remove it or prevent others from seeing it and help vulnerable people get the positive support they need,” said Hancock.

“This partnership marks for the first time globally a collective commitment to act, to build knowledge through research and insights — and to implement real changes that will ultimately save lives,” he added.

The Telegraph reports that the value of the financial contribution from the social media platforms to the Samaritans for the work will be “hundreds of thousands” of pounds. And during questions in parliament MPs pointed out the amount pledged is tiny vs the massive profits commanded by the companies. Hancock responded that it was what the Samaritans had asked for to do the work, adding: “Of course I’d be prepared to go and ask for more if more is needed.”

The minister was also pressed from the opposition benches on the timeline for results from the social media companies on tackling “the harm and dangerous fake news they host”.

“We’ve already seen some progress,” he responded — flagging a policy change announced by Instagram and Facebook back in February, following a public outcry after a report about a UK schoolgirl whose family said she killed herself after being exposed to graphic self-harm content on Instagram.

“It’s very important that we keep the pace up,” he added, saying he’ll be holding another meeting with the companies in two months to see what progress has been made.

“We’ll expect… that we’ll see further action from the social media companies. That we will have made progress in the Samaritans being able to define more clearly what the boundary is between harmful content and content which isn’t harmful.

“In each of these areas about removing harms online the challenge is to create the right boundary in the appropriate place… so that the social media companies don’t have to define what is and isn’t socially acceptable. But rather we as society do.”

In a statement following the meeting with Hancock, a spokesperson for Facebook and Instagram said: “We fully support the new initiative from the government and the Samaritans, and look forward to our ongoing work with industry to find more ways to keep people safe online.”

The company also noted that it’s been working with expert organisations, including the Samaritans, for “many years to find more ways to do that” — suggesting it’s quite comfortable playing the familiar political game of ‘more of the same’.

That said, the UK government has made tackling online harms a stated policy priority — publishing a proposal for a regulatory framework intended to address a range of content risks earlier this month, when it also kicked off a 12-week public consultation.

Though there’s clearly a long road ahead to agree a law that’s enforceable, let alone effective.

Hancock resisted providing MPs with any timeline for progress on the planned legislation — telling parliament “we want to genuinely consult widely”.

“This isn’t really an issue of party politics. It’s a matter of getting it right so that society decides on how we should govern the Internet, rather than the big Internet companies making those decisions for themselves,” he added.

The minister was also asked by the shadow health secretary, Jonathan Ashworth, to guarantee that the legislation will include provision for criminal sentences for executives for serious breaches of their duty of care. But Hancock failed to respond to the question. 

UK health minister leans on social media platforms to delete anti-vax content

Social media-fuelled anti-vaxxer propaganda is the latest online harm the U.K. government is targeting.

Speaking on BBC Radio 4’s Today program this morning health secretary Matt Hancock said he will meet with representatives from social media platforms on Monday to pressure them into doing more to prevent false information about the safety of vaccinations from being amplified by their platforms.

“I’m seeing them on Monday to require that they do more to take down wrong — well lies essentially — that are promoted on social media about the impact of vaccination,” he said, when asked about a warning by a U.K. public health body about the risk of a public health emergency being caused by an increase in the number of British children who have not received the measles vaccination.

“Vaccination is safe; it’s very, very important for the public health, for everybody’s health and we’re going to tackle it.”

The head of NHS England also warned last month about anti-vaccination messages gaining traction on social media.

“We need to tackle this risk in people not vaccinating,” Hancock added. “One of the things I’m particularly worried about is the spread of anti-vaccination messages online. I’ve called in the social media companies like we had to for self-harming imagery a couple of months ago.”

Hancock, who between 2016 and 2018 served as the U.K.’s digital minister, prior to taking over the health brief, held a similar meeting with the boss of Instagram earlier this year.

That followed a public outcry over suicide content spreading on Instagram after a British schoolgirl was reported to have been encouraged to kill herself by viewing graphic content on the Facebook-owned platform.

Instagram subsequently announced a policy change saying it would remove graphic images of self harm and demote non-graphic self-harm images so they don’t show up in searches, relevant hashtags or the explore tab.

But it remains to be seen whether platforms will be as immediately responsive to amped up political pressure to scrub anti-vaccination content entirely given the level of support this kind of misinformation can attract among social media users.

Earlier this year Facebook said it would downrank anti-vax content in the News Feed and hide it on Instagram in an effort to minimize the spread of vaccination misinformation.

It also said it would point users toward “authoritative” vaccine-related information — i.e. information that’s been corroborated by the health and scientific establishment.

But deleting such content entirely was not part of Facebook’s announced strategy.

We’ve reached out to Facebook for any response to Hancock’s comments.

In the longer term social media platforms operating in the U.K. could face laws that require them to remove content deemed to pose a risk to public health if ordered to by a dedicated regulator, as a result of a wide-ranging government plan to tackle a range of online harms.

Earlier this month the U.K. government set out a broad policy plan for regulating online harms.

The Online Harms Whitepaper proposes to put a mandatory duty of care on platforms to take reasonable steps to protect users from a range of harms — including those linked to the spread of disinformation.

It also proposes a dedicated, overarching regulator to oversee internet companies to ensure they meet their responsibilities.

The government is currently running a public consultation on the proposals, which ends July 1, after which it says it will set out any next actions as it works on developing draft legislation.

Dating apps face questions over age checks after report exposes child abuse

The UK government has said it could legislate to require age verification checks on users of dating apps, following an investigation into underage use of dating apps published by the Sunday Times yesterday.

The newspaper found more than 30 cases of child rape have been investigated by police related to use of dating apps including Grindr and Tinder since 2015. It reports that one 13-year-old boy with a profile on the Grindr app was raped or abused by at least 21 men. 

The Sunday Times also found 60 further instances of child sex offences related to the use of online dating services — including grooming, kidnapping and violent assault, according to the BBC, which covered the report.

The youngest victim is reported to have been just eight years old. The newspaper obtained the data via freedom of information requests to UK police forces.

Responding to the Sunday Times’ investigation, a Tinder spokesperson told the BBC it uses automated and manual tools, and spends “millions of dollars annually”, to prevent and remove underage users and other inappropriate behaviour, saying it does not want minors on the platform.

Grindr also reacted to the report, providing the Times with a statement saying: “Any account of sexual abuse or other illegal behaviour is troubling to us as well as a clear violation of our terms of service. Our team is constantly working to improve our digital and human screening tools to prevent and remove improper underage use of our app.”

We’ve also reached out to the companies with additional questions.

The UK’s secretary of state for digital, media, culture and sport (DCMS), Jeremy Wright, dubbed the newspaper’s investigation “truly shocking”, describing it as further evidence that “online tech firms must do more to protect children”.

He also suggested the government could expand forthcoming age verification checks for accessing pornography to include dating apps — saying he would write to the dating app companies to ask “what measures they have in place to keep children safe from harm, including verifying their age”.

“If I’m not satisfied with their response, I reserve the right to take further action,” he added.

Age verification checks for viewing online porn are due to come into force in the UK in April, as part of the Digital Economy Act.

Those age checks, which are clearly not without controversy given the huge privacy considerations of creating a database of adult identities linked to porn viewing habits, have also been driven by concern about children’s exposure to graphic content online.

Last year the UK government committed to legislating on social media safety too, although it has yet to set out the detail of its policy plans. But a white paper is due imminently.

A parliamentary committee which reported last week urged the government to put a legal ‘duty of care’ on platforms to protect minors.

It also called for more robust systems for age verification. So it remains at least a possibility that some types of social media content could be age-gated in the country in future.

Last month the BBC reported on the death of a 14-year-old schoolgirl who killed herself in 2017 after being exposed to self-harm imagery on Instagram.

Following the report, Instagram’s boss met with Wright and the UK’s health secretary, Matt Hancock, to discuss concerns about the impact of suicide-related content circulating on the platform.

After the meeting, Instagram last week announced it would ban graphic images of self-harm.

Earlier the same week the company responded to the public outcry over the story by saying it would no longer allow suicide-related content to be promoted via its recommendation algorithms or surfaced via hashtags.

Also last week, the government’s chief medical advisors called for a code of conduct for social media platforms to protect vulnerable users.

The medical experts also called for greater transparency from platform giants to support public interest-based research into the potential mental health impacts of their platforms.

Instagram’s Adam Mosseri to meet UK health secretary over suicide content concerns

The still fresh-in-post boss of Instagram, Adam Mosseri, has been asked to meet the UK’s health secretary, Matt Hancock, to discuss the social media platform’s handling of content that promotes suicide and self harm, the BBC reports.

Mosseri’s summons follows an outcry in the UK over disturbing content being recommended to vulnerable users of Instagram, after the death of 14-year-old schoolgirl Molly Russell, who killed herself in 2017.

After her death, Molly’s family discovered she had been following a number of Instagram accounts that encouraged self-harm. Speaking to the BBC last month Molly’s father said he did not doubt the platform had played a role in her decision to kill herself.

Writing in the Telegraph newspaper today, Mosseri makes direct reference to Molly’s tragedy, saying he has been “deeply moved” by her story and those of other families affected by self-harm and suicide, before going on to admit that Instagram is “not yet where we need to be on the issues”.

“We rely heavily on our community to report this content, and remove it as soon as it’s found,” he writes, conceding that the platform has offloaded the lion’s share of responsibility for content policing onto users thus far. “The bottom line is we do not yet find enough of these images before they’re seen by other people,” he admits.

Mosseri then uses the article to announce a couple of policy changes in response to the public outcry over suicide content.

This week, he says, Instagram will begin adding “sensitivity screens” to all content it reviews which “contains cutting”. “These images will not be immediately visible, which will make it more difficult for people to see them,” he suggests.

Though that clearly won’t stop fresh uploads from being distributed unscreened. (Nor prevent young and vulnerable users clicking to view disturbing content regardless.)

Mosseri justifies Instagram’s decision not to blanket-delete all content related to self-harm and/or suicide by saying its policy is to “allow people to share that they are struggling even if that content no longer shows up in search, hashtags or account recommendations”.

“We’ve taken a hard look at our work and though we have been focused on the individual who is vulnerable to self-harm, we need to do more to consider the effect of self-harm images on those who may be inclined to follow suit,” he continues. “This is a difficult but important balance to get right. These issues will take time, but it’s critical we take big steps forward now. To that end we have started to make changes.”

Another policy change he reveals is that Instagram will stop its algorithms actively recommending additional self-harm content to vulnerable users. “[F]or images that don’t promote self-harm, we let them stay on the platform, but moving forward we won’t recommend them in search, hashtags or the Explore tab,” he writes.

Unchecked recommendations have opened Instagram up to accusations that it essentially encourages depressed users to self-harm (or even suicide) by pushing more disturbing content into their feeds once they start to show an interest.

So putting limits on how algorithms distribute and amplify sensitive content is an obvious and overdue step — but one that’s taken significant public and political pressure for the Facebook-owned company to make.

Last year the UK government announced plans to legislate on social media and safety, though it has yet to publish details of its plans (a white paper setting out platforms’ responsibilities is expected in the next few months). But just last week a UK parliamentary committee also urged the government to place a legal ‘duty of care’ on platforms to protect minors.

In a statement given to the BBC, the Department for Digital, Culture, Media and Sport confirmed such a legal duty remains on the table. “We have heard calls for an internet regulator and to place a statutory ‘duty of care’ on platforms, and are seriously considering all options,” it said.

There’s little doubt that the prospect of safety-related legislation incoming in a major market for the platform — combined with public attention on Molly’s tragedy — has propelled the issue to the top of the Instagram chief’s inbox.

Mosseri writes now that Instagram began “a comprehensive review last week” with a focus on “supporting young people”, adding that the revised approach entails reviewing content policies, investing in technology to “better identify sensitive images at scale” and applying measures to make such content “less discoverable”.

He also says it’s “working on more ways” to link vulnerable users to third party resources, such as by connecting them with organisations it already works with on user support, such as Papyrus and Samaritans. But he concedes the platform needs to “do more to consider the effect of self-harm images on those who may be inclined to follow suit” — not just on the poster themselves. 

“This week we are meeting experts and academics, including Samaritans and Papyrus, to talk through how we answer these questions,” he adds. “We are committed to publicly sharing what we learn. We deeply want to get this right and we will do everything we can to make that happen.”

We’ve reached out to Facebook, Instagram’s parent, for further comment.

One way user-generated content platforms could support the goal of better understanding impacts of their own distribution and amplification algorithms is to provide high quality data to third party researchers so they can interrogate platform impacts.

That was another of the recommendations from the UK’s science and technology committee last week. But it’s not yet clear whether Mosseri’s commitment to sharing what Instagram learns from meetings with academics and experts will also result in data flowing the other way — i.e. with the proprietary platform sharing its secrets with experts so they can robustly and independently study social media’s antisocial impacts.

Recommendation algorithms lie at the center of many of social media’s perceived ills — and the problem scales far beyond any one platform. YouTube’s recommendation engines have, for example, also long been criticized for having a similar ‘radicalizing’ impact — such as by pushing viewers of conservative content toward far more extreme, far-right and/or conspiracy theorist views.

With the huge platform power of tech giants in the spotlight, it’s clear that calls for increased transparency will only grow — unless or until regulators make access to and oversight of platforms’ data and algorithms a legal requirement.

Fake news ‘threat to democracy’ report gets back-burner response from UK gov’t

The UK government has rejected a parliamentary committee’s call for a levy on social media firms to fund digital literacy lessons to combat the impact of disinformation online.

The recommendation of a levy on social media platforms was made by the Digital, Culture, Media and Sport committee three months ago, in a preliminary report following a multi-month investigation into the impact of so-called ‘fake news’ on democratic processes.

The committee has since suggested the terms ‘misinformation’ and ‘disinformation’ be used instead, to better pin down exact types of problematic inauthentic content — and on that at least the government agrees. Just not on very much else. At least not yet.

Among around 50 policy suggestions in the interim report — which the committee published early precisely to call for “urgent action” to ‘defend democracy’ — it urged the government to put forward proposals for an education levy on social media.

But in its response, released by the committee today, the government writes that it is “continuing to build the evidence base on a social media levy to inform our approach in this area”.

“We are aware that companies and charities are undertaking a wide range of work to tackle online harms and would want to ensure we do not negatively impact existing work,” it adds, suggesting it’s most keen not to be accused of making a tricky problem worse.

Earlier this year the government did announce plans to set up a dedicated national security unit to combat state-led disinformation campaigns, with the unit expected to monitor social media platforms to support faster debunking of online fakes — by being able to react more quickly to co-ordinated interference efforts by foreign states.

But going a step further and requiring social media platforms themselves to pay a levy to fund domestic education programs — to arm citizens with critical thinking capabilities so people can more intelligently parse content being algorithmically pushed at them — is not, apparently, forming part of government’s current thinking.

Though it is not taking the idea of some form of future social media tax off the table entirely, as it continues seeking ways to make big tech pay a fairer share of earnings into the public purse, also noting in its response: “We will be considering any levy in the context of existing work being led by HM Treasury in relation to corporate tax and the digital economy.”

As a whole, the government’s response to the DCMS committee’s laundry list of policy recommendations around the democratic risks of online disinformation can be summed up in one word: ‘cautious’. Only three of the report’s forty-two recommendations were accepted outright, as the committee tells it, and four fully rejected.

Most of the rest are being filed under ‘come back later — we’re still looking into it’.

So if you take the view that ‘fake news’ online has already had a tangible and worrying impact on democratic debate the government’s response will come across as underwhelming and lacking in critical urgency. (Though it’s hardly alone on that front.)

The committee has reacted with disappointment — with chair Damian Collins dubbing the government response “disappointing and a missed opportunity”, and also accusing ministers of hiding behind ‘ongoing investigations’ to avoid commenting on the committee’s call that the UK’s National Crime Agency urgently carry out its own investigation into “allegations involving a number of companies”.

Earlier this month Collins also called for the Met Police to explain why they had not opened an investigation into Brexit-related campaign spending breaches.

It has also this month emerged that the force will not examine claims of Russian meddling in the referendum.

Meanwhile the political circus and business uncertainty triggered by the Brexit vote goes on.

Holding pattern

The bulk of the government’s response to the DCMS interim report entails flagging a number of existing and/or ongoing consultations and reviews — such as the ‘Protecting the Debate: Intimidation, Influence and Information’ consultation, which it launched this summer.

But by saying it’s continuing to gather evidence on a number of fronts the government is also saying it does not feel it’s necessary to rush through any regulatory responses to technology-accelerated, socially divisive/politically sensitive viral nonsense — claiming also that it hasn’t seen any evidence that malicious misinformation has been able to skew genuine democratic debate on the domestic front.

It’ll be music to Facebook’s ears given the awkward scrutiny the company has faced from lawmakers at home and, indeed, elsewhere in Europe — in the wake of a major data misuse scandal with a deeply political angle.

The government also points multiple times to a forthcoming oversight body which is in the process of being established — aka the Centre for Data Ethics and Innovation — saying it expects this to grapple with a number of the issues of concern raised by the committee, such as ad transparency and targeting; and to work towards agreeing best practices in areas such as “targeting, fairness, transparency and liability around the use of algorithms and data-driven technologies”.

Identifying “potential new regulations” is another stated role for the future body. Though given it’s not yet actively grappling with any of these issues the UK’s democratically concerned citizens are simply being told to wait.

“The government recognises that as technological advancements are made, and the use of data and AI becomes more complex, our existing governance frameworks may need to be strengthened and updated. That is why we are setting up the Centre,” the government writes, still apparently questioning whether legislative updates are needed. This comes in response to the committee’s call, informed by its close questioning of tech firms and data experts, for an oversight body able to audit “non-financial” aspects of technology companies (including security mechanisms and algorithms) to “ensure they are operating responsibly”.

“As set out in the recent consultation on the Centre, we expect it to look closely at issues around the use of algorithms, such as fairness, transparency, and targeting,” the government continues, noting that details of the body’s initial work program will be published in the fall — when it says it will also put out its response to the aforementioned consultation.

It does not specify when the ethics body will be in any kind of position to hit this shifting ground running. So again there’s zero sense the government intends to act at a pace commensurate with the fast-changing technologies in question.

Then, where the committee’s recommendations touch on the work of existing UK oversight bodies, such as the Competition and Markets Authority, the ICO data watchdog, the Electoral Commission and the National Crime Agency, the government dodges specific concerns by suggesting it’s not appropriate for it to comment “on independent bodies or ongoing investigations”.

Also notable: It continues to reject entirely the idea that Russian-backed disinformation campaigns have had any impact on domestic democratic processes at all — despite public remarks by prime minister Theresa May last year generally attacking Putin for weaponizing disinformation for election interference purposes.

Instead it writes:

We want to reiterate, however, that the Government has not seen evidence of successful use of disinformation by foreign actors, including Russia, to influence UK democratic processes. But we are not being complacent and the Government is actively engaging with partners to develop robust policies to tackle this issue.

Its response on this point also makes no reference to the extensive use of social media platforms to run political ads targeting the 2016 Brexit referendum.

Nor does it make any note of the historic lack of transparency of such ad platforms. Which means that it’s simply not possible to determine where all the ad money came from to fund digital campaigning on domestic issues — with Facebook only just launching a public repository of who is paying for political ads and badging them as such in the UK, for example.

The elephant in the room is of course that ‘lack of evidence’ is not necessarily evidence of a lack of success, especially when it’s so hard to extract data from opaque adtech platforms in the first place.

Moreover, just this week fresh concerns have been raised about how platforms like Facebook are still enabling dark ads to target political messages at citizens — without it being transparently clear who is actually behind and paying for such campaigns…

In turn triggering calls from opposition MPs for updates to UK election law…

Yet the government, busily embroiled as it still is with trying to deliver some kind of Brexit outcome, is seemingly unconcerned by all this unregulated, background ongoing political advertising.

It also directly brushes off the committee’s call for it to state how many investigations are currently being carried out into Russian interference in UK politics, saying only that it has taken steps to ensure there is a “coordinated structure across all relevant UK authorities to defend against hostile foreign interference in British politics, whether from Russia or any other State”, before reiterating: “There has, however, been no evidence to date of any successful foreign interference.”

This summer the Electoral Commission found that the official Vote Leave campaign in the UK’s in/out EU referendum had broken campaign spending rules — with social media platforms being repurposed as the unregulated playing field where election law could be diddled at such scale. That much is clear.

The DCMS committee had backed the Commission’s call for digital imprint requirements for electronic campaigns to level the playing field between digital and print ads.

However the government has failed to back even that pretty uncontroversial call, merely pointing again to a public consultation (which ends today) on proposed changes to electoral law. So it’s yet more wait and see.

The committee is also disappointed about the lack of government response to its call for the Commission to establish a code for advertising through social media during election periods; and its recommendation that “Facebook and other platforms take responsibility for the way their platforms are used” — noting also the government made “no response to Facebook’s failure to respond adequately to the Committee’s inquiry and Mark Zuckerberg’s reluctance to appear as a witness”. (A reluctance that really enraged the committee.)

In a statement on the government’s response, committee chair Damian Collins writes: “The government’s response to our interim report on disinformation and ‘fake news’ is disappointing and a missed opportunity. It uses other ongoing investigations to further delay desperately needed announcements on the ongoing issues of harmful and misleading content being spread through social media.

“We need to see a more coordinated approach across government to combat campaigns of disinformation being organised by Russian agencies seeking to disrupt and undermine our democracy. The government’s response gives us no real indication of what action is being taken on this important issue.”

Collins finds one slender crumb of comfort, though, that the government might have some appetite to rule big tech.

After the committee had called for government to “demonstrate how seriously it takes Facebook’s apparent collusion in spreading disinformation in Burma, at the earliest opportunity”, the government writes that it “has made it clear to Facebook, and other social media companies, that they must do more to remove illegal and harmful content”, noting also that its forthcoming Online Harms White Paper will include “a range of policies to tackle harmful content”.

“We welcome though the strong words from the Government in its demand for action by Facebook to tackle the hate speech that has contributed to the ethnic cleansing of the Rohingya in Burma,” notes Collins, adding: “We will be looking for the government to make progress on these and other areas in response to our final report which will be published in December.

“We will also be raising these issues with the Secretary of State for DCMS, Jeremy Wright, when he gives evidence to the Committee on Wednesday this week.”

(Wright being the new minister in charge of the UK’s digital brief, after Matt Hancock moved over to health.)

We’ve reached out to Facebook for comment on the government’s call for a more robust approach to illegal hate speech.

Last week the company announced it had hired former UK deputy prime minister, Nick Clegg, to be its new head of global policy and comms — apparently signalling a willingness to pay a bit more attention to European regulators.