All posts in “UK Government”

The UK now has a law against upskirting

A law change that comes into force in the UK today makes the highly intrusive practice of ‘upskirting’ illegal.

The government said it wants the new law to send a clear message that such behaviour is criminal and will not be tolerated.

Perpetrators in the UK face up to two years in prison under the new law if they’re convicted of taking a photograph or video underneath a person’s clothes, without their knowledge or consent, for the purpose of viewing their underwear, genitals or buttocks, whether for sexual gratification or to cause humiliation, distress or alarm.

There have been prosecutions for upskirting in England and Wales under an existing common law offence of outraging public decency. But following a campaign started by an upskirting victim, the government decided to legislate to plug gaps in the law and make upskirting a specific sexual offence.

The Voyeurism (Offences) (No. 2) Bill was introduced on June 21 last year and gained royal assent in February.

Where the offence of upskirting is committed in order to obtain sexual gratification, the most serious offenders can be placed on the sex offenders register.

Under the new law, victims are also entitled to automatic protections, such as not being identified in the media.

While the UK government intends the law change to send a clear message that upskirting is socially unacceptable, legislation alone can’t do that. Robust enforcement is also essential to counter any problematic attitudes that encourage antisocial uses of technology in the first place.

For example, in South Korea a law against upskirting carries a maximum sentence of five years in prison, yet the legislation has failed to curb an epidemic of offences fuelled by cheap access to tiny hidden spy cameras and baked-in societal sexism. The latter also appears to influence how police choose to uphold the law, with campaigners complaining that most perpetrators get off with small fines.

Dating apps face questions over age checks after report exposes child abuse

The UK government has said it could legislate to require age verification checks on users of dating apps, following an investigation into underage use of such services published by the Sunday Times yesterday.

The newspaper found that, since 2015, police have investigated more than 30 cases of child rape linked to the use of dating apps including Grindr and Tinder. It reports that one 13-year-old boy with a profile on Grindr was raped or abused by at least 21 men.

The Sunday Times also found 60 further instances of child sex offences related to the use of online dating services — including grooming, kidnapping and violent assault, according to the BBC, which covered the report.

The youngest victim is reported to have been just eight years old. The newspaper obtained the data via freedom of information requests to UK police forces.

Responding to the Sunday Times’ investigation, a Tinder spokesperson told the BBC it uses automated and manual tools, and spends “millions of dollars annually”, to prevent and remove underage users and other inappropriate behaviour, saying it does not want minors on the platform.

Grindr also reacted to the report, providing the Times with a statement saying: “Any account of sexual abuse or other illegal behaviour is troubling to us as well as a clear violation of our terms of service. Our team is constantly working to improve our digital and human screening tools to prevent and remove improper underage use of our app.”

We’ve also reached out to the companies with additional questions.

The UK’s secretary of state for digital, culture, media and sport (DCMS), Jeremy Wright, dubbed the newspaper’s investigation “truly shocking”, describing it as further evidence that “online tech firms must do more to protect children”.

He also suggested the government could expand forthcoming age verification checks for accessing pornography to include dating apps — saying he would write to the dating app companies to ask “what measures they have in place to keep children safe from harm, including verifying their age”.

“If I’m not satisfied with their response, I reserve the right to take further action,” he added.

Age verification checks for viewing online porn are due to come into force in the UK in April, as part of the Digital Economy Act.

Those age checks, which are clearly not without controversy given the huge privacy considerations of creating a database of adult identities linked to porn viewing habits, have also been driven by concern about children’s exposure to graphic content online.

Last year the UK government committed to legislating on social media safety too, although it has yet to set out the detail of its policy plans. But a white paper is due imminently.

A parliamentary committee, which reported last week, urged the government to put a legal ‘duty of care’ on platforms to protect minors.

It also called for more robust systems for age verification. So it remains at least a possibility that some types of social media content could be age-gated in the country in future.

Last month the BBC reported on the death of a 14-year-old schoolgirl who killed herself in 2017 after being exposed to self-harm imagery on Instagram.

Following the report, Instagram’s boss met with Wright and the UK’s health secretary, Matt Hancock, to discuss concerns about the impact of suicide-related content circulating on the platform.

After the meeting, Instagram announced last week that it would ban graphic images of self-harm.

Earlier the same week, the company responded to the public outcry over the story by saying it would no longer allow suicide-related content to be promoted via its recommendation algorithms or surfaced via hashtags.

Also last week, the government’s chief medical advisors called for a code of conduct for social media platforms to protect vulnerable users.

The medical experts also called for greater transparency from platform giants to support public interest-based research into the potential mental health impacts of their platforms.

Instagram’s Adam Mosseri to meet UK health secretary over suicide content concerns

The still fresh-in-post boss of Instagram, Adam Mosseri, has been asked to meet the UK’s health secretary, Matt Hancock, to discuss the social media platform’s handling of content that promotes suicide and self-harm, the BBC reports.

Mosseri’s summons follows an outcry in the UK over disturbing content being recommended to vulnerable users of Instagram, in the wake of the 2017 suicide of 14-year-old schoolgirl Molly Russell.

After her death, Molly’s family discovered she had been following a number of Instagram accounts that encouraged self-harm. Speaking to the BBC last month Molly’s father said he did not doubt the platform had played a role in her decision to kill herself.

Writing in the Telegraph newspaper today, Mosseri makes direct reference to Molly’s tragedy, saying he has been “deeply moved” by her story and those of other families affected by self-harm and suicide, before going on to admit that Instagram is “not yet where we need to be on the issues”.

“We rely heavily on our community to report this content, and remove it as soon as it’s found,” he writes, conceding that the platform has offloaded the lion’s share of responsibility for content policing onto users thus far. “The bottom line is we do not yet find enough of these images before they’re seen by other people,” he admits.

Mosseri then uses the article to announce a couple of policy changes in response to the public outcry over suicide content.

He says that, beginning this week, Instagram will add “sensitivity screens” to all content it reviews which “contains cutting”. “These images will not be immediately visible, which will make it more difficult for people to see them,” he suggests.

Though that clearly won’t stop fresh uploads from being distributed unscreened. (Nor prevent young and vulnerable users clicking to view disturbing content regardless.)

Mosseri justifies Instagram’s decision not to blanket-delete all content related to self-harm and/or suicide by saying its policy is to “allow people to share that they are struggling even if that content no longer shows up in search, hashtags or account recommendations”.

“We’ve taken a hard look at our work and though we have been focused on the individual who is vulnerable to self harm, we need to do more to consider the effect of self-harm images on those who may be inclined to follow suit,” he continues. “This is a difficult but important balance to get right. These issues will take time, but it’s critical we take big steps forward now. To that end we have started to make changes.”

Another policy change he reveals is that Instagram will stop its algorithms actively recommending additional self-harm content to vulnerable users. “[F]or images that don’t promote self-harm, we let them stay on the platform, but moving forward we won’t recommend them in search, hashtags or the Explore tab,” he writes.

Unchecked recommendations have opened Instagram up to accusations that it essentially encourages depressed users to self-harm (or even suicide) by pushing more disturbing content into their feeds once they start to show an interest.

So putting limits on how algorithms distribute and amplify sensitive content is an obvious and overdue step, but one that’s taken significant public and political attention for the Facebook-owned company to make.
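To make the distinction concrete, here is a minimal, hypothetical sketch (not Instagram’s actual system, whose internals are not public) of the “allow but don’t recommend” approach described above: flagged posts remain visible on the platform but are excluded from the candidate pool that recommendation surfaces such as search, hashtags and Explore draw from.

```python
# Generic illustration only -- NOT Instagram's real pipeline. It shows the
# idea of demotion rather than deletion: sensitive posts stay up for the
# people who follow the poster, but are never fed into recommendation
# surfaces (search, hashtags, Explore-style feeds).
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    is_sensitive: bool  # assume an upstream reviewer or classifier sets this


def recommendation_candidates(posts: list[Post]) -> list[Post]:
    """Filter sensitive posts out of the pool used for recommendations."""
    return [p for p in posts if not p.is_sensitive]


posts = [Post("a1", False), Post("a2", True), Post("a3", False)]
print([p.post_id for p in recommendation_candidates(posts)])  # ['a1', 'a3']
```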

Last year the UK government announced plans to legislate on social media and safety, though it has yet to publish details of its plans (a white paper setting out platforms’ responsibilities is expected in the next few months). But just last week a UK parliamentary committee also urged the government to place a legal ‘duty of care’ on platforms to protect minors.

In a statement given to the BBC, the Department for Digital, Culture, Media and Sport confirmed such a legal duty remains on the table. “We have heard calls for an internet regulator and to place a statutory ‘duty of care’ on platforms, and are seriously considering all options,” it said.

There’s little doubt that the prospect of safety-related legislation incoming in a major market for the platform — combined with public attention on Molly’s tragedy — has propelled the issue to the top of the Instagram chief’s inbox.

Mosseri writes now that Instagram began “a comprehensive review last week” with a focus on “supporting young people”, adding that the revised approach entails reviewing content policies, investing in technology to “better identify sensitive images at scale” and applying measures to make such content “less discoverable”.

He also says it’s “working on more ways” to link vulnerable users to third party resources, such as connecting them with organisations it already works with on user support, including Papyrus and Samaritans. But he concedes the platform needs to “do more to consider the effect of self-harm images on those who may be inclined to follow suit”, not just on the poster themselves.

“This week we are meeting experts and academics, including Samaritans, Papyrus and Save.org, to talk through how we answer these questions,” he adds. “We are committed to publicly sharing what we learn. We deeply want to get this right and we will do everything we can to make that happen.”

We’ve reached out to Facebook, Instagram’s parent, for further comment.

One way user-generated content platforms could support the goal of better understanding impacts of their own distribution and amplification algorithms is to provide high quality data to third party researchers so they can interrogate platform impacts.

That was another of the recommendations from the UK’s science and technology committee last week. But it’s not yet clear whether Mosseri’s commitment to sharing what Instagram learns from meetings with academics and experts will also result in data flowing the other way, i.e. with the proprietary platform sharing its secrets with experts so they can robustly and independently study social media’s antisocial impacts.

Recommendation algorithms lie at the center of many of social media’s perceived ills, and the problem scales far beyond any one platform. YouTube’s recommendation engines have, for example, also long been criticized for having a similar ‘radicalizing’ impact, such as by pushing viewers of conservative content towards far more extreme, far right and/or conspiracy theorist views.

With the huge platform power of tech giants in the spotlight, it’s clear that calls for increased transparency will only grow — unless or until regulators make access to and oversight of platforms’ data and algorithms a legal requirement.

Social media should have “duty of care” towards kids, UK MPs urge

Social media platforms are being urged to be far more transparent about how their services operate and to make “anonymised high-level data” available to researchers so the technology’s effects on users — and especially on children and teens — can be better understood.

The calls have been made in a report by the UK parliament’s Science and Technology Committee which has been looking into the impacts of social media and screen use among children — to consider whether such tech is “healthy or harmful”.

“Social media companies must also be far more open and transparent regarding how they operate and particularly how they moderate, review and prioritise content,” it writes.

Concerns have been growing about children’s use of social media and mobile technology for some years now, with plenty of anecdotal evidence and also some studies linking tech use to developmental problems, as well as distressing stories connecting depression and even suicide to social media use.

The committee writes that its dive into the topic was hindered by “the limited quantity and quality of academic evidence available”. But it also asserts: “The absence of good academic evidence is not, in itself, evidence that social media and screens have no effect on young people.”

“We found that the majority of published research did not provide a clear indication of causation, but instead indicated a possible correlation between social media/screens and a particular health effect,” it continues. “There was even less focus in published research on exactly who was at risk and if some groups were potentially more vulnerable than others when using screens and social media.”

The UK government has expressed its intention to legislate in this area, announcing a plan last May to “make social media safer” and promising new online safety laws to tackle concerns.

The committee writes that it is therefore surprised the government has not commissioned “any new, substantive research to help inform its proposals”, and suggests it get on and do so “as a matter of urgency”, with a focus on identifying people at risk of experiencing harm online and on social media, the reasons for those risk factors, and the longer-term consequences for children of exposure to the technology.

It further suggests the government should consider what legislation is required to improve researchers’ access to this type of data, given platforms have failed to provide enough access for researchers of their own accord.

The committee says it heard evidence of a variety of instances where social media could be “a force for good” but also received testimonies about some of the potential negative impacts of social media on the health and emotional wellbeing of children.

“These ranged from detrimental effects on sleep patterns and body image through to cyberbullying, grooming and ‘sexting’,” it notes. “Generally, social media was not the root cause of the risk but helped to facilitate it, while also providing the opportunity for a large degree of amplification. This was particularly apparent in the case of the abuse of children online, via social media.

“It is imperative that the government leads the way in ensuring that an effective partnership is in place, across civil society, technology companies, law enforcement agencies, the government and non-governmental organisations, aimed at ending child sexual exploitation (CSE) and abuse online.”

The committee suggests the government commission specific research to establish the scale and prevalence of online CSE — pushing it to set an “ambitious target” to halve reported online CSE in two years and “all but eliminate it in four”.

A duty of care

A further recommendation will likely send a shiver down tech giants’ spines, with the committee urging a duty of care principle be enshrined in law for social media users under 18 years of age to protect them from harm when on social media sites.

Such a duty would up the legal risk stakes considerably for user-generated content platforms which don’t bar children from accessing their services.

The committee suggests the government could achieve that by introducing a statutory code of practice for social media firms, via new primary legislation, to provide “consistency on content reporting practices and moderation mechanisms”.

It also recommends a requirement in law for social media companies to publish detailed Transparency Reports every six months.

It also calls for a 24-hour takedown law for illegal content, saying that platforms should have to review reports of potentially illegal content, take a decision on whether to remove, block or flag it, and relay that decision to the individual or organisation who reported it, all within 24 hours.

Germany already legislated for such a law, back in 2017 — though in that case the focus is on speeding up hate speech takedowns.

In Germany social media platforms can be fined up to €50 million if they fail to comply with the law, known by its truncated German name, NetzDG. (The EU executive has also been pushing platforms to remove terrorist-related material within an hour of a report, suggesting it too could legislate on this front if they fail to moderate content fast enough.)

The committee suggests the UK’s media and telecoms regulator, Ofcom, would be well placed to oversee how illegal content is handled under any new law.

It also recommends that social media companies use AI to identify and flag to users (or remove as appropriate) content that “may be fake” — pointing to the risk posed by new technologies such as “deep fake videos”.

More robust systems for age verification are also needed, in the committee’s view. It writes that these must go beyond “a simple ‘tick box’ or entering a date of birth”.

Looking beyond platforms, the committee presses the government to take steps to improve children’s digital literacy and resilience, suggesting PSHE (personal, social, health and economic) education should be made mandatory for primary and secondary school pupils, delivering “an age-appropriate understanding of, and resilience towards, the harms and benefits of the digital world”.

Teachers and parents should also not be overlooked, with the committee suggesting training and resources for teachers and awareness and engagement campaigns for parents.

UK police to get more powers to curb drone misuse after Gatwick fiasco

The UK government has announced new powers for police to tackle illegal use of drone technology, including powers to land, seize and search drones.

This follows the recent Gatwick drone fiasco when, just before Christmas, a spate of drone sightings near the airport caused a temporary shutdown of the runway, and disruptive misery for thousands of people at one of the busiest travel times of the year.

“The police will have the power to search premises and seize drones — including electronic data stored within the device — where a serious offence has been committed and a warrant is secured,” the government writes in a press release today, trailing its plans for a forthcoming drone bill.

Police powers to ground drones had already been announced as incoming in late 2017. But the Gatwick chaos, and some trenchant criticism of government complacency over the risks posed by misuse of the technology, appear to have concentrated ministerial minds on finding a few extra deterrents for police.

Such as the power to demand drone owners produce proper documentation for their craft, tied to an incoming national registration scheme which will apply to all drones weighing 250 grams or more.

“The vast majority of drone users fly safely and responsibly, and adhere to the rules and regulations that are in place. However, if a drone is used illegally we must ensure that the police have the powers to enforce the law, and that the most up to date technology is available to detect, track and potentially disrupt the drone,” the government writes today in its official response to a public consultation on drone safety regulation, adding that: “The recent disruption to Gatwick airport operations, affecting tens of thousands of passengers in the run up to Christmas, was a stark example of why continued action is required to make sure drones are used safely and securely in the UK.”

Under the new plan, police forces may in future only need “reasonable suspicion” that an offence has been committed to request evidence from drone owners.

The government is also planning to give police the ability to issue fixed penalty notices of up to £100 for minor drone offences.

It says the new powers will be set out in detail in a (long delayed) draft drone bill now due this year, the legislation having failed to materialize last spring as originally promised.

“The new measures proposed in the consultation, such as giving the police the power to request evidence from drone users where there is reasonable suspicion of an offence being committed, were met with strong support from respondents,” the government also writes.

In another post-Gatwick development, it is planning to beef up stop-gap flight restriction rules by expanding the current 1km flight exclusion zone around airports to circa 5km.

The 1km zone had been widely criticized as inadequate.

(Screenshot from the government’s response document, Taking Flight: The Future of Drones in the UK.)

All drone operators will be required to ask permission from an airport’s Air Traffic Control to fly within the larger exclusion zone, per the document.

The government says it does not believe the ~5km exclusion zone will in itself prevent what it dubs a “deliberate incident”. But it suggests the zone will “help protect all arriving and departing aircraft using our aerodromes and avoid potential conflict with legitimate drone activity”.
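For illustration only (nothing in the government’s documents specifies an implementation), here is a minimal sketch of what a distance-based exclusion check could look like, assuming a simple great-circle (haversine) calculation, a plain 5km radius rather than the runway-aligned zones actually defined in the rules, and approximate coordinates for Gatwick.

```python
# Illustrative sketch only: a crude geofence check for a circular ~5km
# flight exclusion zone around an aerodrome. Real UK zones are defined
# more precisely (including runway-aligned extensions); coordinates below
# are approximate and used purely as an example.
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_KM = 6371.0
EXCLUSION_RADIUS_KM = 5.0  # expanded zone discussed above (previously ~1km)


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))


def inside_exclusion_zone(drone_lat, drone_lon, aerodrome_lat, aerodrome_lon):
    """True if the drone position falls within the exclusion radius."""
    return haversine_km(drone_lat, drone_lon, aerodrome_lat, aerodrome_lon) <= EXCLUSION_RADIUS_KM


# Example: a drone roughly 3km from Gatwick's approximate position would need
# Air Traffic Control permission under the expanded zone.
print(inside_exclusion_zone(51.175, -0.15, 51.1537, -0.1821))  # True
```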

Its response document also confirms the date for the previously announced drone registration scheme — saying this will come into force in November.

The government revealed the plan for a drone registration scheme in late 2017, when it said that owners of drones weighing more than 250 grams would in future be required to register their devices. But at the time of the Gatwick incident the scheme had not yet come into force.

Registration will apply from November 30, 2019, the government says now.

In a further announcement today, it says the Home Office will begin testing and assessing the “safe use” of a range of counter-drone technology in the UK.

“This crucial technology will detect drones from flying around sensitive sites, including airports and prisons, and develop a range of options to respond to drones, helping to prevent a repeat of incidents such as that recently experienced at Gatwick,” it writes.

Military grade counter-drone tech enabled Gatwick to reopen its runway despite continued drone sightings, according to the BBC, which reported last week that the airport had spent £5M to prevent future attacks (Gatwick did not disclose the exact system it had bought).

Commenting on the new policy measures, UK aviation minister, Liz Sugg, said in a statement: “Drones have the potential to bring significant benefits and opportunities, but with the speed of technological advancement comes risk, and safety and security must be our top priorities.

“That’s why we are giving the police powers to deal with those using drones irresponsibly. Along with additional safety measures these will help ensure the potential of this technology is harnessed in a responsible and safe way.”