
Dating apps face questions over age checks after report exposes child abuse

The UK government has said it could legislate to require age verification checks on users of dating apps, following a Sunday Times investigation into underage use of such services, published yesterday.

The newspaper found that more than 30 cases of child rape linked to the use of dating apps, including Grindr and Tinder, have been investigated by police since 2015. It reports that one 13-year-old boy with a profile on the Grindr app was raped or abused by at least 21 men.

The Sunday Times also found 60 further instances of child sex offences related to the use of online dating services — including grooming, kidnapping and violent assault, according to the BBC, which covered the report.

The youngest victim is reported to have been just eight years old. The newspaper obtained the data via freedom of information requests to UK police forces.

Responding to the Sunday Times’ investigation, a Tinder spokesperson told the BBC it uses automated and manual tools, and spends “millions of dollars annually”, to prevent and remove underage users and other inappropriate behaviour, saying it does not want minors on the platform.

Grindr also reacted to the report, providing the Times with a statement saying: “Any account of sexual abuse or other illegal behaviour is troubling to us as well as a clear violation of our terms of service. Our team is constantly working to improve our digital and human screening tools to prevent and remove improper underage use of our app.”

We’ve also reached out to the companies with additional questions.

The UK’s secretary of state for digital, culture, media and sport (DCMS), Jeremy Wright, dubbed the newspaper’s investigation “truly shocking”, describing it as further evidence that “online tech firms must do more to protect children”.

He also suggested the government could expand forthcoming age verification checks for accessing pornography to include dating apps — saying he would write to the dating app companies to ask “what measures they have in place to keep children safe from harm, including verifying their age”.

“If I’m not satisfied with their response, I reserve the right to take further action,” he added.

Age verification checks for viewing online porn are due to come into force in the UK in April, as part of the Digital Economy Act.

Those age checks, which are controversial given the privacy risks of creating a database of adult identities linked to porn viewing habits, have also been driven by concern about children’s exposure to graphic content online.

Last year the UK government committed to legislating on social media safety too, although it has yet to set out the detail of its policy plans; a white paper is due imminently.

A parliamentary committee which reported last week urged the government to put a legal ‘duty of care’ on platforms to protect minors.

It also called for more robust systems for age verification. So it remains at least a possibility that some types of social media content could be age-gated in the country in future.

Last month the BBC reported on the death of a 14-year-old schoolgirl who killed herself in 2017 after being exposed to self-harm imagery on Instagram.

Following the report, Instagram’s boss met with Wright and the UK’s health secretary, Matt Hancock, to discuss concerns about the impact of suicide-related content circulating on the platform.

Last week, following the meeting, Instagram announced it would ban graphic images of self-harm.

Earlier the same week the company had responded to the public outcry over the story by saying it would no longer allow suicide-related content to be promoted via its recommendation algorithms or surfaced via hashtags.

Also last week, the government’s chief medical advisors called for a code of conduct for social media platforms to protect vulnerable users.

The medical experts also called for greater transparency from platform giants to support public interest-based research into the potential mental health impacts of their platforms.

YouTube faces brand freeze over ads and obscene comments on videos of kids


YouTube is firefighting another child safety content moderation scandal which has led several major brands to suspend advertising on its platform.

On Friday, investigations by the BBC and The Times found obscene comments on videos of children uploaded to YouTube.

Only a small minority of the comments were removed after being flagged to the company via YouTube’s ‘report content’ system. The rest, and their associated accounts, were only removed after the BBC contacted YouTube via press channels, it said.

The Times, meanwhile, reported finding adverts from major brands being shown alongside videos depicting children in various states of undress and accompanied by obscene comments.

Brands freezing their YouTube advertising over the issue include Adidas, Deutsche Bank, Mars, Cadburys and Lidl, according to The Guardian.

Responding to the issues being raised, a YouTube spokesperson said the company is working on an urgent fix — and told us that ads should not have been running alongside this type of content.

“There shouldn’t be any ads running on this content and we are working urgently to fix this. Over the past year, we have been working to ensure that YouTube is a safe place for brands. While we have made significant changes in product, policy, enforcement and controls, we will continue to improve,” said the spokesperson.

Also today, BuzzFeed reported that a pedophilic autofill search suggestion was appearing on YouTube over the weekend if the phrase “how to have” was typed into the search box.

On this, the YouTube spokesperson added: “Earlier today our teams were alerted to this profoundly disturbing autocomplete result and we worked to quickly remove it as soon as we were made aware. We are investigating this matter to determine what was behind the appearance of this autocompletion.”

Earlier this year scores of brands pulled advertising from YouTube over concerns ads were being displayed alongside offensive and extremist content, including ISIS propaganda and anti-semitic hate speech.

Google responded by beefing up YouTube’s ad policies and enforcement efforts, and by giving advertisers new controls that it said would make it easier for brands to exclude “higher risk content and fine-tune where they want their ads to appear”.

In the summer it also made another change in response to content criticism — announcing it was removing the ability for makers of “hateful” content to monetize via its baked-in ad network, pulling ads from being displayed alongside content that “promotes discrimination or disparages or humiliates an individual or group of people”.

At the same time it said it would bar ads from videos that involve family entertainment characters engaging in inappropriate or offensive behavior.

This month further criticism was leveled at the company over the latter issue, after a writer’s Medium post shone a critical spotlight on the scale of the problem. And last week YouTube announced another tightening of the rules around content aimed at children — including saying it would beef up comment moderation on videos aimed at kids, and that videos found to have inappropriate comments about children would have comments turned off altogether.

But it looks like this new tougher stance over offensive comments aimed at kids was not yet being enforced at the time of the media investigations.

The BBC said the problem with YouTube’s comment moderation system failing to remove obscene comments targeting children was brought to its attention by volunteer moderators participating in YouTube’s (unpaid) Trusted Flagger program.

Over a period of “several weeks”, it said, five of the 28 obscene comments it had found and reported via YouTube’s ‘flag for review’ system were deleted. However, no action was taken on the remaining 23 — until it contacted YouTube directly as the BBC and provided a full list. At that point, it says, all of the “predatory accounts” were closed within 24 hours.

It also cited sources with knowledge of YouTube’s content moderation systems who claim associated links can be inadvertently stripped out of content reports submitted by members of the public — meaning YouTube employees who review reports may be unable to determine which specific comments are being flagged.

Reviewers would, however, still be able to identify the account associated with the comments.

The BBC also reported criticism directed at YouTube by members of its Trusted Flaggers program, saying they don’t feel adequately supported and arguing the company could be doing much more.

“We don’t have access to the tools, technologies and resources a company like YouTube has or could potentially deploy,” it was told. “So for example any tools we need, we create ourselves.

“There are loads of things YouTube could be doing to reduce this sort of activity, fixing the reporting system to start with. But for example, we can’t prevent predators from creating another account and have no indication when they do so we can take action.”

Google does not disclose exactly how many people it employs to review content — reporting only that “thousands” of people at Google and YouTube are involved in reviewing and taking action on content and comments identified by its systems or flagged by user reports.

These human moderators are also used to train and develop the in-house machine learning systems that are likewise used for content review. But while tech companies have been quick to reach for AI engineering solutions to fix content moderation, Facebook CEO Mark Zuckerberg himself has said that context remains a hard problem for AI to solve.

Highly effective automated comment moderation systems simply do not yet exist, and ultimately what’s needed is far more human review to plug the gap. That would be a massive expense for tech platforms like YouTube and Facebook, which host (and monetize) user-generated content at such vast scale.
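To make the context problem concrete, here is a minimal, hypothetical sketch of the kind of text classifier that can be trained on moderator-labelled comments. The scikit-learn pipeline, the example comments and the labels are all illustrative assumptions, not a description of YouTube’s actual systems:

```python
# Hypothetical sketch of a comment classifier trained on
# moderator-labelled examples. Real systems use far larger datasets
# and models, plus human review for ambiguous cases.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, made-up training set: 1 = violates policy, 0 = acceptable.
comments = [
    "send me more videos of you",          # predatory in context
    "great tutorial, thanks",              # benign
    "you look so cute, how old are you",   # predatory in context
    "my kid loved this song",              # benign
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, labels)

# The same words may be benign from a parent and predatory from a
# stranger; a bag-of-words model sees only the words, not the context.
print(model.predict(["how old are you"]))
```

A model like this has no way to weigh who posted a comment, on which video, or in what thread, which is exactly the context human reviewers supply.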

But with content moderation issues continuing to rise up the political agenda, not to mention causing recurring problems with advertisers, tech giants may find themselves being forced to direct a lot more of their resources towards scrubbing problems lurking in the darker corners of their platforms.


Call to ban sale of IoT toys with proven security flaws


Ahead of 2017’s present-buying season, UK consumer rights group Which? has warned parents about the risks of giving connected toys to their children, and called for devices with known security and/or privacy flaws to be banned from sale on child safety grounds.

Working with security researchers the group has spent the past 12 months investigating several popular Bluetooth or wi-fi toys that are on sale at major retailers, and says it found “concerning vulnerabilities” in several devices that could “enable anyone to effectively talk to a child through their toy”.

It’s published specific findings on four of the toys it looked at: the Furby Connect; the I-Que Intelligent Robot; the Toy-fi Teddy; and the CloudPets cuddly toy.

The latter toy drew major criticism from security experts in February when it was discovered that its maker had stored thousands of unencrypted voice recordings of kids and parents using the toy in a publicly accessible online database, with no authentication required to access the data. (The exposed data was subsequently deleted and held to ransom by attackers.)

Which? says that in all cases it found it far too easy for someone to illicitly pair their own device with the toys and use the tech to talk to a child. It especially highlights Bluetooth connections that had not been properly secured, noting, for example, that there was no requirement for a user to enter a password, PIN code or any other authentication to gain access.

“That person would need hardly any technical know-how to ‘hack’ your child’s toy,” it writes. “Bluetooth has a range limit, usually 10 meters, so the immediate concern would be someone with malicious intentions nearby. However, there are methods for extending Bluetooth range, and it’s possible someone could set up a mobile system in a vehicle to trawl the streets hunting for unsecured toys.”
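To illustrate how low the technical bar is, the sketch below shows the sort of passive scan a researcher or parent could run to see which Bluetooth LE devices are advertising nearby. It assumes the cross-platform bleak Python library and is not the methodology Which? or its researchers actually used:

```python
# Hypothetical sketch: list nearby Bluetooth LE devices that are
# advertising themselves, using the cross-platform 'bleak' library.
# This is a passive scan only; no connection is made.
import asyncio

from bleak import BleakScanner

async def main():
    devices = await BleakScanner.discover(timeout=5.0)
    for d in devices:
        print(f"Found {d.name or 'unknown device'} at address {d.address}")
    # An unsecured toy in this list will accept a connection from any
    # nearby phone or laptop with no PIN, passkey or bonding step,
    # which is the design weakness Which? highlights. A securely
    # designed toy would require authenticated pairing first.

asyncio.run(main())
```

The point is not the scan itself but what follows it: a toy that skips authenticated pairing will talk to whichever device connects first.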

In the case of the Furby, Which?’s external security researchers also thought it would be possible for someone to re-engineer its firmware to turn the toy into a listening device due to a vulnerability they found in the toy’s design (which it’s not publicly disclosing).

They were not, however, able to do this in the time available for the investigation.

Which? describes its findings as “the tip of a very worrying iceberg” — also flagging other concerns raised over kids’ IoT devices from several European regulatory bodies.

Last month, for example, the Norwegian Consumer Council warned over similar security and privacy concerns pertaining to kids’ smartwatches.

This summer the FBI also issued a consumer notice warning that IoT toys “could put the privacy and safety of children at risk due to the large amount of personal information that may be unwittingly disclosed”.

“You wouldn’t let a young child play with a smartphone unsupervised and our investigation shows parents need to apply the same level of caution if considering giving a child a connected toy,” said Alex Neill, Which? managing director of home products and services, in a statement.

“While there is no denying the huge benefits these devices can bring to our daily lives, safety and security should be the absolute priority. If that can’t be guaranteed, then the products should not be sold.”

Facebook’s content moderation rules dubbed ‘alarming’ by child safety charity


The Guardian has published details of Facebook’s content moderation guidelines covering controversial issues such as violence, hate speech and self-harm culled from more than 100 internal training manuals, spreadsheets and flowcharts that the newspaper has seen.

The documents set out in black and white some of the contradictory positions Facebook has adopted for dealing with different types of disturbing content as it tries to balance taking down content with holding its preferred line on “free speech.” This goes some way toward explaining why the company continues to run into moderation problems. That and the tiny number of people it employs to review and judge flagged content.

The internal moderation guidelines show, for example, that Facebook allows the sharing of some photos of non-sexual child abuse, such as depictions of bullying, and will only remove or mark up content if there is deemed to be a sadistic or celebratory element.

Facebook is also comfortable with imagery showing animal cruelty — with only content that is deemed “extremely upsetting” to be marked up as disturbing.

And the platform apparently allows users to live stream attempts to self-harm — because it says it “doesn’t want to censor or punish people in distress.”

When it comes to violent content, Facebook’s guidelines allow videos of violent deaths to be shared, marked as disturbing, as it says they can help create awareness of issues. Certain types of generally violent written statements — such as those advocating violence against women — are meanwhile allowed to stand, as Facebook’s guidelines require what it deems “credible calls for action” before violent statements are removed.

The policies also include guidelines for how to deal with revenge porn. For this type of content to be removed, Facebook requires that three conditions be fulfilled — including that the moderator can “confirm” a lack of consent via a “vengeful context” or from an independent source, such as a news report.

According to a leaked internal document seen by The Guardian, Facebook had to assess close to 54,000 potential cases of revenge porn in a single month.

Other details from the guidelines show that anyone with more than 100,000 followers is designated a public figure and so denied the protections afforded to private individuals; and that Facebook changed its policy on nudity following the outcry over its decision to remove an iconic Vietnam war photograph depicting a naked child screaming. It now allows for “newsworthy exceptions” under its “terror of war” guidelines. (Although images of child nudity in the context of the Holocaust are not allowed on the site.)

The exposé of internal rules comes at a time when the social media giant is under mounting pressure for the decisions it makes on content moderation.

In April, for example, the German government backed a proposal to levy fines of up to €50 million on social media platforms for failing to remove illegal hate speech promptly. A U.K. parliamentary committee has also this month called on the government to look at imposing fines for content moderation failures. And earlier this month an Austrian court ruled Facebook must remove posts deemed to be hate speech — and do so globally, rather than just blocking their visibility locally.

At the same time, Facebook’s live streaming feature has been used to broadcast murders and suicides, with the company apparently unable to preemptively shut off streams.

In the wake of the problems with Facebook Live, earlier this month the company said it would be hiring 3,000 extra moderators — bringing its total headcount for reviewing posts to 7,500. However, this remains a drop in the ocean for a service that has close to two billion users sharing an aggregate of billions of pieces of content daily.

Asked for a response to Facebook’s moderation guidelines, a spokesperson for the U.K.’s National Society for the Prevention of Cruelty to Children described the rules as “alarming” and called for independent regulation of the platform’s moderation policies — backed up with fines for non-compliance.


“This insight into Facebook’s rules on moderating content is alarming to say the least,” the spokesperson told us. “There is much more Facebook can do to protect children on their site. Facebook, and other social media companies, need to be independently regulated and fined when they fail to keep children safe.”

In its own statement responding to The Guardian’s story, Facebook’s Monika Bickert, head of global policy management, said: “Keeping people on Facebook safe is the most important thing we do. We work hard to make Facebook as safe as possible while enabling free speech. This requires a lot of thought into detailed and often difficult questions, and getting it right is something we take very seriously. Mark Zuckerberg recently announced that over the next year, we’ll be adding 3,000 people to our community operations team around the world — on top of the 4,500 we have today — to review the millions of reports we get every week, and improve the process for doing it quickly.”

She also said Facebook is investing in technology to improve its content review process, including looking at how it can do more to automate content review — although it’s currently mostly using automation to assist human content reviewers.

“In addition to investing in more people, we’re also building better tools to keep our community safe,” she said. “We’re going to make it simpler to report problems to us, faster for our reviewers to determine which posts violate our standards and easier for them to contact law enforcement if someone needs help.”

CEO Mark Zuckerberg has previously talked about using AI to help parse and moderate content at scale — although he also warned such technology is likely years out.

Facebook is clearly pinning its long-term hopes for the massive content moderation problem it is saddled with on future automation. However, the notion that algorithms can intelligently judge such human complexities as when nudity may or may not be appropriate is very much an article of faith on the part of techno-utopianists.

The harder political reality for Facebook is that pressure from the outcry over its current content moderation failures will force it to employ a lot more humans to clean up its act in the short term.

Add to that the fact that, as these internal moderation guidelines show, Facebook’s position of apparently wanting to balance openness and free expression with “safety” is inherently contradictory — and invites exactly the sorts of problems it’s running into with content moderation controversies.

It would be relatively easy for Facebook to ban all imagery showing animal cruelty, for example — but such a position is apparently “too safe” for Facebook. Or rather too limiting of its ambition to be the global platform for sharing. And every video of a kicked dog is, after all, a piece of content for Facebook to monetize. Safe to say, living with that disturbing truth is only going to get more uncomfortable for Facebook.

In its story, The Guardian quotes content moderation expert Sarah T Roberts, who argues that Facebook’s content moderation problem is a result of the vast scale of its “community.” “It’s one thing when you’re a small online community with a group of people who share principles and values, but when you have a large percentage of the world’s population and say ‘share yourself,’ you are going to be in quite a muddle,” she said. “Then when you monetise that practice you are entering a disaster situation.”

Update: Also responding to Facebook’s guidelines, Eve Critchley, head of digital at U.K. mental health charity Mind, said the organization is concerned the platform is not doing enough. “It is important that they recognize their responsibility in responding to high risk content. While it is positive that Facebook has implemented policies for moderators to escalate situations when they are concerned about someone’s safety, we remain concerned that they are not robust enough,” she told us.

“Streaming people’s experience of self-harm or suicide is an extremely sensitive and complex issue,” she added. “We don’t yet know the long-term implications of sharing such material on social media platforms for the public and particularly for vulnerable people who may be struggling with their own mental health. What we do know is that there is lots of evidence showing that graphic depictions of such behavior in the media can be very harmful to viewers and potentially lead to imitative behavior. As such we feel that social media should not provide a platform to broadcast content of people hurting themselves.

“Social media can be used in a positive way and can play a really useful role in a person’s wider support network, but it can also pose risks. We can’t assume that an individual’s community will have the knowledge or understanding necessary, or will be sympathetic in their response. We also fear that the impact on those watching would not only be upsetting but could also be harmful to their own mental health.

“Facebook and other social media sites must urgently explore ways to make their online spaces safe and supportive. We would encourage anyone managing or moderating an online community to signpost users to sources of urgent help, such as Mind, Samaritans or 999 when appropriate.”


Social media firms should face fines for hate speech failures, urge UK MPs


Social media giants Facebook, YouTube and Twitter have once again been accused of taking a “laissez-faire approach” to moderating hate speech content on their platforms.

This follows a stepping up of political rhetoric against social platforms in recent months in the UK, following a terror attack in London in March — after which Home Secretary Amber Rudd called for tech firms to do more to help block the spread of terrorist content online.

In a highly critical report looking at the spread of hate, abuse and extremism on Facebook, YouTube and Twitter, a UK parliamentary committee has suggested the government look at imposing fines on social media firms for content moderation failures.

It’s also calling for a review of existing legislation to ensure clarity about how the law applies in this area.

“Social media companies currently face almost no penalties for failing to remove illegal content. There are too many examples of social media companies being made aware of illegal material yet failing to remove it, or to do so in a timely way. We recommend that the government consult on a system of escalating sanctions to include meaningful fines for social media companies which fail to remove illegal content within a strict timeframe,” the committee writes in the report.

Last month, the German government backed a draft law which includes proposals to fine social media firms up to €50 million if they fail to remove illegal hate speech within 24 hours after a complaint is made.

A European Union-wide Code of Conduct on swiftly removing hate speech, which was agreed between the Commission and social media giants a year ago, does not include any financial penalties for failure — but there are signs some European governments are becoming convinced of the need to legislate to force social media companies to improve their content moderation practices.

The UK Home Affairs committee report describes it as “shockingly easy” to find examples of material intended to stir up hatred against ethnic minorities on all three of the social media platforms it looked at for the report.

It urges social media companies to introduce “clear and well-funded arrangements for proactively identifying and removing illegal content — particularly dangerous terrorist content or material related to online child abuse”, calling for similar co-operation and investment to combat extremist content as the tech giants have already put into collaborating to tackle the spread of child abuse imagery online.

The committee’s investigation, which started in July last year following the murder of a UK MP by a far-right extremist, was intended to be more wide-ranging. However, because the work was cut short by the UK government calling an early general election, the committee says it has published specific findings on how social media companies are addressing hate crime and illegal content online — having taken evidence for this from Facebook, Google and Twitter.

“It is very clear to us from the evidence we have received that nowhere near enough is being done. The biggest and richest social media companies are shamefully far from taking sufficient action to tackle illegal and dangerous content, to implement proper community standards or to keep their users safe. Given their immense size, resources and global reach, it is completely irresponsible of them to fail to abide by the law, and to keep their users and others safe,” it writes.

“If social media companies are capable of using technology immediately to remove material that breaches copyright, they should be capable of using similar content to stop extremists re-posting or sharing illegal material under a different name. We believe that the government should now assess whether the continued publication of illegal material and the failure to take reasonable steps to identify or remove it is in breach of the law, and how the law and enforcement mechanisms should be strengthened in this area.”

The committee flags multiple examples where it says extremist content was reported to the tech giants but these reports were not acted on adequately — calling out Google, especially, for “weakness and delays” in response to reports it made of illegal neo-Nazi propaganda on YouTube.

It also notes the three companies refused to tell it exactly how many people they employ to moderate content, and exactly how much they spend on content moderation.

The report makes especially uncomfortable reading for Google with the committee directly accusing it of profiting from hatred — arguing it has allowed YouTube to be “a platform from which extremists have generated revenue”, and pointing to the recent spate of advertisers pulling their marketing content from the platform after it was shown being displayed alongside extremist videos. Google responded to the high profile backlash from advertisers by pulling ads from certain types of content.

“Social media companies rely on their users to report extremist and hateful content for review by moderators. They are, in effect, outsourcing the vast bulk of their safeguarding responsibilities at zero expense. We believe that it is unacceptable that social media companies are not taking greater responsibility for identifying illegal content themselves,” the committee writes.


The committee suggests social media firms should have to contribute to the cost to the taxpayer of policing their platforms — pointing to how football teams are required to pay for policing in their stadiums and the immediate surrounding areas under UK law as an equivalent model.

It is also calling for social media firms to publish quarterly reports on their safeguarding efforts, including —

  • analysis of the number of reports received on prohibited content
  • how the companies responded to reports
  • what action is being taken to eliminate such content in the future

“It is in everyone’s interest, including the social media companies themselves, to find ways to reduce pernicious and illegal material,” the committee writes. “Transparent performance reports, published regularly, would be an effective method to drive up standards radically and we hope it would also encourage competition between platforms to find innovative solutions to these persistent problems. If they refuse to do so, we recommend that the government consult on requiring them to do so.”

The report, which is replete with pointed adjectives like “shocking”, “shameful”, “irresponsible” and “unacceptable”, follows several critical media reports in the UK which highlighted examples of moderation failures on social media platforms, and showed extremist and paedophilic content continuing to be spread on social media platforms.

Responding to the committee’s report, a YouTube spokesperson told us: “We take this issue very seriously. We’ve recently tightened our advertising policies and enforcement; made algorithmic updates; and are expanding our partnerships with specialist organisations working in this field. We’ll continue to work hard to tackle these challenging and complex problems”.

In a statement, Simon Milner, director of policy at Facebook, added:  “Nothing is more important to us than people’s safety on Facebook. That is why we have quick and easy ways for people to report content, so that we can review, and if necessary remove, it from our platform. We agree with the Committee that there is more we can do to disrupt people wanting to spread hate and extremism online. That’s why we are working closely with partners, including experts at Kings College, London, and at the Institute for Strategic Dialogue, to help us improve the effectiveness of our approach. We look forward to engaging with the new Government and parliament on these important issues after the election.”

Nick Pickles, Twitter’s UK head of public policy, provided this statement: “Our Rules clearly stipulate that we do not tolerate hateful conduct and abuse on Twitter. As well as taking action on accounts when they’re reported to us by users, we’ve significantly expanded the scale of our efforts across a number of key areas. From introducing a range of brand new tools to combat abuse, to expanding and retraining our support teams, we’re moving at pace and tracking our progress in real-time. We’re also investing heavily in our technology in order to remove accounts who deliberately misuse our platform for the sole purpose of abusing or harassing others. It’s important to note this is an ongoing process as we listen to the direct feedback of our users and move quickly in the pursuit of our mission to improve Twitter for everyone.”

The committee says it hopes the report will inform the early decisions of the next government — with the UK general election due to take place on June 8 — and feed into “immediate work” by the three social platforms to be more pro-active about tackling extremist content.

Commenting on the publication of the report yesterday, Home Secretary Amber Rudd told the BBC she expected to see “early and effective action” from the tech giants.
