
UK sets out safety-focused plan to regulate Internet firms

The UK government has laid out proposals to regulate online and social media platforms, setting out the substance of its long-awaited White Paper on online harms today — and kicking off a public consultation.

The Online Harms White Paper is a joint proposal from the Department for Digital, Culture, Media and Sport (DCMS) and Home Office.

It follows the government announcement of a policy intent last May, and a string of domestic calls for greater regulation of the Internet as politicians have responded to rising concern about the mental health impacts of online content.

The government is now proposing to put a mandatory duty of care on platforms to take reasonable steps to protect their users from a range of harms — including but not limited to illegal material, such as terrorist content and child sexual exploitation and abuse, which will be covered by further stringent requirements under the plan.

The approach is also intended to address a range of content and activity that’s deemed harmful.

Examples provided by the government of the sorts of broader harms it’s targeting include inciting violence and violent content; encouraging suicide; disinformation; cyberbullying; and inappropriate material being accessed by children.

Content promoting suicide has been thrown into the public spotlight in the UK in recent months, following media reports about a schoolgirl whose family found out she had been viewing pro-suicide content on Instagram after she killed herself.

The Facebook-owned platform subsequently agreed to change its policies towards suicide content, saying it would start censoring graphic images of self-harm, after pressure from ministers.

Commenting on the publication of the White Paper today, digital secretary Jeremy Wright said: “The era of self-regulation for online companies is over. Voluntary actions from industry to tackle online harms have not been applied consistently or gone far enough. Tech can be an incredible force for good and we want the sector to be part of the solution in protecting their users. However those that fail to do this will face tough action.

“We want the UK to be the safest place in the world to go online, and the best place to start and grow a digital business and our proposals for new laws will help make sure everyone in our country can enjoy the Internet safely.”

In another supporting statement Home Secretary Sajid Javid added: “The tech giants and social media companies have a moral duty to protect the young people they profit from. Despite our repeated calls to action, harmful and illegal content – including child abuse and terrorism – is still too readily available online.

“That is why we are forcing these firms to clean up their act once and for all. I made it my mission to protect our young people – and we are now delivering on that promise.”

Children’s charity, the NSPCC, was among the sector bodies welcoming the proposal.

“This is a hugely significant commitment by the Government that once enacted, can make the UK a world pioneer in protecting children online,” wrote CEO Peter Wanless in a statement.

“For too long social networks have failed to prioritise children’s safety and left them exposed to grooming, abuse, and harmful content. So it’s high time they were forced to act through this legally binding duty to protect children, backed up with hefty punishments if they fail to do so.”

The Internet Watch Foundation, which works to stop the spread of child exploitation imagery online, struck a more cautious note, however — warning against unintended consequences from badly planned legislation and urging the government to take a “balanced approach”.

The proposed laws would apply to any company that allows users to share or discover user generated content or interact with each other online — meaning companies both big and small.

Nor is it just social media platforms in scope, with file hosting sites, public discussion forums, messaging services and search engines among those falling under the planned law’s remit.

The government says a new independent regulator will be introduced to ensure Internet companies meet their responsibilities, with ministers consulting on whether this should be a new or existing body.

Telecoms regulator Ofcom has been rumored as one possible contender, though the UK’s data watchdog, the ICO, has previously suggested it should be involved in any Internet oversight given its responsibility for data protection and privacy. (According to the FT a hybrid entity combining the two is another possibility — although it reports that the government remains genuinely undecided on who the regulator will be.)

The future Internet watchdog will be funded by industry in the medium term, with the government saying it’s exploring options such as an industry levy to put it on a sustainable footing.

On the enforcement front, the watchdog will be armed with a range of tools — with the government consulting on powers for it to issue substantial fines; block access to sites; and potentially to impose liability on individual members of senior management.

So there’s at least the prospect of a high profile social media CEO being threatened with UK jail time in future if they don’t do enough to remove harmful content.

On the financial penalties front, Wright suggested during an interview on Sky News that the government is entertaining GDPR-level fines of as much as 4% of a company’s annual global turnover.

Other elements of the proposed framework include giving the regulator the power to force tech companies to publish annual transparency reports on the amount of harmful content on their platforms and what they are doing to address it; to compel companies to respond to users’ complaints and act to address them quickly; and to comply with codes of practice issued by the regulator, such as requirements to minimise the spread of misleading and harmful disinformation with dedicated fact checkers, particularly during election periods.

A long-running enquiry by a DCMS parliamentary committee into online disinformation last year, which was continuously frustrated in its attempts to get Facebook founder Mark Zuckerberg to testify before it, concluded with a laundry list of recommendations for tightening regulations around digital campaigning.

The committee also recommended clear legal liabilities for tech companies to act against “harmful or illegal content”, and suggested a levy on tech firms to support enhanced regulation.

Responding to the government’s White Paper in a statement today, DCMS committee chair Damian Collins broadly welcomed the proposals — though he also pressed for the future regulator to have the power to conduct its own investigations, rather than relying on self-reporting by tech firms.

“We need a clear definition of how quickly social media companies should be required to take down harmful content, and this should include not only when it is referred to them by users, but also when it is easily within their power to discover this content for themselves,” Collins wrote.

“The regulator should also give guidance on the responsibilities of social media companies to ensure that their algorithms are not consistently directing users to harmful content.”

Another element of the government’s proposal is a “Safety by Design” framework that’s intended to help companies incorporate online safety features in new apps and platforms from the start.

The government also wants the regulator to head up a media literacy strategy that’s intended to equip people with the knowledge to recognise and deal with a range of deceptive and malicious behaviours online, such as catfishing, grooming and extremism.

It writes that the UK is committed to a free, open and secure Internet — and makes a point of noting that the watchdog will have a legal duty to pay “due regard” to innovation, and also to protect users’ rights online by being particularly mindful not to infringe privacy and freedom of expression.

It therefore suggests technology will be an integral part of any solution, saying the proposals are designed to promote a culture of continuous improvement among companies — and highlighting technologies such as Google’s “Family Link” and Apple’s Screen Time app as examples of the sorts of developments it wants the policy framework to encourage.

Such caveats are unlikely to do much to reassure those concerned the approach will chill online speech, though, and/or place an impossible burden on smaller firms with fewer resources to monitor what their users are doing.

“The government’s proposals would create state regulation of the speech of millions of British citizens,” warns digital and civil rights group the Open Rights Group, in a statement by its executive director Jim Killock. “We have to expect that the duty of care will end up widely drawn with serious implications for legal content that is deemed potentially risky, whether it really is or not.

“The government refused to create a state regulator of the press because it didn’t want to be seen to be controlling free expression. We are skeptical that state regulation is the right approach.”

UK startup policy advocacy group Coadec was also quick to voice concerns — warning that the government’s plans will “entrench the tech giants, not punish them”.

“The vast scope of the proposals means they cover not just social media but virtually the entire internet – from file sharing to newspaper comment sections. Those most impacted will not be the tech giants the Government claims they are targeting, but everyone else. It will benefit the largest platforms with the resources and legal might to comply – and restrict the ability of British startups to compete fairly,” said Coadec executive director Dom Hallas in a statement. 

“There is a reason that Mark Zuckerberg has called for more regulation. It is in Facebook’s business interest.”

UK tech industry association techUK also put out a response statement warning about the need to avoid disproportionate impacts.

“Some of the key pillars of the Government’s approach remain too vague,” said Vinous Ali, head of policy, techUK. “It is vital that the new framework is effective, proportionate and predictable. Clear legal definitions that allow companies in scope to understand the law and therefore act quickly and with confidence will be key to the success of the new system.

“Not all of the legitimate concerns about online harms can be addressed through regulation. The new framework must be complemented by renewed efforts to ensure children, young people and adults alike have the skills and awareness to navigate the digital world safely and securely.”

The government has launched a 12-week consultation on the proposals, after which it says it will set out the action it will take in developing its final proposals for legislation.

Last month a House of Lords committee recommended an overarching super regulator be established to plug any gaps and/or handle overlaps in rules on Internet platforms, arguing that “a new framework for regulatory action” is needed to handle the digital world.

Though the government appears confident at this stage that an Internet regulator will be able to navigate any legislative patchwork and keep tech firms in line on its own.

The House of Lords committee was another that came down in support of a statutory duty of care for online services hosting user-generated content, suggesting it should have a special focus on children and “the vulnerable in society”. And there’s no doubt the concept of regulating Internet platforms has broad consensus among UK politicians — on both sides of the aisle.

But how to do that effectively and proportionately is another matter.

We reached out to Facebook and Google for a response to the White Paper.

Commenting on the Online Harms White Paper in a statement, Rebecca Stimson, Facebook’s head of UK public policy, said: “New rules for the internet should protect society from harm while also supporting innovation, the digital economy and freedom of speech. These are complex issues to get right and we look forward to working with the Government and Parliament to ensure new regulations are effective.”

Stimson also reiterated how Facebook has expanded the number of staff it has working on trust and safety issues to 30,000 in recent years, as well as claiming it’s invested heavily in technology to help prevent abuse — while conceding that “we know there is much more to do”.

Last month the company revealed shortcomings with its safety measures around livestreaming, after it emerged that a massacre in Christchurch, New Zealand, which was livestreamed to Facebook’s platform, had not been flagged for accelerated review by moderators because it was not tagged as suicide-related content.

Facebook said it would be “learning” from the incident and “re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review”.

In its response to the UK government White Paper today, Stimson added: “The internet has transformed how billions of people live, work and connect with each other, but new forms of communication also bring huge challenges. We have responsibilities to keep people safe on our services and we share the government’s commitment to tackling harmful content online. As Mark Zuckerberg said last month, new regulations are needed so that we have a standardised approach across platforms and private companies aren’t making so many important decisions alone.”

Snap is under NDA with UK Home Office discussing how to centralize age checks online

Snap is under NDA with the UK’s Home Office as part of a working group tasked with coming up with more robust age verification technology that’s able to reliably identify children online.

The detail emerged during a parliamentary committee hearing as MPs on the Digital, Culture, Media and Sport (DCMS) committee questioned Stephen Collins, Snap’s senior director for public policy international, and Will Scougal, director of creative strategy EMEA.

A spokesman in the Home Office press office hadn’t immediately heard of any discussions with the messaging company on the topic of age verification. But we’ll update this story with any additional context on the department’s plans if more info is forthcoming.

Under questioning by the committee, Snap conceded its current age verification systems are not able to prevent under-13s from signing up to use its messaging platform.

The DCMS committee’s interest here stems from its ongoing enquiry into immersive and addictive technologies.

Snap admitted that the most popular means of signing up to its app (i.e. on mobile) is where its age verification system is weakest, with Collins saying it had no ability to drop a cookie to keep track of mobile users to try to prevent repeat attempts to get around its age gate.

But he emphasized Snap does not want underage users on its platform.

“That brings us no advantage, that brings us no commercial benefit at all,” he said. “We want to make it an enjoyable place for everybody using the platform.”

He also said Snap analyzes patterns of user behavior to try to identify underage users — investigating accounts and banning those which are “clearly” determined not to be old enough to use the service.

But he conceded there’s currently “no foolproof way” to prevent under 13s from signing up.

Discussing alternative approaches to verifying kids’ ages online, the Snap policy staffer agreed parental consent approaches are trivially easy for children to circumvent — such as by setting up spoof email accounts or taking a photo of a parent’s passport or credit card to use for verification.

Facebook is one such company, relying on a ‘parental consent’ system to ‘verify’ the age of teen users — though, as we’ve previously reported, it’s trivially easy for kids to work around.

“I think the most sustainable solution will be some kind of central verification system,” Collins suggested, adding that such a system is “already being discussed” by government ministers.

“The home secretary has tasked the Home Office and related agencies to look into this — we’re part of that working group,” he continued.

“We actually met just yesterday. I can’t give you the details here because I’m under an NDA,” Collins added, suggesting Snap could send the committee details in writing.

“I think it’s a serious attempt to really come to a proper conclusion — a fitting conclusion to this kind of conundrum that’s been there, actually, for a long time.”

“There needs to be a robust age verification system that we can all get behind,” he added.

The UK government is expected to publish a White Paper setting out its policy ideas for regulating social media and safety before the end of the winter.

The detail of its policy plans remains under wraps, so it’s unclear whether the Home Office intends to include setting up a centralized system of online age verification for robustly identifying kids on social media platforms as part of its safety-focused regulation. But much of the debate driving the planned legislation has fixed on content risks for kids online.

Such a step would also not be the first time UK ministers have pushed the envelope around online age verification.

A controversial system of age checks for viewing adult content is due to come into force shortly in the UK under the Digital Economy Act — albeit after a lengthy delay. (And that’s setting aside all the hand-wringing about privacy and security risks — not to mention the fact that the checks will likely be trivially easy to dodge for anyone who knows how to use a VPN, or by accessing adult content on social media instead.)

But a centralized database of children for age verification purposes — if that is indeed the lines along which the Home Office is thinking — sounds rather closer to Chinese government Internet controls.

In recent years the Chinese state has been pushing games companies to age-verify users in order to enforce limits on kids’ play time (apparently also in response to health concerns around video gaming addiction).

The UK has also pushed to create centralized databases of citizens’ web browsing activity for law enforcement purposes, under the 2016 Investigatory Powers Act. (Parts of which it’s had to rethink following legal challenges, with other legal challenges ongoing.)

In recent years it has also emerged that UK spy agencies maintain bulk databases of citizens — known as ‘bulk personal datasets’ — regardless of whether a particular individual is suspected of a crime.

So building yet another database to contain children’s ages isn’t perhaps as off piste as you might imagine for the country.

Returning to the DCMS committee’s enquiry, other questions for Snap from MPs included several critical ones related to its ‘streaks’ feature — whereby users who have been messaging each other regularly are encouraged not to stop the back and forth.

The parliamentarians raised constituent and industry concerns about the risk of peer pressure being piled on kids to keep the virtual streaks going.

Snap’s reps told the committee the feature is intended to be a “celebration” of close friendship, rather than being intentionally designed to make the platform sticky and so encourage stress.

Though they conceded users have no way to opt out of streak emoji appearing.

They also noted they have previously reduced the size of the streak emoji to make it less prominent.

But they added they would take concerns back to product teams and re-examine the feature in light of the criticism.

You can watch the full committee hearing with Snap here.

YouTube under fire for recommending videos of kids with inappropriate comments

More than a year on from a child safety content moderation scandal on YouTube, it takes just a few clicks for the platform’s recommendation algorithms to redirect a search for “bikini haul” videos of adult women towards clips of scantily clad minors engaged in body-contorting gymnastics, taking an ice bath or doing an ice lolly sucking “challenge”.

A YouTube creator called Matt Watson flagged the issue in a critical Reddit post, saying he found scores of videos of kids where YouTube users are trading inappropriate comments and timestamps below the fold, denouncing the company for failing to prevent what he describes as a “soft-core pedophilia ring” from operating in plain sight on its platform.

He has also posted a YouTube video demonstrating how the platform’s recommendation algorithm pushes users into what he dubs a pedophilia “wormhole”, accusing the company of facilitating and monetizing the sexual exploitation of children.

We were easily able to replicate the YouTube algorithm’s behavior that Watson describes in a history-cleared private browser session which, after clicking on two videos of adult women in bikinis, suggested we watch a video called “sweet sixteen pool party”.

Clicking on that led YouTube’s sidebar to serve up multiple videos of prepubescent girls in its ‘up next’ section, where the algorithm tees up related content to encourage users to keep clicking.

Videos we got recommended in this sidebar included thumbnails showing young girls demonstrating gymnastics poses, showing off their “morning routines”, or licking popsicles or ice lollies.

Watson said it was easy for him to find videos containing inappropriate/predatory comments, including sexually suggestive emoji and timestamps that appear intended to highlight, shortcut and share the most compromising positions and/or moments in the videos of the minors.

We also found multiple examples of timestamps and inappropriate comments on videos of children that YouTube’s algorithm recommended we watch.

Some comments by other YouTube users denounced those making sexually suggestive remarks about the children in the videos.

Back in November 2017 several major advertisers froze spending on YouTube’s platform after an investigation by the BBC and the Times discovered similarly obscene comments on videos of children.

Earlier the same month YouTube was also criticized over low quality content targeting kids as viewers on its platform.

The company went on to announce a number of policy changes related to kid-focused video, including saying it would aggressively police comments on videos of kids and that videos found to have inappropriate comments about the kids in them would have comments turned off altogether.

Some of the videos of young girls that YouTube recommended we watch had already had comments disabled — which suggests its AI had previously identified a large number of inappropriate comments being shared (on account of its policy of switching off comments on clips containing kids when comments are deemed “inappropriate”) — yet the videos themselves were still being suggested for viewing in a test search that originated with the phrase “bikini haul”.

Watson also says he found ads being displayed on some videos of kids containing inappropriate comments, and claims that he found links to child pornography being shared in YouTube comments too.

We were unable to verify those findings in our brief tests.

We asked YouTube why its algorithms skew towards recommending videos of minors, even when the viewer starts by watching videos of adult women, and why inappropriate comments remain a problem on videos of minors more than a year after the same issue was highlighted via investigative journalism.

The company sent us the following statement in response to our questions:

Any content — including comments — that endangers minors is abhorrent and we have clear policies prohibiting this on YouTube. We enforce these policies aggressively, reporting it to the relevant authorities, removing it from our platform and terminating accounts. We continue to invest heavily in technology, teams and partnerships with charities to tackle this issue. We have strict policies that govern where we allow ads to appear and we enforce these policies vigorously. When we find content that is in violation of our policies, we immediately stop serving ads or remove it altogether.

A spokesman for YouTube also told us it’s reviewing its policies in light of what Watson has highlighted, adding that it’s in the process of reviewing the specific videos and comments featured in his video — specifying also that some content has been taken down as a result of the review.

Although the spokesman emphasized that the majority of the videos flagged by Watson are innocent recordings of children doing everyday things. (Though of course the problem is that innocent content is being repurposed and time-sliced for abusive gratification and exploitation.)

The spokesman added that YouTube works with the National Center for Missing and Exploited Children to report accounts found making inappropriate comments about kids to law enforcement.

In wider discussion about the issue the spokesman told us that determining context remains a challenge for its AI moderation systems.

On the human moderation front he said the platform now has around 10,000 human reviewers tasked with assessing content flagged for review.

The volume of video content uploaded to YouTube is around 400 hours per minute, he added.

There is still very clearly a massive asymmetry around content moderation on user generated content platforms, with AI poorly suited to plug the gap given ongoing weakness in understanding context, even as platforms’ human moderation teams remain hopelessly under-resourced and outgunned vs the scale of the task.

Another key point which YouTube failed to mention is the clear tension between advertising-based business models that monetize content based on viewer engagement (such as its own), and content safety issues that require careful consideration of both the substance of the content and the context in which it’s consumed.

It’s certainly not the first time YouTube’s recommendation algorithms have been called out for negative impacts. In recent years the platform has been accused of automating radicalization by pushing viewers towards extremist and even terrorist content — which led YouTube to announce another policy change in 2017 related to how it handles content created by known extremists.

The wider societal impact of algorithmic suggestions that inflate conspiracy theories and/or promote bogus, anti-factual health or scientific content has also been repeatedly raised as a concern — including on YouTube.

And only last month YouTube said it would reduce recommendations of what it dubbed “borderline content” and content that “could misinform users in harmful ways”, citing examples such as videos promoting a fake miracle cure for a serious illness, or claiming the earth is flat, or making “blatantly false claims” about historic events such as the 9/11 terrorist attack in New York.

“While this shift will apply to less than one percent of the content on YouTube, we believe that limiting the recommendation of these types of videos will mean a better experience for the YouTube community,” it wrote then. “As always, people can still access all videos that comply with our Community Guidelines and, when relevant, these videos may appear in recommendations for channel subscribers and in search results. We think this change strikes a balance between maintaining a platform for free speech and living up to our responsibility to users.”

YouTube said that change of algorithmic recommendations around conspiracy videos would be gradual, and only initially affect recommendations on a small set of videos in the US.

It also noted that implementing the tweak to its recommendation engine would involve both machine learning tech and human evaluators and experts helping to train the AI systems.

“Over time, as our systems become more accurate, we’ll roll this change out to more countries. It’s just another step in an ongoing process, but it reflects our commitment and sense of responsibility to improve the recommendations experience on YouTube,” it added.

It remains to be seen whether YouTube will expand that policy shift and decide it must exercise greater responsibility in how its platform recommends and serves up videos of children for remote consumption in the future.

Political pressure may be one motivating force, with momentum building for regulation of online platforms — including calls for Internet companies to face clear legal liabilities and even a legal duty of care towards users vis-a-vis the content they distribute and monetize.

For example, UK lawmakers have made legislating on Internet and social media safety a policy priority — with the government due to publish a White Paper setting out its plans for regulating platforms this winter.

Dating apps face questions over age checks after report exposes child abuse

The UK government has said it could legislate to require age verification checks on users of dating apps, following an investigation into underage use of dating apps published by the Sunday Times yesterday.

The newspaper found more than 30 cases of child rape have been investigated by police related to use of dating apps including Grindr and Tinder since 2015. It reports that one 13-year-old boy with a profile on the Grindr app was raped or abused by at least 21 men. 

The Sunday Times also found 60 further instances of child sex offences related to the use of online dating services — including grooming, kidnapping and violent assault, according to the BBC, which covered the report.

The youngest victim is reported to have been just eight years old. The newspaper obtained the data via freedom of information requests to UK police forces.

Responding to the Sunday Times’ investigation, a Tinder spokesperson told the BBC it uses automated and manual tools, and spends “millions of dollars annually”, to prevent and remove underage users and other inappropriate behaviour, saying it does not want minors on the platform.

Grindr also reacted to the report, providing the Times with a statement saying: “Any account of sexual abuse or other illegal behaviour is troubling to us as well as a clear violation of our terms of service. Our team is constantly working to improve our digital and human screening tools to prevent and remove improper underage use of our app.”

We’ve also reached out to the companies with additional questions.

The UK’s secretary of state for digital, culture, media and sport (DCMS), Jeremy Wright, dubbed the newspaper’s investigation “truly shocking”, describing it as further evidence that “online tech firms must do more to protect children”.

He also suggested the government could expand forthcoming age verification checks for accessing pornography to include dating apps — saying he would write to the dating app companies to ask “what measures they have in place to keep children safe from harm, including verifying their age”.

“If I’m not satisfied with their response, I reserve the right to take further action,” he added.

Age verification checks for viewing online porn are due to come into force in the UK in April, as part of the Digital Economy Act.

Those age checks, which are clearly not without controversy given the huge privacy considerations of creating a database of adult identities linked to porn viewing habits, have also been driven by concern about children’s exposure to graphic content online.

Last year the UK government committed to legislating on social media safety too, although it has yet to set out the detail of its policy plans. But a white paper is due imminently.

A parliamentary committee which reported last week urged the government to put a legal ‘duty of care’ on platforms to protect minors.

It also called for more robust systems for age verification. So it remains at least a possibility that some types of social media content could be age-gated in the country in future.

Last month the BBC reported on the death of a 14-year-old schoolgirl who killed herself in 2017 after being exposed to self-harm imagery on Instagram.

Following the report, Instagram’s boss met with Wright and the UK’s health secretary, Matt Hancock, to discuss concerns about the impact of suicide-related content circulating on the platform.

After the meeting last week, Instagram announced it would ban graphic images of self-harm.

Earlier the same week the company responded to the public outcry over the story by saying it would no longer allow suicide-related content to be promoted via its recommendation algorithms or surfaced via hashtags.

Also last week, the government’s chief medical advisors called for a code of conduct for social media platforms to protect vulnerable users.

The medical experts also called for greater transparency from platform giants to support public interest-based research into the potential mental health impacts of their platforms.

YouTube faces brand freeze over ads and obscene comments on videos of kids


YouTube is firefighting another child safety content moderation scandal which has led several major brands to suspend advertising on its platform.

On Friday investigations by the BBC and The Times reported finding obscene comments on videos of children uploaded to YouTube.

Only a small minority of the comments were removed after being flagged to the company via YouTube’s ‘report content’ system, the BBC said; the remainder, along with their associated accounts, were only removed after it contacted YouTube via press channels.

The Times, meanwhile, reported finding adverts from major brands also being shown alongside videos depicting children in various states of undress and accompanied by obscene comments.

Brands freezing their YouTube advertising over the issue include Adidas, Deutsche Bank, Mars, Cadburys and Lidl, according to The Guardian.

Responding to the issues being raised, a YouTube spokesperson said it’s working on an urgent fix — and told us that ads should not have been running alongside this type of content.

“There shouldn’t be any ads running on this content and we are working urgently to fix this. Over the past year, we have been working to ensure that YouTube is a safe place for brands. While we have made significant changes in product, policy, enforcement and controls, we will continue to improve,” said the spokesperson.

Also today, BuzzFeed reported that a pedophilic autofill search suggestion was appearing on YouTube over the weekend if the phrase “how to have” was typed into the search box.

On this, the YouTube spokesperson added: “Earlier today our teams were alerted to this profoundly disturbing autocomplete result and we worked to quickly remove it as soon as we were made aware. We are investigating this matter to determine what was behind the appearance of this autocompletion.”

Earlier this year scores of brands pulled advertising from YouTube over concerns ads were being displayed alongside offensive and extremist content, including ISIS propaganda and anti-semitic hate speech.

Google responded by beefing up YouTube’s ad policies and enforcement efforts, and by giving advertisers new controls that it said would make it easier for brands to exclude “higher risk content and fine-tune where they want their ads to appear”.

In the summer it also made another change in response to content criticism — announcing it was removing the ability for makers of “hateful” content to monetize via its baked in ad network, pulling ads from being displayed alongside content that “promotes discrimination or disparages or humiliates an individual or group of people”.

At the same time it said it would bar ads from videos that involve family entertainment characters engaging in inappropriate or offensive behavior.

This month further criticism was leveled at the company over the latter issue, after a writer’s Medium post shone a critical spotlight on the scale of the problem. And last week YouTube announced another tightening of the rules around content aimed at children — including saying it would beef up comment moderation on videos aimed at kids, and that videos found to have inappropriate comments about children would have comments turned off altogether.

But it looks like this new tougher stance over offensive comments aimed at kids was not yet being enforced at the time of the media investigations.

The BBC said the problem with YouTube’s comment moderation system failing to remove obscene comments targeting children was brought to its attention by volunteer moderators participating in YouTube’s (unpaid) Trusted Flagger program.

Over a period of “several weeks”, it said that five of the 28 obscene comments it had found and reported via YouTube’s ‘flag for review’ system were deleted. However no action was taken against the remaining 23 until it contacted YouTube directly, identifying itself as the BBC, and provided a full list. At that point, it says, all of the “predatory accounts” were closed within 24 hours.

It also cited sources with knowledge of YouTube’s content moderation systems who claim associated links can be inadvertently stripped out of content reports submitted by members of the public — meaning YouTube employees who review reports may be unable to determine which specific comments are being flagged.

Although they would still be able to identify the account associated with the comments.

The BBC also reported criticism directed at YouTube by members of its Trusted Flaggers program, saying they don’t feel adequately supported and arguing the company could be doing much more.

“We don’t have access to the tools, technologies and resources a company like YouTube has or could potentially deploy,” it was told. “So for example any tools we need, we create ourselves.

“There are loads of things YouTube could be doing to reduce this sort of activity, fixing the reporting system to start with. But for example, we can’t prevent predators from creating another account and have no indication when they do so we can take action.”

Google does not disclose exactly how many people it employs to review content — reporting only that “thousands” of people at Google and YouTube are involved in reviewing and taking action on content and comments identified by its systems or flagged by user reports.

These human moderators also help train and develop the in-house machine learning systems that are used for content review. But while tech companies have been quick to reach for AI engineering solutions to fix content moderation, Facebook CEO Mark Zuckerberg himself has said that context remains a hard problem for AI to solve.

Highly effective automated comment moderation systems simply do not yet exist. And ultimately what’s needed is far more human review to plug the gap. Albeit that would be a massive expense for tech platforms like YouTube and Facebook that are hosting (and monetizing) user generated content at such vast scale.

But with content moderation issues continuing to rise up the political agenda, not to mention causing recurring problems with advertisers, tech giants may find themselves being forced to direct a lot more of their resources towards scrubbing problems lurking in the darker corners of their platforms.
