
Facebook, Twitter, YouTube praised for “steady progress” quashing illegal hate speech in Europe

Facebook, Twitter and YouTube are likely to be breathing a little easier in Europe after getting a pat on the back from regional lawmakers for making “steady progress” on removing illegal hate speech.

Last week the European Commission warned it could still draw up legislation to try to ensure illegal content is removed from online platforms if tech firms do not step up their efforts.

Germany has already done so, implementing a regime of fines of up to €50M for social media firms that fail to promptly remove illegal hate speech, though the EC is generally eyeing a wider mix of illegal content when it talks tough on this topic — including terrorist propaganda and even copyrighted material.

Today, on the specific issue of illegal hate speech on social media, it was sounding happy with the current voluntary approach. It also announced that two more social media platforms — Instagram and Google+ — have joined the program.

In 2016 Facebook, Twitter, YouTube and Microsoft signed up to a regional Code of Conduct on illegal hate speech, committing to review the majority of reported hate speech within 24 hours and — for valid reports — remove posts within that timeframe too.

The Commission has been monitoring their progress on social media hate speech, specifically to see whether they are living up to what they agreed in the Code of Conduct.

Today it gave the findings from its third review — reporting that the companies are removing 70 per cent of notified illegal hate speech on average, up from 59 per cent in the second evaluation, and 28 per cent when their performance was first assessed in 2016.

Last year, Facebook and YouTube announced big boosts to the number of staff dealing with safety and content moderation issues on their platforms, following a series of content scandals and a cranking up of political pressure (which, despite the Commission giving a good report now, has not let up in every EU Member State).

Also under fire over hate speech on its platform last year, Twitter broadened its policies around hateful conduct and abusive behavior — enforcing the more expansive policies from December.

Asked during a press conference whether the EC would now be less likely to propose hate speech legislation for social media platforms, Justice, Consumers and Gender Equality commissioner Věra Jourová replied in the affirmative.

“Yes,” she said. “Now I see this as more probable that we will propose — also to the ministers of justice and all the stakeholders and within the Commission — that we want to continue this [voluntary] approach.”

Though the commissioner also emphasized she was not talking about other types of censured online content, such as terrorist propaganda and fake news. (On the latter, for instance, France’s president said last month he will introduce an anti-fake news election law aimed at combating malicious disinformation campaigns.)

“With the wider aspects of platforms… we are looking at coming forward with more specific steps which could be taken to tighten up the response to all types of illegal content before the Commission reaches a decision on whether legislation will be required,” Jourová added.

She noted that some Member States’ justice ministers are open to a new EU-level law on social media and hate speech — in the event they judge the voluntary approach to have failed — but said other ministers take a ‘hands off’ view on the issue.

“Having these quite positive results of this third assessment I will be stronger in promoting my view that we should continue the way of doing this through the Code of Conduct,” she added.

While she said she was pleased with progress made by the tech firms, Jourová flagged up feedback as an area that still needs work.

“I want to congratulate the four companies for fulfilling their main commitments. On the other hand I urge them to keep improving their feedback to users on how they handle illegal content,” she said, calling again for “more transparency” on that.

“My main idea was to make these platforms more responsible,” she added of the Code. “The experience with the big Internet players was that they were very aware of their powers but did not necessarily grasp their responsibilities.

“The Code of Conduct is a tool to enforce the existing law in Europe against racism and xenophobia. In their everyday business, companies, citizens, everyone has to make sure they respect the law — they do not need a court order to do so.

“Let me make one thing very clear, the time of fast moving, disturbing companies such as Google, Facebook or Amazon growing without any supervision or control comes to an end.”

In all, for the EC’s monitoring exercise, 2,982 notifications of illegal hate speech were submitted to the tech firms in 27 EU Member States during a six-week period in November and December last year, split between reporting channels that are available to general users and specific channels available only to trusted flaggers/reporters.

The exercise found that the social media firms assessed notifications in less than 24 hours in 81.7% of cases; in less than 48 hours in 10%; in less than a week in 4.8%; and in the remaining 3.5% of cases it took more than a week.

Performance varied across the companies, with Facebook achieving the best results — assessing notifications in less than 24 hours in 89.3% of cases and in less than 48 hours in a further 9.7% — followed by Twitter (80.2% and 10.4% respectively), and lastly YouTube (62.7% and 10.6%).

Twitter was found to have made the biggest improvement on notification review, having only achieved 39% of cases reviewed within a day as of May 2017.

In terms of removals, Facebook removed 79.8% of the content, YouTube 75% and Twitter 45.7%. Facebook also received the largest number of notifications (1,408), followed by Twitter (794) and YouTube (780). Microsoft did not receive any.

According to the EC’s assessment, the most frequently reported grounds for hate speech are ethnic origin, anti-Muslim hatred and xenophobia.

Acknowledging the challenges that are inherent in judging whether something constitutes illegal hate speech or not, Jourová said the Commission does not have a target of 100% removals on illegal hate speech reports — given the “difficult work” that tech firms have to do in evaluating certain reports.

Illegal hate speech in Europe is defined as hate speech that has the potential to incite violence.

“They have to take into consideration the nature of the message and its potential impact on the behavior of the society,” she noted. “We do not have the goal of 100% because there are those edge cases. And… in case of doubt we should have the messages remain online because the basic position is that we protect the freedom of expression. That’s the baseline.”

The problem with human moderators

If Big Tech in 2018 already has a theme, it’s that social networks are passive platforms no longer. Since the new year, both Facebook and YouTube have stepped up with new guidelines and processes to manage — and in some cases police — content on their networks.

All of this started well before the new year, of course. Twitter has been following through on a lengthy project to both clarify its content policies and take a more active role in saying who and what is allowed on its platform, most recently with its so-called “Nazi purge.” The current trend arguably started with Reddit, when then-CEO Ellen Pao pushed for tighter control of harassment and revenge porn on the site.

This digital reckoning now feels inevitable, but it was hastened by events over the last year. Anger at the big networks reached a crescendo last year after Facebook — the most influential of the bunch — was widely criticized for hosting fake news and politically charged ads with virtually no oversight. But while the old system of letting algorithms sort things out was clearly flawed, the networks’ re-assertion of the role of gatekeepers is worrisome, too.

In the case of YouTube, the changes, announced yesterday, mostly involve demonetizing (that is, removing the ads from) videos from creators under a certain view time or subscriber threshold, which sounds fine. However, what the relatively clinical blog post doesn’t discuss is the new way YouTube will deal with big partner accounts: Human moderators will review their content — all of it — turning off monetization on any specific video they may find objectionable.

Coming in the wake of Logan Paul’s infamous visit to Japan’s suicide forest and his subsequent, numerous apologies, it seems clear this introduction of human moderators is intended to head off incidents exactly like that. Presumably, if this system had been in place then, the moderator would have raised a hand and said, “Uh, guys…?”

Let’s be clear about what we’re talking about here: Demonetizing isn’t the same thing as deleting. This isn’t censorship per se, though it is sending a message to creators about what content is acceptable and what isn’t. The thinking is that, over time, YouTube creators will post less of the demonetized stuff and more videos that “contribute positively to the community,” in the words of YouTube’s Neal Mohan and Robert Kyncl.

It is sending a message to creators about what content is acceptable and what isn’t.

Isn’t that a good thing? Maybe, but if it were as simple as enforcing YouTube’s community guidelines, a bot could do it, and we already know that doesn’t work. With humans involved, it raises a different set of questions: Who are these humans? What qualifications or biases do they have? And what exactly raises a red flag in their minds?

The answer to that last question will likely vary depending on the answers to the first two. It doesn’t help that most terms of service and community guidelines are purposely vague to give moderators wiggle room. In the case of Twitter, which used to have an unofficial label as “the free speech wing of the free speech party,” the policies have even been contradictory, and the network itself has sometimes appeared unsure why certain tweets are flagged, accounts suspended, or verification stripped.

This isn’t a case for zero censorship. There are things virtually everyone would agree shouldn’t be on a network as popular and public-facing as Facebook or YouTube. Neo-Nazis spouting hateful ideology, graphic depictions of violence, direct threats — they all need to go.

But audiences have been clamoring for more content policing beyond just the most extreme. And by and large, the networks have acquiesced to the demand, staffing up to review more content by hand since algorithms can only do so much. But the companies are only as good as the humans they hire, and the job of content moderator is largely a thankless one — the daily slog of viewing vast amounts of objectionable content has a psychological toll attached.

The companies are only as good as the humans they hire.

Historically, the big tech companies haven’t been good at human intervention. In 2016, human moderators at Facebook were accused of purging conservative news from its trending topics section. It also removed a historic photo from the Vietnam war that same year, justified its decision, then reversed it. Twitter’s CEO has basically admitted its enforcement policies have been a mess. Even Google, generally thought to be the most algorithmically driven of the bunch, isn’t immune from human failings: Back in 2012, it tried to challenge Facebook’s social media dominance with Google+, its own social network, and deliberately put companies’ Google+ pages higher than their Facebook pages in search results.

Put simply: We shouldn’t trust Twitter, Facebook, Google, YouTube, or any other private tech company to create a system that consistently punishes bad actors based on a common standard. Humans are driven by biases. Systems can help correct for those biases, but we can’t judge without knowing what those systems are. And if they don’t work as intended, that could leave us with a worse problem than when we started: turning each network into its own massive filter bubble, where anything deemed offensive is purged.

Every platform is now a content cop on the beat. Maybe they always were, but when you make loud public statements that you’re going to start more actively policing content, it means more calls to the police. That’s going to inevitably mean more users getting kicked out of these networks, some as big as Logan Paul.

On the surface, that may feel OK. YouTube can afford to lose a few big personalities, and maybe it should. The more difficult question is what do such actions say about the network? When users are punished for offensive content, what do those users’ sympathizers and supporters think — those who might not agree with inconsistent applications of poorly worded policies? How do they start to think of YouTube (and Twitter, Facebook, etc.)? How do they express their uneasiness, and where do they start spending their time?

I don’t know the answers to those questions. But I do know simple math: The more things you push out of a bubble, the smaller it’ll get. And you might not like what forms alongside it.

YouTube is pulling Tide Pod Challenge videos

People doing stupid stuff on the Internet is hardly news. To wit: The Tide Pod Challenge, in which YouTubers have been filming themselves eating — or, we really hope, pretending to eat — laundry detergent pods.

Why? Uh, because they’re brightly colored?? We guess???????

Obviously this is Darwin Award levels of idiocy — given that detergent is, y’know, not at all edible, toxic to biological life and a potent skin irritant. It would also literally taste of soap. Truly, one wonders what social historians will make of the 21st century.

But while eating Tide Pods appears to have started as a silly meme — which now has its own long and rich history — once YouTubers got hold of it, well, things started to turn from funny fantasy to toxic reality.

Funny that.

So now YouTube appears to be trying to get ahead of any wider societal outcry over (yet more) algorithmically accelerated idiocy on its platform — i.e. when sane people realize kids have been filming themselves eating detergent just to try to go viral on YouTube — and is removing Tide Pod Challenge videos.

At least when they have been reported.

A YouTube spokesperson sent us the following statement on this: “YouTube’s Community Guidelines prohibit content that’s intended to encourage dangerous activities that have an inherent risk of physical harm. We work to quickly remove flagged videos that violate our policies.”

Under YouTube’s policy, channels that have a video removed on such grounds will get a strike — and channels that accrue too many strikes can face suspension.

At the time of writing it’s still possible to find Tide Pod Challenge videos on YouTube, though most of the videos being surfaced seem to be denouncing the stupidity of the ‘challenge’ (even if they have clickbait-y titles that claim they’re going to eat the pods — hey, savvy YouTubers know a good viral backlash bandwagon to jump on when they see one!).

Other videos that we found — still critical of the challenge but which include actual footage of people biting into Tide Pods — require sign in for age verification and are also gated behind a warning message that the content “may be inappropriate for some users”.

As we understand it, videos that discuss the Tide Pod challenge in a news setting or educational/documentary fashion are still allowed — although it’s not clear where exactly YouTube moderators are drawing the tonal line.

Fast Company reports that YouTube clamping down on Tide Pod Challenge videos is in response to pressure from the detergent brand’s parent company, Procter & Gamble — which has said it is working with “leading social media sites” to encourage the removal of videos that violate their policies.

Because, strangely enough, Procter & Gamble is not ecstatic that people have been trying to eat its laundry pods…

And while removal of videos that encourage dangerous activities is not a new policy on YouTube’s part, YouTube taking a more pro-active approach to enforcement of its own policies is clearly the name of the game for the platform these days.

That’s because a series of YouTube content scandals blew up last year — triggering advertisers to start pulling their dollars off the platform, including after marketing messages were displayed alongside hateful and/or obscene content.

YouTube responded to the ad boycott by saying it would give brands more control over where their ads appeared. It also started demonetizing certain types of videos.

There was also a spike in concern last year about the kinds of videos children were being exposed to on YouTube — and indeed the kinds of activities YouTubers were exposing their children to in their efforts to catch the algorithm’s eye — which also led the company to tighten its rules and enforcement.

YouTube is also increasingly in politicians’ crosshairs for algorithmically accelerating extremism — and it made a policy shift last year to also remove non-violent content made by listed terrorists.

It remains under rising political pressure to come up with technical solutions for limiting the spread of hate speech and other illegal content — with European Union lawmakers warning platforms last month they could look to legislate if tech giants don’t get better at moderating content themselves.

At the end of last year YouTube said it would be increasing its content moderation and other enforcement staff to 10,000 in 2018, as it sought to get on top of all the content criticism.

The long and short of all this is that user-generated content is increasingly under the spotlight, and some of the things YouTubers have been showing and doing to gain views by ‘pleasing the algorithm’ have turned out to be rather less pleasing for YouTube the company.

As one YouTuber abruptly facing demonetization of his channel — which included videos of his children doing things like being terrified at flu jabs or crying over dead pets — told Buzzfeed last year: “The [YouTube] algorithm is the thing we had a relationship with since the beginning. That’s what got us out there and popular. We learned to fuel it and do whatever it took to please the algorithm.”

Another truly terrible example of the YouTuber quest for viral views occurred at the start of this year, when YouTube ‘star’, Logan Paul — whose influencer status had earned him a position in Google’s Preferred ad program — filmed himself laughing beside the dead body of a suicide victim in Japan.

It gets worse: This video had actually been manually approved by YouTube moderators, going on to rack up millions of views and appearing in the top trending section on the platform — before Paul himself took it down in the face of widespread outrage.

In response to that, earlier this week YouTube announced yet another tightening of its rules, around creator monetization and partnerships — saying content on its Preferred Program would be “the most vetted”.

Last month it also dropped Paul from the partner program.

Compared to that YouTube-specific scandal, the Tide Pod Challenge looks like a mere irritant.


YouTube will try to prevent the next Logan Paul fiasco by cutting off the cash

YouTube sees one central element to its problems: money. 

On Tuesday the company announced changes to how videos on the platform make money, adding in a heavy dose of human moderation and new tools to make sure advertising cash gets put toward the right kinds of videos — and never hate speech, child exploitation, and other questionable content. 

Yes, that would conceivably include videos like the one Logan Paul recently uploaded featuring the body of a person who had recently committed suicide. 

YouTube “will be strengthening our requirements for monetization so spammers, impersonators, and other bad actors can’t hurt our ecosystem or take advantage of you, while continuing to reward those who make our platform great,” wrote Neal Mohan, Chief Product Officer, and Robert Kyncl, Chief Business Officer, in a blog post.

The changes come after months of YouTube weathering advertiser unrest and public criticism for the videos it hosts — and the ads it plays against them. The company has shown ads for major brands next to videos depicting hate speech. It’s also hosted disturbing cartoons in its YouTube Kids section, and allowed Logan Paul’s troubling video to show ads and reach its “Trending” section.

The changes primarily address smaller channels. YouTube is making it tougher for creators to become part of its partner program, which allows videos to be monetized. A channel will now need to have accrued 1,000 subscribers and 4,000 hours of watch time over the past 12 months to gain access to the program. Otherwise, their videos won’t be eligible to make money.
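To make the new threshold concrete, the announced rule boils down to a simple conjunction: a channel must clear both the subscriber bar and the watch-time bar. A minimal sketch of that check (the function name and parameters are our own illustration, not YouTube’s actual API):

```python
def is_eligible_for_partner_program(subscribers: int, watch_hours_past_year: float) -> bool:
    """Sketch of YouTube's announced Partner Program thresholds:
    at least 1,000 subscribers AND 4,000 hours of watch time
    accrued over the past 12 months."""
    return subscribers >= 1_000 and watch_hours_past_year >= 4_000

# A channel with a big audience but little watch time still would not qualify:
print(is_eligible_for_partner_program(5_000, 3_500))  # False
print(is_eligible_for_partner_program(1_200, 4_100))  # True
```

Note that both conditions must hold, which is what lets the change target the long tail of small channels regardless of how the view counts are distributed.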

YouTube stressed that the vast majority of channels that will be cut out by this change didn’t make much money, with 99 percent earning less than $100 last year. 

“After thoughtful consideration, we believe these are necessary compromises to protect our community,” Mohan and Kyncl wrote.

There are also changes for big creators. Human moderators will look at every single video that is part of its Google Preferred program, i.e., the high-end video ad units that it offers up to brands.

Under this system, Paul’s video would have been reviewed by a human moderator, who would have conceivably flagged it as not being eligible for Google Preferred ads due to its disturbing content.

This theoretically removes the monetary incentive for creators to constantly push the envelope with extreme video, while also giving marketers more assurances that their ads won’t run against dark or disturbing content.

Aside from cutting off the money, YouTube has yet to announce any efforts to better regulate the content itself, but added that it will be talking to creators to figure out a way forward.

Mohan and Kyncl seemed to allude to the Paul situation in their blog post.

“While this change will tackle the potential abuse of a large but disparate group of smaller channels, we also know that the bad action of a single, large channel can also have an impact on the community and how advertisers view YouTube. We’ll be working to schedule conversations with our creators in the months ahead so we can hear your thoughts and ideas and what more we can do to tackle that challenge,” they wrote.


YouTube’s in-app messaging and Community tab to make their way to YouTube TV, YouTube Music

YouTube is aiming to bring its set of social features, including the in-app messaging system and “Community” tab for creators, to its wider suite of apps. Specifically, the company is interested in porting those features to its YouTube TV app aimed at cord cutters, as well as its Music app.

The company won’t confirm a timeline in terms of when these features would launch, but it’s something that’s clearly at the forefront of YouTube’s product strategy.

The move would help further differentiate YouTube’s over-the-top streaming service, YouTube TV, from competitors like Sling TV, Hulu Live TV, PlayStation Vue and others. And it would allow the company to leverage its strengths in social features to build out a larger platform that spans both web and mobile properties in order to create a large, combined user base of people who stream media content on their devices.

“People think about YouTube as this place where you play your favorite video content — and of course it is. But really what it is underneath is this kind of community that exists underneath between content creators and fans; and fans and fans,” said YouTube Chief Product Officer, Neal Mohan, in a conversation at CES where he talked about how he sees the potential for adding social features to more YouTube products.

“We think that magic of YouTube that exists in the main experience can apply to YouTube TV experience as well,” he said.

For example, YouTube’s in-app video sharing and messaging feature, launched back in summer 2017, offers a way for friends to share videos and their reactions without having to leave the YouTube app to use another mobile messaging service.

Mohan says this is the sort of feature that would make sense to bring to YouTube TV — or even YouTube’s Music app — in the future.

When added to YouTube TV, the messaging feature would basically look the same as it does today in YouTube’s mobile app.

“I don’t want to create any additional cognitive load for users — every user of the YouTube TV app is probably also a YouTube user. It should feel familiar. Their friends are the same,” said Mohan.

He added that the idea of porting social features from YouTube to YouTube TV makes sense for the community features YouTube has been building, as well, such as the new Community tab where creators can interact with fans.

“That’s where content creators are posting not just video, but images, text, and polls and just interacting with their community. I think that’s a concept that can apply regardless of the type of content,” Mohan said.

In practice, this could mean that TV content creators would have their own tab to engage their fan base directly in the app where you’re consuming their content.

This isn’t something only YouTube is planning, of course. Hulu this week said that it was also developing social features to better highlight what friends are watching and recommending, as well as those that could offer a co-viewing experience. (YouTube, meanwhile, has been testing co-watching in an app called Uptime, developed within Google’s internal R&D division, Area 120.)

Philo, a new low-cost, sports-free streaming service, also has social features in development that it plans to launch this year.

In other words, adding a social layer to the TV watching experience may not be a differentiator for YouTube long-term, but that doesn’t mean it won’t have an advantage in this space.

“YouTube is well-positioned to deliver those types of really interesting use cases to our consumers,” said Mohan.

That is, YouTube has always been a social community of sorts — it’s just that, now, that community is being better surfaced through features like the new tab for creator-to-fan interaction and in-app messaging.