All posts in “Propaganda”

Google shuts YouTube channel implicated in Kremlin political propaganda ops


A YouTube channel that had been implicated in Russian disinformation operations targeting the 2016 U.S. election has been taken down by Google.

Earlier this week The Daily Beast claimed the channel, run by two black video bloggers calling themselves Williams and Kalvin Johnson, was part of Russian disinformation operations — saying this had been confirmed to it by investigators examining how social media platforms had been utilized in a broad campaign by Russia to try to influence US politics.

The two vloggers apparently had multiple social media accounts on other platforms. And their content was pulled from Facebook back in August after being identified as Russian-backed propaganda, according to the Daily Beast’s sources.

Videos posted to the YouTube channel, which was live until earlier this week, apparently focused on criticizing and abusing Hillary Clinton, including accusing her of being a racist and spreading various conspiracy theories about the Clintons, along with pro-Trump commentary.

The content appeared intended for an African American audience, although the videos did not gain significant traction on YouTube, according to The Daily Beast, which said they had only garnered “hundreds” of views prior to the channel being closed (vs the pair’s Facebook page having ~48,000 fans before it was closed, and videos uploaded there racking up “thousands” of views).

A Google spokesman ignored the specific questions we put to the company about the YouTube channel, sending only this generic statement: “All videos uploaded to YouTube must comply with our Community Guidelines and we routinely remove videos flagged by our community that violate those policies. We also terminate the accounts of users who repeatedly violate our Guidelines or Terms of Service.”

So while the company appears to be confirming it took the channel down, it’s not providing a specific reason beyond TOS violations at this stage. (And the offensive nature of the content offers more than enough justification for Google to shutter the channel.)

However, earlier this week the Washington Post reported that Google had uncovered evidence that Russian operatives spent money buying ads on its platform in an attempt to interfere in the 2016 U.S. election, citing people familiar with the investigation.

The New York Times also reported that Google has found accounts believed to be associated with the Russian government — claiming Kremlin agents purchased $4,700 worth of search ads and more traditional display ads. It also said the company has found a separate $53,000 worth of ads with political material that were purchased from Russian internet addresses, building addresses or with Russian currency — though the newspaper’s source said it’s not clear whether the latter spend was definitively associated with the Russian government.

Google has yet to publicly confirm any of these reports, though it has not denied them either. Its statement so far has been: “We are taking a deeper look to investigate attempts to abuse our systems, working with researchers and other companies, and will provide assistance to ongoing inquiries.”

The company has been called to testify to the Senate Intelligence Committee on November 1, along with Facebook and Twitter. The committee is examining how social media platforms may have been used by foreign actors to influence the 2016 US election.

Last month Facebook confirmed Russian agents had utilized its platform in an apparent attempt to sow social division across the U.S. — revealing it had found around $100,000 worth of targeted advertising purchases, spanning some 3,000+ ads.

Twitter has also confirmed finding some evidence of Russian interference in the 2016 US election on its platform.

The wider question for all these user generated content platforms is how their stated preference for free speech (and hands off moderation) can co-exist with weaponized disinformation campaigns conducted by hostile foreign entities with apparently unfettered access to their platforms — especially given the disinformation does not appear limited to adverts, with content itself also being implicated (including, apparently, people being paid to create and post political disinformation).

User generated content platforms have not historically sold themselves on the professional quality of the content they make available. Rather, their USP has been the authenticity of the voices they offer access to (though it’s also fair to say they offer a conglomerate mix). But the question is what happens if social media users start to view that mix with increasing mistrust — as something that might be deliberately adulterated or infiltrated by malicious elements?

The tech platforms’ lack of a stated editorial agenda of their own could result in the perception that the content they surface is biased anyway — and in ways many people might equally view with mistrust. The risk is the tech starts to look like a fake news toolkit for mass manipulation.

Google to ramp up AI efforts to ID extremism on YouTube


Last week Facebook solicited help with what it dubbed “hard questions” — including how it should tackle the spread of terrorism propaganda on its platform.

Yesterday Google followed suit with its own public pronouncement, via an op-ed in the FT newspaper, explaining how it’s ramping up measures to tackle extremist content.

Both companies have been coming under increasing political pressure, especially in Europe, to do more to quash extremist content — with politicians in the UK and Germany, among others, pointing the finger of blame at platforms such as YouTube for hosting hate speech and extremist content.

Europe has suffered a spate of terror attacks in recent years, with four in the UK alone since March. And governments in the UK and France are currently considering whether to introduce a new liability for tech platforms that fail to promptly remove terrorist content — arguing that terrorists are being radicalized with the help of such content.

Earlier this month the UK’s prime minister also called for international agreements between allied, democratic governments to “regulate cyberspace to prevent the spread of extremism and terrorist planning”.

In Germany, meanwhile, a proposal that includes big fines for social media firms that fail to take down hate speech has already gained government backing.

Besides the threat of fines being cast into law, there’s an additional commercial incentive for Google after YouTube faced an advertiser backlash earlier this year related to ads being displayed alongside extremist content, with several companies pulling their ads from the platform.

Google subsequently updated the platform’s guidelines to stop ads being served against controversial content, including videos containing “hateful content” and “incendiary and demeaning content”, so their makers could no longer monetize such videos via Google’s ad network. The measure can only succeed, though, if the company is able to reliably identify such content in the first place.

Rather than requesting ideas for combating the spread of extremist content, as Facebook did last week, Google is simply stating what its plan of action is — detailing four additional steps it says it’s going to take, and conceding that more action is needed to limit the spread of violent extremism.

“While we and others have worked for years to identify and remove content that violates our policies, the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done. Now,” writes Kent Walker, Google’s general counsel, in the op-ed.

The four additional steps Walker lists are:

  1. increased use of machine learning technology to try to automatically identify “extremist and terrorism-related videos” — though the company cautions this “can be challenging”, pointing out that news networks can also broadcast terror attack videos, for example. “We have used video analysis models to find and assess more than 50 per cent of the terrorism-related content we have removed over the past six months. We will now devote more engineering resources to apply our most advanced machine learning research to train new ‘content classifiers’ to help us more quickly identify and remove extremist and terrorism-related content,” writes Walker. (A rough sketch of the classifier idea follows this list.)
  2. more independent (human) experts in YouTube’s Trusted Flagger program — aka people in the YouTube community who have a high accuracy rate for flagging problem content. Google says it will add 50 “expert NGOs”, in areas such as hate speech, self-harm and terrorism, to the existing list of 63 organizations that are already involved with flagging content, and will be offering “operational grants” to support them. It is also going to work with more counter-extremist groups to try to identify content that may be used to radicalize and recruit extremists.
    “Machines can help identify problematic videos, but human experts still play a role in nuanced decisions about the line between violent propaganda and religious or newsworthy speech. While many user flags can be inaccurate, Trusted Flagger reports are accurate over 90 per cent of the time and help us scale our efforts and identify emerging areas of concern,” writes Walker.
  3. a tougher stance on controversial videos that do not clearly violate YouTube’s community guidelines — including by adding interstitial warnings to videos that contain inflammatory religious or supremacist content. Google notes these videos also “will not be monetised, recommended or eligible for comments or user endorsements” — the idea being they will have less engagement and be harder to find. “We think this strikes the right balance between free expression and access to information without promoting extremely offensive viewpoints,” writes Walker.
  4. expanding counter-radicalisation efforts by working with Jigsaw (another Alphabet division) to implement the “Redirect Method” more broadly across Europe. “This promising approach harnesses the power of targeted online advertising to reach potential Isis recruits, and redirects them towards anti-terrorist videos that can change their minds about joining. In previous deployments of this system, potential recruits have clicked through on the ads at an unusually high rate, and watched over half a million minutes of video content that debunks terrorist recruiting messages,” says Walker.
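
Google hasn’t published details of these “content classifiers”, so the sketch below is purely illustrative of the general idea: score each piece of content and send anything above a threshold to human reviewers. It uses toy text metadata (say, video titles and descriptions) with scikit-learn; the training examples, labels and threshold are all invented, and Google’s real systems analyse the video itself at vastly larger scale.

    # Purely illustrative "content classifier" sketch; NOT Google's system.
    # Requires scikit-learn. Uses toy text metadata rather than video frames.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical labelled examples: 1 = queue for human review, 0 = leave alone.
    train_texts = [
        "recruitment video glorifying violent attacks",
        "join the fight and strike the unbelievers",
        "news report on yesterday's attack with eyewitness footage",
        "cooking tutorial: how to make flatbread",
    ]
    train_labels = [1, 1, 0, 0]

    # TF-IDF features feeding a logistic regression, wrapped in one pipeline.
    classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    classifier.fit(train_texts, train_labels)

    # Score new uploads; anything above a chosen threshold goes to human reviewers,
    # mirroring the machines-flag / humans-decide split Walker describes.
    new_texts = [
        "martyrdom propaganda compilation",
        "documentary about counter-extremism outreach",
    ]
    for text, score in zip(new_texts, classifier.predict_proba(new_texts)[:, 1]):
        print(f"{score:.2f}  {text}")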

Despite increasing political pressure over extremism — and the attendant bad PR (not to mention threat of big fines) — Google is evidently hoping to retain its torch-bearing stance as a supporter of free speech by continuing to host controversial hate speech on its platform, just in a way that means it can’t be directly accused of providing violent individuals with a revenue stream. (Assuming it’s able to correctly identify all the problem content, of course.)

Whether this compromise will please either side on the ‘remove hate speech’ vs ‘retain free speech’ debate remains to be seen. The risk is it will please neither demographic.

The success of the approach will also stand or fall on how quickly and accurately Google is able to identify content deemed a problem — and policing user-generated content at such scale is a very hard problem.

It’s not clear exactly how many thousands of content reviewers Google employs at this point — we’ve asked and will update this post with any response.

Facebook recently added 3,000 content reviewers to its headcount, bringing its total number of reviewers to 7,500. CEO Mark Zuckerberg also wants to apply AI to the content identification issue, but has previously said it’s unlikely to be able to do this successfully for “many years”.

Touching on what Google has been doing already to tackle extremist content, i.e. prior to these additional measures, Walker writes: “We have thousands of people around the world who review and counter abuse of our platforms. Our engineers have developed technology to prevent re-uploads of known terrorist content using image-matching technology. We have invested in systems that use content-based signals to help identify new videos for removal. And we have developed partnerships with expert groups, counter-extremism agencies, and the other technology companies to help inform and strengthen our efforts.”
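
Walker doesn’t elaborate on the “image-matching technology” used to block re-uploads, but this kind of system is commonly built on perceptual hashing: a compact fingerprint is computed for known bad imagery and compared against new uploads by counting differing bits. The sketch below shows the technique in its simplest form (an average hash over a single frame); the file names and the distance threshold are hypothetical, and Google’s actual matching system is not public.

    # Minimal perceptual-hash sketch for catching re-uploads of known imagery.
    # Illustrative only: Google has not detailed its matching system. Needs Pillow.
    from PIL import Image

    def average_hash(path: str, size: int = 8) -> int:
        """Downscale to a size x size greyscale image, then set one bit per pixel
        that is brighter than the mean; similar images yield similar bit patterns."""
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for pixel in pixels:
            bits = (bits << 1) | (1 if pixel > mean else 0)
        return bits

    def hamming_distance(a: int, b: int) -> int:
        # Number of differing bits between two hashes.
        return bin(a ^ b).count("1")

    # Hypothetical usage: compare a keyframe from a new upload against a stored
    # hash of known terrorist content; a small distance suggests a re-upload.
    known = average_hash("known_banned_frame.png")      # hypothetical file
    candidate = average_hash("new_upload_frame.png")    # hypothetical file
    if hamming_distance(known, candidate) <= 5:
        print("Likely re-upload of known content; route to review/removal.")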

Of course Russian hackers were behind #SyriaHoax

A video grabbed still image shows Syrian people receiving treatment after a chemical attack at a field hospital in Saraqib, Idlib province, northern Syria.

Image: STRINGER/EPA/REX/Shutterstock

A conspiracy theory questioning whether a chemical weapons attack was actually perpetrated by Syrian President Bashar al-Assad started making the rounds shortly after that tragic incident shocked the world.

Surprise, surprise, now we know Russian social media posts fueled that conspiracy, according to ABC.

The conspiracy started with a pro-Assad website in Lebanon.

Al-Masdar News published a story alleging problems with the evidence that Assad’s government was behind the chemical attack, even though the governments of many nations have little doubt about the perpetrator.

As many pro-Assad folks do, the website attacked a Syrian rescue group known as The White Helmets. The group — known for their headgear and their willingness to save civilians after government attacks collapse buildings — provided evidence that the chemical attack was perpetrated by Assad’s forces. But the article derided them as “al-Qaeda-affiliated,” and concluded that the chemical weapons attack allegation wasn’t true.

That’s when Russian social media accounts picked up the thread, and it didn’t take long before alt-right and conspiracy theorist guru Mike Cernovich found the story and promoted it with the hashtag, #SyriaHoax. Soon, that hashtag was trending across the United States. 

Cernovich latching onto the story should tell the rest of us that its credibility is pretty damn questionable. The man is also a #pizzagate truther, which means he claims to believe Hillary Clinton’s campaign ran a child sex ring out of a pizza place in Washington, D.C. Cernovich is also a rape apologist, and has said Clinton has Parkinson’s disease.

Though analysts didn’t tie the promotion of #SyriaHoax to the Russian government, the Kremlin is skilled in the art of propaganda. Witnesses at a recent Senate hearing demonstrated how the Russian government promoted false news stories during the United States’ 2016 election and systematically leaked information about candidates that they obtained by hacking into the emails of campaign officials. 

This misinformation campaign would only be the latest. 
