
Report: Twitter deleted tweets related to the Russian investigation

Twitter has deleted tweets that could be helpful to investigators currently examining Russia’s suspected manipulation of the social network during the 2016 presidential election, U.S. government cybersecurity officials told Politico.

According to the officials, Twitter is either unable or unwilling to retrieve a “substantial amount” of tweets posted by bots and fake users spreading disinformation. Those accounts, which have been tied to Russia, have since deleted the tweets.

The lost tweets are apparently casualties of Twitter’s privacy policy, which states that when an account holder deletes a tweet, it is permanently removed from Twitter’s servers within 30 days through an automated process. The account holders, who are suspected of having spread false or exaggerated pro-Trump and anti-Clinton narratives, deleted their tweets and then the accounts themselves, at which point both were permanently purged from Twitter’s systems.

It turns out, that’s how Twitter is supposed to work. Twitter’s guidelines for law enforcement merely state, “Content deleted by account holders (e.g., tweets) is generally not available.” 

A Twitter spokesperson told Mashable that Twitter has “strong policies in place to protect the privacy of our users.” The company declined to comment on the specific deletion policy. 

Historically, Twitter has been accused of being less than fully forthcoming with federal investigators. At its recent Senate briefing, Virginia senator Mark Warner, the ranking Democrat on the U.S. Senate Intelligence Committee, called the company’s presentation “frankly inadequate on every level.” 

Twitter sees things differently: “We have committed to working with committee investigators to address their questions to the best of our ability,” a company spokesperson told Mashable.

The company declined to comment on whether it is attempting to retrieve the deleted tweets, or whether it will present them to investigators if retrieved. 

With access to all of the tweets from those accounts, the investigators might be better able to construct a timeline of events and figure out the account holders’ goals. But, depending on Twitter’s ability to reconstruct its own past, those tweets may be gone forever. 


Google shuts YouTube channel implicated in Kremlin political propaganda ops


A YouTube channel implicated in Russian disinformation operations targeting the 2016 U.S. election has been taken down by Google.

Earlier this week The Daily Beast claimed the channel, run by two black video bloggers calling themselves Williams and Kalvin Johnson, was part of Russian disinformation operations — saying this had been confirmed to it by investigators examining how social media platforms had been utilized in a broad campaign by Russia to try to influence US politics.

The two vloggers apparently had multiple social media accounts on other platforms. And their content was pulled from Facebook back in August after being identified as Russian-backed propaganda, according to the Daily Beast’s sources.

Videos posted to the YouTube channel, which was live until earlier this week, apparently focused on criticizing and abusing Hillary Clinton, including accusing her of being a racist as well as spreading various conspiracy theories about the Clintons, along with pro-Trump commentary.

The content appeared intended for an African American audience, although the videos did not gain significant traction on YouTube, according to The Daily Beast, which said they had only garnered “hundreds” of views prior to the channel being closed (vs the pair’s Facebook page having ~48,000 fans before it was closed, and videos uploaded there racking up “thousands” of views).

A Google spokesman ignored the specific questions we put to the company about the YouTube channel, sending only this generic statement: “All videos uploaded to YouTube must comply with our Community Guidelines and we routinely remove videos flagged by our community that violate those policies. We also terminate the accounts of users who repeatedly violate our Guidelines or Terms of Service.”

So while the company appears to be confirming it took the channel down, it’s not providing a specific reason beyond TOS violations at this stage. (And the offensive nature of the content offers more than enough justification for Google to shutter the channel.)

However, earlier this week the Washington Post reported that Google had uncovered evidence that Russian operatives spent money buying ads on its platform in an attempt to interfere in the 2016 U.S. election, citing people familiar with the investigation.

The New York Times also reported that Google has found accounts believed to be associated with the Russian government — claiming Kremlin agents purchased $4,700 worth of search ads and more traditional display ads. It also said the company has found a separate $53,000 worth of ads with political material that were purchased from Russian internet addresses, building addresses or with Russian currency — though the newspaper’s source said it’s not clear whether the latter spend was definitively associated with the Russian government.

Google has yet to publicly confirm any of these reports. Though it has not denied them either. Its statement so far has been: “We are taking a deeper look to investigate attempts to abuse our systems, working with researchers and other companies, and will provide assistance to ongoing inquiries.”

The company has been called to testify to a Senate Intelligence Committee on November 1, along with Facebook, and Twitter. The committee is examining how social media platforms may have been used by foreign actors to influence the 2016 US election.

Last month Facebook confirmed Russian agents had utilized its platform in an apparent attempt to sow social division across the U.S. — revealing it had found purchases worth around $100,000 in targeted advertising, or some 3,000+ ads.

Twitter has also confirmed finding some evidence of Russian interference in the 2016 US election on its platform.

The wider question for all these user generated content platforms is how their stated preference for free speech (and hands off moderation) can co-exist with weaponized disinformation campaigns conducted by hostile foreign entities with apparently unfettered access to their platforms — especially given the disinformation does not appear limited to adverts, with content itself also being implicated (including, apparently, people being paid to create and post political disinformation).

User generated content platforms have not historically sold themselves on the professional quality of the content they make available. Rather, their USP has been the authenticity of the voices they offer access to (though it’s also fair to say they offer a conglomerated mix). But the question is what happens if social media users start to view that mix with increasing mistrust, as something that might be deliberately adulterated or infiltrated by malicious elements?

The tech platforms’ lack of a stated editorial agenda of their own could result in the perception that the content they surface is biased anyway — and in ways many people might equally view with mistrust. The risk is that the tech starts to look like a fake news toolkit for mass manipulation.

Google’s probe into Russian disinformation finds ad buys, report claims


Google has uncovered evidence that Russian operatives exploited its platforms in an attempt to interfere in the 2016 U.S. election, according to the Washington Post.

It says tens of thousands of dollars were spent on ads by Russian agents aiming to spread disinformation across Google’s products — including its video content platform YouTube, as well as advertising associated with Google search, Gmail, and the company’s DoubleClick ad network.

The newspaper says its report is based on information provided by people familiar with Google’s investigation into whether Kremlin-affiliated entities sought to use its platforms to spread disinformation online.

Asked for confirmation of the report, a Google spokesman told us: “We have a set of strict ads policies including limits on political ad targeting and prohibitions on targeting based on race and religion. We are taking a deeper look to investigate attempts to abuse our systems, working with researchers and other companies, and will provide assistance to ongoing inquiries.”

So it’s telling that Google is not out-and-out denying the report — suggesting the company has indeed found something via its internal investigation, though isn’t ready to go public with whatever it’s unearthed as yet.

Google, Facebook, and Twitter have all been called to testify to a Senate Intelligence Committee on November 1 which is examining how social media platforms may have been used by foreign actors to influence the 2016 US election.

Last month Facebook confirmed Russian agents had utilized its platform in an apparent attempt to sow social division across the U.S. by purchasing $100,000 of targeted advertising (some 3,000+ ads — though the more pertinent question is how far Facebook’s platform organically spread the malicious content; Facebook has claimed only around 10M users saw the Russian ads, though others believe the actual figure is likely to be far higher).

CEO Mark Zuckerberg has tried to get out ahead of the incoming political and regulatory tide by announcing, at the start of this month, that the company will make ad buys more transparent — even as the U.S. election agency is running a public consultation on whether to extend political ad disclosure rules to digital platforms.

(And, lest we forget, late last year he entirely dismissed the notion of Facebook influencing the election as “a pretty crazy idea” — words he’s since said he regrets.)

Safe to say, tech’s platform giants are now facing the political grilling of their lives, and on home soil, as well as the prospect of the kind of regulation they’ve always argued against finally being looped around them.

But perhaps their greatest potential danger is the risk of huge reputational damage if users learn to mistrust the information being algorithmically pushed at them — seeing instead something dubious that may even have actively malicious intent.

While much of the commentary around the US election social media probe has, thus far, focused on Facebook, all major tech platforms could well be implicated as paid aides to foreign entities trying to influence U.S. public opinion — or at least any/all whose business entails applying algorithms to order and distribute third party content at scale.

Just a few days ago, for instance, Facebook said it had found Russian ads on its photo sharing platform Instagram, too.

In Google’s case the company controls vastly powerful search ranking algorithms, as well as ordering user generated content on its massively popular video platform YouTube.

And late last year The Guardian suggested Google’s algorithmic search suggestions had been weaponized by an organized far right campaign — highlighting how its algorithms appeared to be promoting racist, Nazi ideologies and misogyny in search results.

(Though criticism of tech platform algorithms being weaponized by fringe groups to drive skewed narratives into the mainstream dates back further still — such as to the #Gamergate fallout, in 2014, when we warned that popular online channels were being gamed to drive misogyny into the mainstream media and all over social media.)

Responding to The Guardian’s criticism of its algorithms last year, Google claimed: “Our search results are a reflection of the content across the web. This means that sometimes unpleasant portrayals of sensitive subject matter online can affect what search results appear for a given query. These results don’t reflect Google’s own opinions or beliefs — as a company, we strongly value a diversity of perspectives, ideas and cultures.”

But it looks like the ability of tech giants to shrug off questions and concerns about their algorithmic operations — and how these may be subverted by hostile entities — has drastically shrunk.

According to the Washington Post, the Russian buyers of Google ads do not appear to be from the same Kremlin-affiliated troll farm which bought ads on Facebook — which it suggests is a sign that the disinformation campaign could be “a much broader problem than Silicon Valley companies have unearthed so far”.

Late last month Twitter also said it had found hundreds of accounts linked to Russian operatives. And the newspaper’s sources claim that Google used developer access to Twitter’s firehose of historical tweet data to triangulate its own internal investigation into Kremlin ad buys — linking Russian Twitter accounts to accounts buying ads on its platform in order to identify malicious spend trickling into its own coffers.

A spokesman for Twitter declined to comment on this specific claim but pointed to a lengthy blog post it penned late last month — on “Russian Interference in 2016 US Election, Bots, & Misinformation”. In that post, Twitter disclosed that the RT (formerly Russia Today) news network spent almost $275,000 on U.S. ads on Twitter in 2016.

It also said that of the 450 accounts Facebook had shared as part of its review into Russian election interference, Twitter had “concluded” that 22 had “corresponding accounts on Twitter” — accounts which it said had either already been suspended, mostly for spam, or were suspended after being identified.

“Over the coming weeks and months, we’ll be rolling out several changes to the actions we take when we detect spammy or suspicious activity, including introducing new and escalating enforcements for suspicious logins, Tweets, and engagements, and shortening the amount of time suspicious accounts remain visible on Twitter while pending confirmation. These are not meant to be definitive solutions. We’ve been fighting against these issues for years, and as long as there are people trying to manipulate Twitter, we will be working hard to stop them,” Twitter added.

As with the political (and sometimes commercial) pressure also being applied to tech platforms to speed up takedowns of online extremism, it seems logical that the platforms could improve internal efforts to thwart malicious use of their tools by sharing more information with each other.

In June Facebook, Microsoft, Google and Twitter collectively announced a new partnership aimed at reducing the accessibility of internet services to terrorists, for instance — dubbing it the Global Internet Forum to Counter Terrorism — and aiming to build on an earlier announcement of an industry database for sharing unique digital fingerprints to identify terrorist content.

But whether some similar kind of collaboration could emerge in future to try to collectively police political spending remains to be seen. Joining forces to tackle the spread of terrorist propaganda online may end up looking trivially easy compared with accurately identifying and publicly disclosing what is clearly a much broader spectrum of politicized content that has nonetheless also been created with malicious intent.

According to the New York Times, Russia-bought ads that Facebook has so far handed over to Congress apparently included a diverse spectrum of politicized content, from pages for gun-rights supporters, to those supporting gay rights, to anti-immigrant pages, to pages that aimed to appeal to the African-American community — and even pages for animal lovers.

One thing is clear: Tech giants will not be able to get away with playing down the power of their platforms in public.

Not at the Congress hearing next month. And likely not for the foreseeable future.

Featured Image: Mikhail Metzel/Getty Images

Think you hate selfies? The Russian military might ban them.

Russian soldiers may soon be saying goodbye to their Snapchats. 

According to the BBC, the Russian Ministry of Defense, which manages and regulates Russia’s armed forces, has drafted a law that would ban soldiers and military personnel from posting on social media. The ban is expected to take effect in January 2018. 

The ban comes, according to the bill, in the name of national security. Social media posts have, in the past, revealed Russian military details to enemy combatants, the bill claims. For example, in 2015, a Vice reporter was famously able to use a soldier’s series of selfies to confirm Russia’s military involvement in eastern Ukraine.

The plan is also meant to prevent Russia’s enemy combatants from tracking the source of social media posts using geolocation. Tweets, for example, can be linked to their GPS coordinates through multiple methods, even if their authors have Tweet Location disabled. 
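To illustrate one of the simpler linking methods: when an account has opted in to geotagging, Twitter’s v1.1 tweet JSON carries a GeoJSON point in its "coordinates" field (null when location is off, which is why investigators also lean on photo metadata and other signals). A minimal sketch, using a hypothetical payload in that format:

```python
import json

# Hypothetical geotagged tweet payload in Twitter API v1.1 format.
# When geotagging is enabled, "coordinates" is a GeoJSON Point ordered
# [longitude, latitude]; when disabled, the field is null.
raw = '''{
  "id_str": "123456789",
  "text": "Example tweet",
  "coordinates": {"type": "Point", "coordinates": [30.5234, 50.4501]}
}'''

tweet = json.loads(raw)

def extract_location(tweet):
    """Return (latitude, longitude) if the tweet is geotagged, else None."""
    point = tweet.get("coordinates")
    if point and point.get("type") == "Point":
        lon, lat = point["coordinates"]
        return lat, lon
    return None

print(extract_location(tweet))  # (50.4501, 30.5234)
```

A tweet without the opt-in yields `None` here, which is why the drafted ban targets the posts themselves rather than just the location toggle.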

The Ministry of Defense is not the first Russian government agency to take this step. The Ministry told the BBC that Russia’s FSB, the spy agency formerly known as the KGB, also restricts its staff’s social media content. 

And the move shouldn’t come as a surprise. 50,000 employees of Russian companies co-owned by the state recently received smartphones designed to keep apps from tracking any user activity. And last year, the Kremlin banned LinkedIn, allegedly to protect users’ privacy. 

The Kremlin now, it seems, views social media as a threat not only to privacy, but to national security. 


Senate Intel committee calls on Facebook to release Russian ads


In an update on the progress of its investigation into Russian interference in the 2016 U.S. election, the Senate Intelligence Committee weighed in on recent revelations that have implicated major tech companies. The committee plans to hear open public testimony from Facebook, Twitter and Google on November 1 pertaining to their role in selling political ads to Russian government entities and fostering an environment in which shadowy foreign-funded political propaganda efforts could thrive.

According to the chairman, Richard Burr, it took time for tech leaders to warm up to the notion that they were responsible for influence campaigns run on their platforms. “I was concerned at first that some social media platforms did not take this threat seriously enough,” Burr said. “The three companies we’ve invited, Google, Twitter and Facebook, will appear in a public hearing.”

Burr made it clear that his committee could not release the ads that Facebook handed over as part of the investigation, but Facebook and the other companies are not constrained by the committee from doing so.

“We don’t release documents provided to our committee, period,” Burr said. “[It’s] not a practice that we’re going to get into. Clearly if any of the social media platforms would like to do that, we’re fine with them doing it because we’ve already got scheduled an open hearing. We believe that the American people deserve to know firsthand.”

Senate Intelligence Vice Chairman Mark Warner echoed Burr’s statement.

“There will be more forensics done by these companies,” Warner said. “I think they’ve got some more work to do and I’m pleased to say I think they’re out doing that work now.”

“At the end of the day it’s important that the public sees these ads,” he added.

The committee is focused on three areas of the Russian ad scandal. First, Burr and Warner stated that Americans have a right to know the source of social media ads and if they were created by “foreign entities.” Second, when a story is trending, the committee believes that Americans should be able to determine if that trending topic is a result of bots or otherwise artificial engagement. Third, “you ought to be able to go down and take a look at an ad run for or against you like you’d be able to get a look at that content on TV,” Burr said.

The committee reiterated that its investigation had made it clear that Russia’s efforts to interfere with the American political process are ongoing.

“The Russian active measures efforts did not end on election day 2016,” Warner said. “We need to be on guard.”

For its part, Facebook tried to get ahead of Wednesday’s press briefing, printing a full-page ad in The Washington Post as damage control for whatever Burr and Warner might say about the company’s role in the election and its interactions with the committee.

TechCrunch has reached out to Facebook about its reaction to today’s committee briefing and will update if and when we hear back.

Featured Image: Facebook