
Twitter is working on a policy to fight deepfakes and it wants users’ help

Twitter is the latest social platform to confirm that it’s working on a policy to address the rise of deepfakes.

The company plans to update its policies to address “synthetic and manipulated media,” Twitter revealed. The announcement follows comments from Facebook CEO Mark Zuckerberg, who said earlier this year that the social network was “evaluating” potential policies of its own.

Deepfakes, or “synthetic and manipulated media,” as Twitter calls them, are videos that have been realistically altered using artificial intelligence. The issue was thrust into the spotlight earlier this year after an altered video of House Speaker Nancy Pelosi went viral on Facebook.

That video, which Facebook fact checkers eventually debunked, wasn’t technically a deepfake, since it was merely slowed down rather than manipulated with AI. But it nevertheless drew attention to the danger of deceptively edited video spreading on social media.

Twitter has previously banned fake porn videos with celebrity faces, but doesn’t yet have a broad policy to address manipulated video in other contexts.

In a tweet, Twitter said it plans to define synthetic media as “media that’s been significantly altered or created in a way that changes the original meaning/purpose, or makes it seem like certain events took place that didn’t actually happen.”

The company didn’t indicate what its policy would look like, or when it might curtail these types of media, but suggested it would prioritize physical safety and the potential for “offline harm.”

Twitter plans to solicit feedback from its users and other experts before coming up with a more exact policy — similar to the wide-ranging approach it’s taken to improving conversations.

If recent history is any indication, though, it will still be some time before Twitter has a concrete policy ready to implement. The company announced it would ban “dehumanizing language” on its platform last September, but it didn’t start implementing the new policy until nearly a year later. And even then, the new rules so far only apply to a narrow subset of language aimed at religious groups (Twitter says it plans to expand the rules, but hasn’t provided a timeline for doing so). 

“We need to consider how synthetic media is shared on Twitter in potentially damaging contexts,” Twitter said Monday.

“We want to listen and consider your perspectives in our policy development process. We want to be transparent about our approach and values.”

Instagram adds ‘false information’ labels to prevent fake news from going viral

Facebook says it’s getting more serious about preventing false information from going viral on Instagram.

The app will add “false information” labels that obscure posts that have been debunked by Facebook’s fact checkers, the company announced. The labels, which will roll out over the next month, will appear on posts in Stories and Instagram’s main feed. Users will still be able to view the original post, but they’ll have to click “See Post” to get there. 

The update comes less than two weeks after the Senate Intelligence Committee released the second volume of its report on interference in the 2016 election, which called Instagram “the most effective tool” used by the Internet Research Agency.

Instagram will also warn users who attempt to share a post that has previously been debunked. Before the post goes live, they’ll see a notice that fact checkers say it contains false information, with a link to more information. They can still opt to share the post with their followers, but it will appear with the “false information” label.

Instagram will warn users who share posts that have been debunked by fact checkers.

Image: Instagram

Instagram has been working with third-party fact checkers for some time, but until now the service handled misinformation far less aggressively than Facebook does.

While Facebook down-ranks debunked posts in its News Feed, Instagram hasn’t taken similar steps; it has instead focused on removing those posts from public-facing areas of the app, like hashtag pages and its Explore section.

Now, Instagram says it will act on posts in users’ feeds in an effort to help prevent false information from going viral.

“In addition to clearer labels, we’re also working to take faster action to prevent misinformation from going viral, especially given that quality reporting and fact-checking takes time,” the company writes. “In many countries, including in the US, if we have signals that a piece of content is false, we temporarily reduce its distribution pending review by third-party fact-checkers.”

The steps announced Monday are the most aggressive that Facebook has taken to reduce the spread of viral misinformation on Instagram.

The company has long downplayed the role Instagram played in 2016 election interference. Facebook previously told Congress that only 20 million Instagram users saw posts from the IRA, though experts have long warned the numbers were likely much higher. The Senate report released earlier this month revealed that the top two most popular IRA-run Instagram accounts alone generated more than 46 million interactions.

“On the basis of engagement and audience following measures, the Instagram social media platform was the most effective tool used by the IRA to conduct its information operations campaign,” the report said.

Instagram chief Adam Mosseri acknowledged on Twitter last week that Instagram was “still playing catch up” in its fact-checking efforts.

Facebook offers special protections for election-related accounts

Facebook wants to protect elected officials from the dangers of Facebook. 

On Monday, the social media giant announced a new service designed to further secure Facebook accounts affiliated with election campaigns. Dubbed Facebook Protect, the program aims to help current or would-be elected officials — along with any of their staff — follow cybersecurity best practices and avoid getting hacked.

The program, of course, is voluntary. However, that doesn’t mean there isn’t a long list of people who Facebook suggests should hop on this security train. 

Specifically, Facebook says the service is open to “candidates, elected officials, federal and state departments and agencies, and party committees, as well as their staff.”

With email hacks and other cybersecurity failures playing a major role in the 2016 U.S. presidential election, and reports that social media-directed interference in U.S. elections has only grown in the intervening years, this promise of added security from Facebook is a welcome one.

“By enrolling, we’ll help these accounts (1) adopt stronger account security protections, like two-factor authentication, and (2) monitor for potential hacking threats,” explains the launch page. 

Lock it down.

Image: screenshot / facebook

The offering calls to mind Google’s Advanced Protection Program, which permits anyone who feels their account may be the target of sophisticated hackers to enable an additional layer of digital security protections. Facebook Protect, at least from the outside, seems like a slightly watered-down version.

Regardless, it’s definitely better than nothing — and late is better than never. 

Facebook correctly cops to the fact that nothing, not even Protect, will 100 percent secure your account from a dedicated hacker. Instead, insists the company, Protect throws additional roadblocks in their way. 

“While we may never be able to catch every bad actor,” warns Facebook, “this program is one of several steps we’re taking to make it harder for account compromises to occur.”

Importantly, accounts have to enroll in Facebook Protect — it’s not automatic. To do so, eligible accounts (with eligibility determined by Facebook) fill out a form and then follow the requisite steps.

It should be noted that Facebook Protect, while an overdue addition to the social media security landscape, still fails to protect election campaigns from one serious risk specific to Facebook: the company’s own problematic policies.

But hey, you have to start somewhere. 

Facebook will ban ads that discourage people from voting

Lying in Facebook political ads is OK — as long as the lie doesn’t infringe on people’s right to cast their vote.

During a conference call Monday in which Facebook detailed its latest efforts to bolster election integrity and stop the spread of misinformation, Mark Zuckerberg announced some new measures the company is taking to fight voter suppression. 

Voter suppression is a term that describes efforts to prevent people from voting by spreading anti-voting sentiment, sharing incorrect information about how to vote, and even undermining get-out-the-vote efforts and voting infrastructure.

Now, Facebook will outright prohibit ads that discourage people from voting. For example, Facebook wouldn’t allow someone to publish an ad that suggests that voting is pointless.

Facebook expanded its policies around voter suppression content ahead of the 2018 U.S. midterms. That included prohibiting content that spread false information about how and when to vote, misstated voter qualifications (such as misleading I.D. requirements), or suggested violent or race-based retribution for voting. Now, the new policy specifically addresses anti-voting sentiment in paid ads.

Facebook also says it is proactively removing this content, and preventing it from being posted, before people report it: “Our Elections Operations Center removed more than 45,000 pieces of content that violated these policies — more than 90% of which our systems detected before anyone reported the content to us,” the blog post explaining the change reads.

During the question and answer portion of the call, Zuckerberg addressed how the new policy would work in practice. For example, recent reports detailed that Facebook would allow politicians to run ads that contain false information — a stance Zuckerberg repeatedly defended on the call on the basis of free political speech. Reporters asked: if a politician ran an ad that contained false information about voting, which policy would take precedence?

Zuckerberg answered that the anti-voter suppression rules would win out.

“The voter suppression rules would be paramount in that case,” Zuckerberg said. “We give very broad deference to political speech… but it’s not everything.”

Apparently, it is possible for a politician to cross a line. 

Russian trolls on Instagram focus on Joe Biden

As the 2020 election heats up, so, too, do the trolling and interference efforts. And we already know one of the biggest targets so far: former Vice President Joe Biden. 

The information comes as Facebook announced on Monday that it removed dozens of accounts and pages for “coordinated inauthentic behavior.” The accounts were spread across four networks: three based in Iran and one in Russia.

The Russian network showed some telling links to our old friends at the Russia-based Internet Research Agency (IRA). It was spreading disinformation via 50 Instagram accounts and one Facebook account. And, according to a firm that worked with Facebook, Biden was their top target.

Ben Nimmo, director of investigations for social media analysis firm Graphika, told CNN that it “looked like there was a systematic focus on attacking Biden from both sides.”

A pro-Trump, anti-Biden post flagged as part of a Russian troll campaign

Image: Graphika / Screenshot

According to Graphika, a number of the accounts showed support on the Democratic side for Sen. Bernie Sanders. 

One of the ads attacking Biden from the left, generated by a Russian troll farm in an attempt to affect the 2020 election

Image: Graphika / Screenshot

Nimmo also noted Democratic candidates Elizabeth Warren and Kamala Harris were targeted, but more from a “character building” perspective, in which the fake accounts showed support for those candidates. 

The entire report, available here in PDF, is eye-opening for plenty of other details, like the similarity between the memes used by these accounts and ones the IRA used in 2016. In fact, the overlap between the 2016 efforts and those of these now-banned accounts led Graphika to name the effort “IRACopyPasta.”

And even though the operation was carried out almost exclusively on Instagram, the campaign heavily utilized screenshots of tweets.