Mark Zuckerberg is here to save us from Mark Zuckerberg.
On Tuesday, just four days after the Facebook CEO announced his intention to revamp the News Feed in favor of “high quality content,” we were gifted a sneak peek at the means by which he will deliver us from the scourge of so-called fake news.
It takes the form of a survey, and, sadly, we regret to inform you that things aren’t looking so good.
In a 464-word decree, the Zuck promised his disciples that the power to decide what is right and true shall henceforth be in their hands. You see, it will be up to them — the very same people who believed the Pope endorsed Donald Trump — to determine what news sources are to be trusted.
There are real stakes here, as publications that Facebook deems “trustworthy” will be prioritized on the site.
And just how are Facebook users going to communicate their well-informed and totally based-in-reality opinions about, say, InfoWars, to the Facebook product teams? Why, by filling out a 2-question survey.
Published by BuzzFeed News (and confirmed to Mashable by a Facebook spokesperson as authentic), the survey is perhaps meant to inspire confidence in its simplicity.
Does it succeed in that aim? We’ll let you be the judge.
Here is the survey that could profoundly alter the news landscape for 2 billion people, in its entirety:
“Do you recognize the following websites?” (Yes/No)
“How much do you trust each of these domains?” (Entirely/A lot/Somewhat/Barely/Not at all).
And there you have it. The two questions that, like some sort of protective incantation, are to be asked over and over again to credulous Facebook denizens across the land. They are meant to help save us from the blight of misinformation, and the associated illnesses that come with it.
Importantly, not everyone will get a chance to weigh in. Facebook plans to survey a random sample of users, and believes that its methodology will withstand attempts by ideologically biased individuals to manipulate the process.
Which, if the company is half as successful at that as it was at stopping the spread of “fake news” in the lead-up to the 2016 presidential election, should leave us in good hands. Oh, wait.
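For illustration only: Facebook has not said how it actually scores these responses, and everything below is a hypothetical sketch. But taking the two published questions at face value, a minimal aggregation might map the answer choices onto a numeric scale and average the ratings from respondents who recognize the domain:

```python
from statistics import mean

# Hypothetical scoring only: Facebook has not published its methodology.
# This sketch assumes a 0-4 mapping of the published answer choices and
# that ratings from respondents who don't recognize a domain are discarded.
TRUST_SCALE = {"Not at all": 0, "Barely": 1, "Somewhat": 2, "A lot": 3, "Entirely": 4}

def trust_score(responses):
    """responses: list of (recognizes_domain: bool, rating: str) tuples.

    Returns a normalized 0..1 trust score, or None if no respondent
    recognized the domain.
    """
    ratings = [TRUST_SCALE[rating] for known, rating in responses if known]
    if not ratings:
        return None
    return mean(ratings) / 4  # normalize the 0-4 scale to 0..1

sample = [(True, "Somewhat"), (True, "A lot"), (False, "Not at all")]
print(trust_score(sample))  # 0.625
```

Even this toy version makes the obvious weakness visible: the score is only as good as the sample answering it, which is precisely the concern the survey raises.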
January 23, 2018 / Mark Zuckerberg’s answer to ‘fake news’ is this garbage 2-question survey
With each passing day comes yet another reason to question the notion that the long arc of the universe bends toward justice. However, this year, in particular, has made it resoundingly clear that — regardless of the direction of that arc — the process by which it bends manifests in stuttering jolts and fits. Things seem one way to many people, until, for whatever reason, all of a sudden everyone realizes they’re not.
It is a similar reckoning that has befallen the do-no-wrong darling of the tech industry: social media. Long heralded by its proponents as a digital panacea for our fractured world, services like Facebook and Twitter have instead come to both represent and fuel our darker natures.
And, over the course of 2017, we’ve finally started to realize it.
While for many Americans, naming “complicit” the word of the year was a sadly fitting choice, those in Silicon Valley have found themselves uttering another term likewise befitting a collective fall from grace: disbelief. Disbelief that their once loved platforms have, like a late-night Cinderella, transformed from the belle of the ball into an unwanted stepchild. Disbelief that social media, once the shiny and seemingly unassailable promise of bringing us together, has become a pernicious network of disinformation tearing us apart, working its way into our lives not like a cure but a cancer. It’s rotting us, and the country along with it, from the inside out.
But it’s not like no one warned them. People did. It’s just that, sadly, the Mark Zuckerbergs of the world were too busy staging photo ops with their data serfs to stop and listen to the concerns.
The world took notice.
Trump, the Russians, and ‘fake news’
Perhaps the single most headline-grabbing truth of social media to be revealed over the course of 2017 was just how much of a role it played in electing Donald Trump. Initially brushed off as a “pretty crazy idea,” the fact that platforms like Facebook distributed misinformation on a massive scale in the lead-up to and following the 2016 presidential election is now widely accepted. And while the troubling application of the service to spread so-called “fake news” was not limited to the U.S., it was there that it first so prominently reared its multi-pronged head.
That a Russian troll farm was easily able to weaponize social media to its ends was not lost on Americans, or many of their elected officials, and calls for regulation moved into the mainstream. Republican Sen. John Kennedy of Louisiana went so far as to tell Facebook’s general counsel that “your power scares me.”
That power, of course, is not limited to Facebook. Twitter, too, struggled and continues to struggle with bad actors using its platform in ways that would likely upset the average Tom, Dick, or Harry. Just recently the company identified 36,000 bots and 2,752 accounts reportedly controlled by individuals tied to the Russian government which operated in the lead-up to the 2016 presidential election. At least one of these accounts, @Jenn_Abrams, was apparently so convincing that it was quoted by Mashable, The Washington Post, BuzzFeed, CNN, and The New York Times.
Instagram, which, of course, is owned by Facebook, didn’t escape this mess unscathed either. Russian-backed ads seemingly designed to influence the 2016 presidential election were also deployed on the service more associated with pug pics than Putin.
Taken as a whole, this worked to poison an already toxic political discourse, and pushed people even further into their rapidly collapsing reality bubbles. Sadly, it’s not getting better any time soon.
Racism, sexism, and all the other rot
As unpleasant as it may be to admit, those who maliciously abuse the online services finely tuned by the likes of Facebook and Twitter to monopolize our attention aren’t always directed by foreign governments looking to sow discord. Rather, a lot of the garbage found these days on social media originates much closer to home.
Surprising exactly no one, it turns out the United States specifically, and the world in general, is full of racist and misogynistic assholes. And, well, they have thrived on social media. Putting aside platforms like Gab, which seem explicitly designed to provide a platform for hate speech, it’s getting harder and harder to dip a toe into the online pool without acquiring some sort of associated stink.
Twitter, in particular, has morphed into such a teeming mass of harassment that the company was forced to release a roadmap laying out the steps it plans to take to curb abuse. And sure, better reporting mechanisms are a good thing, but that’s like offering an improved bandage while the patient bleeds out.
But it’s not just the racists souring social media for the rest of us — at least one company has demonstrated itself as, at times, complicit (there’s that word again) in the poisoning of its well. Facebook, for example, positioned itself to directly profit off discrimination. In 2016, a ProPublica investigation revealed that the advertising giant was allowing advertisers to exclude users based on race. Don’t want to show housing ads to African Americans? Facebook had you covered. The company promised to change the system to safeguard against abuse in February, but the fixes didn’t work. Facebook temporarily stopped offering the feature in November after it was called out yet again.
Not bad enough? Facebook also allowed advertisers to pay for ads targeting groups like “Jew haters” and people who were “interested in” shockingly repugnant statements like “Hitler did nothing wrong.” When confronted with this, Facebook COO Sheryl Sandberg understandably denounced it, but by that point the algorithmically driven racism had already left the ad-sales barn.
This has all been roundly condemned, and the social media giants of the world promise to do better, but there are only so many times you can tell someone “that’s not who I really am” before they start to see through the nicely packaged facade. And, over the course of 2017, Americans specifically, and the world in general, have started to do just that.
Your privacy and life as a lab rat
While the notions of privacy and social media seem inherently at odds, there are a few basic lines that people don’t want crossed. Social media companies, for their part, seem to only pay lip service to the few lines they’re even willing to acknowledge exist.
Facebook in particular isn’t content with just knowing what you do while using one of its many properties, and has long collected information about you while you browse the open web. This is all in service of building more complete profiles on its users in order to better target them with ads.
Unsurprisingly, even those who still use the service are starting to revolt. Some are convinced that Facebook uses the microphones on their computers and phones to listen in on their conversations to better serve them ads (Facebook denies this), and have taken active and elaborate steps to fight back. Others have started employing online tools in an attempt to peel back the company’s curtain and see just how much it knows about them. Surprise, it’s a lot.
And what companies like Facebook do with this information is extremely upsetting. No one likes to think they are being experimented on, and yet your friendly Menlo Park engineers have done just that. It was revealed in 2014 that the company ran a study to see if it could alter people’s moods by showing them a disproportionate number of uplifting or downer statuses in their news feeds. Basically, someone at Facebook thought it would be interesting to mess with people’s emotional states (for science!) and so the company went ahead and did it.
The indictment, however, is broader than just the Facebook- and Twitter-specific critiques. A new study suggests that compulsively checking social media during a disaster — a time when, at least theoretically, getting rapid updates could be helpful — can cause psychological distress. This suggests that even if the purveyors of our digital fix were invested in our well-being, their main cure would have to be shutting their own doors.
Kicking the habit ain’t easy
Still, simply knowing something is bad for you — and even disliking it for that — isn’t always enough of a reason to drop it. Addiction is a powerful thing, and the dopamine generated by compulsively checking social media has become this country’s preferred high.
But even Facebook’s founding president, Sean Parker, has some regrets. He noted in a November interview that the conscious intention of the company’s founders was to get people essentially hooked.
“The inventors, creators — it’s me, it’s Mark [Zuckerberg], it’s Kevin Systrom at Instagram … it’s all these people — understood this consciously, and we did it anyway,” he explained. “God only knows what it’s doing to our children’s brains,” he added.
So where does this leave us? Despite all the evidence that social media is both bad for us individually and collectively, we show no signs of cutting back. The number of Twitter monthly active users has tapered off to around 330 million, and Facebook’s monthly user base continues to grow — hitting 2 billion this year. Instagram, meanwhile, has reached 800 million MAUs and shows no signs of stopping its growth.
If anything, these numbers demonstrate one of the wonderfully confusing things about being human — that we can hold something dear while simultaneously despising it. There is some hope, however. If 2017 was the year we realized our addiction was killing us and turned against social media as a result, perhaps 2018 will be the year we finally kick it.
December 9, 2017 / This was the year we turned on social media
But that argument overlooks one key point: In showing microtargeted “dark ads” to users, Facebook was doing exactly what it was designed to do. The larger problem is not these specific Russian ads (which Facebook refuses to disclose to the public) — or even that Donald Trump was elected president — but the very system upon which the company is built.
Mark Zuckerberg’s plan to increase transparency on political advertisements, while welcome, falls into the same trap. Yes, more disclosure is good, but what is the remedy when the underlying architecture itself is gangrenous?
Zeynep Tufekci, author of Twitter and Tear Gas and associate professor at the University of North Carolina at Chapel Hill, made this point painfully clear in a September TED Talk that dove into the way the same algorithms designed to better serve us ads on platforms like Facebook have the ability to be deployed for much darker purposes.
“So Facebook’s market capitalization is approaching half a trillion dollars,” Tufekci told the gathered crowd. “It’s because it works great as a persuasion architecture. But the structure of that architecture is the same whether you’re selling shoes or whether you’re selling politics. The algorithms do not know the difference. The same algorithms set loose upon us to make us more pliable for ads are also organizing our political, personal and social information flows, and that’s what’s got to change.”
Tufekci further argued that when machine learning comes into play, humans can lose track of exactly how algorithms work their magic. And, she continued, not fully understanding how the system works has potentially scary consequences — like advertising Vegas trips to people about to enter a manic phase.
This concern is real. Facebook can now infer all kinds of data about its users — from their political views, to religious affiliations, to intelligence, and much more. What happens when that power is made available to anyone with a small advertising budget? Or, worse, an oppressive government?
“Imagine what a state can do with the immense amount of data it has on its citizens,” noted Tufekci. “China is already using face detection technology to identify and arrest people. And here’s the tragedy: we’re building this infrastructure of surveillance authoritarianism merely to get people to click on ads.”
Facebook bills itself as a company striving to bring “the world closer together,” but the truth of the matter is far different. It is, of course, a system designed to collect an endless amount of data on its users with the goal of nudging us toward whatever behavior the company believes is in its best interest — be that purchasing an advertised item, voting, or being in a particular mood.
That’s a fundamental problem that cuts to Facebook’s very core, and it’s not one that a new political ad disclosure policy will fix.
October 27, 2017 / Russian ads aren’t really the problem, Facebook’s opaque algorithms are
On Tuesday, the company announced a series of reforms aimed at disclosing more information about its ads. This new policy followed reports that the social media behemoth’s own tools were used by Russia-linked groups in an attempt to influence the 2016 presidential election, and represents a good first step toward cleaning up Twitter’s ecosystem.
And while the moves may indeed be that much needed step in the right direction, they alone will not end the disinformation that seems to thrive on Twitter.
What’s more, Twitter’s policies are just that: policies. The company could, at any time, roll these changes back. And that’s a problem. Preventing a repeat of the still not fully understood 2016 Russian misinformation campaign is going to require both better defenses against troll farms and some form of ad regulation — which is, not coincidentally, currently under consideration in Congress as the Honest Ads Act.
This thought is at least partially shared by Democratic Rep. Adam Schiff, who issued a statement that applauded Twitter’s efforts but acknowledged there is still much work to be done.
“Transparency in advertising alone, however, is not a solution to the deployment of bots that amplify fake or misleading content or to the successful efforts of online trolls to promote divisive messages,” reads the statement.
“Next week the Intelligence Committee will hold an open hearing with representatives from Twitter, Facebook, and Google to probe Russia’s use of social media platforms to disseminate propaganda, a hearing that I hope will expose more to the public about Russia’s pernicious campaign to influence U.S. political processes in 2016 and begin to identify ways we can combat it in the future.”
Legislators seem pretty aligned on Twitter’s new political ad transparency: a good step, but doesn’t make them immune to further regulation
Even if the company’s efforts fall short in Schiff’s mind, it’s important to give credit where credit is due. Twitter has promised to launch what it’s calling a “Transparency Center” that will — surprise — attempt to bring transparency to the ads that run on the platform.
It will show “all ads that are currently running on Twitter,” the company said, as well as how long they’ve been running, the “ad creative associated with those campaigns,” and which ads are targeted at you.
Taking it a step further, Twitter will also specifically note which ads it deems to be some form of “electioneering.” And just what does that mean?
“Electioneering ads are those that refer to a clearly identified candidate (or party associated with that candidate) for any elected office,” explained Twitter. “To make it clear when you are seeing or engaging with an electioneering ad, we will now require that electioneering advertisers identify their campaigns as such. We will also change the look and feel of these ads and include a visual political ad indicator.”
Basically, Twitter sees which way the political winds are blowing, and is trying to get its ad-house in order on its own terms before it’s forced to do so by the U.S. government. Unfortunately for both the company and the American people, this move may fall under the particularly sad category of “too little, too late.”
Because while the steps announced today are important, they’re not enough. Twitter has repeatedly promised to improve on countless fronts — from targeted abuse to proliferating bots — and yet over the platform’s 11 years those problems have, if anything, only gotten worse. When it comes to the documented misuse of its ad platform for political purposes, it’s past time for mandated disclosure backed by the force of law. A Transparency Center, while nice, just isn’t going to cut it.
October 24, 2017 / Twitter’s ad ‘Transparency Center’ is a good first step, but doesn’t solve the problem
Twitter has deleted tweets that could be helpful to investigators currently examining Russia’s suspected manipulation of the social network during the 2016 presidential election, U.S. government cybersecurity officials told Politico.
According to the officials, Twitter is either unable or unwilling to retrieve a “substantial amount” of tweets from bots and fake users spreading disinformation. Those users, who have been tied to Russia, have since deleted the tweets in question.
It turns out, that’s how Twitter is supposed to work. Twitter’s guidelines for law enforcement merely state, “Content deleted by account holders (e.g., tweets) is generally not available.”
A Twitter spokesperson told Mashable that Twitter has “strong policies in place to protect the privacy of our users.” The company declined to comment on the specific deletion policy.
Historically, Twitter has been accused of being less than fully forthcoming with federal investigators. At its recent Senate briefing, Virginia senator Mark Warner, the ranking Democrat on the U.S. Senate Intelligence Committee, called the company’s presentation “frankly inadequate on every level.”
Twitter sees things differently: “We have committed to working with committee investigators to address their questions to the best of our ability,” a company spokesperson told Mashable.
The company declined to comment on whether it is attempting to retrieve the deleted tweets, or whether it will present them to investigators if retrieved.
With access to all of the tweets from those accounts, the investigators might be better able to construct a timeline of events and figure out the account holders’ goals. But, depending on Twitter’s ability to reconstruct its own past, those tweets may be gone forever.
October 13, 2017 / Report: Twitter deleted tweets related to the Russian investigation