
Facebook on how it affects your mental health: It’s you, not them

Facebook is a symbol of one of the great debates of the 21st century: Is social media a gift to humanity, or is it a curse that drives us further apart and deeper into our own ideological echo chambers? 

There is no simple answer to that question, which is why it frequently becomes a cultural obsession, as it did this week when a video surfaced of a former Facebook executive decrying the negative effects of social media.

Now Facebook is joining the conversation with a lengthy blog post about its efforts to understand how the social media platform affects users’ well-being. The bottom line is that whether or not social media makes us miserable seems to depend on how we use it, say Facebook’s David Ginsberg, director of research, and Moira Burke, a research scientist. 

“According to the research, it really comes down to how you use the technology.”

“According to the research, it really comes down to how you use the technology,” write Ginsberg and Burke. “For example, on social media, you can passively scroll through posts, much like watching TV, or actively interact with friends — messaging and commenting on each other’s posts.”

Passively consuming social media has been linked to negative effects, whereas active engagement may boost well-being, say Ginsberg and Burke. (It’s worth noting that more engaged users are likely more valuable to Facebook’s advertising business.)

That draws a fascinating line between Facebook and critics who argue that social media can have a poisonous effect on people’s self-esteem, their relationships, and their ability to consume and reflect on the news. Facebook’s position seems to be that those unpleasant experiences aren’t caused directly by its product, but by how people engage with the platform. 

That’s a much more optimistic view of social media than what Chamath Palihapitiya, the company’s former vice president for user growth, shared with an audience at Stanford Graduate School of Business last month. 

“The short-term, dopamine-driven feedback loops we’ve created are destroying how society works,” Palihapitiya said, describing the habit-forming nature of online interactions (think the rush of receiving comments, likes, and hearts on your social media posts).

After these comments became widely publicized this week, he used a Facebook post to clarify that he believes the company is “a force for good in the world.” 

“Facebook has made tremendous strides in coming to terms with its unforeseen influence and, more so than any of its peers, the team there has taken real steps to course correct,” he wrote. 

But Palihapitiya is not the only one alarmed by the way social media influences our behavior. Last month, Sean Parker, Facebook’s founding president, said the company is “exploiting a vulnerability in human psychology” that primes humans to crave validation. 

“Facebook has made tremendous strides in coming to terms with its unforeseen influence.”

Ginsberg and Burke don’t name Palihapitiya or Parker. They do reference scientific studies that are both flattering and unfavorable to Facebook. The company’s own research, conducted in partnership with a Carnegie Mellon University psychologist, found that users who sent or received more messages, comments, and posts on their profiles said their feelings of depression and loneliness improved. But in another experiment, students randomly assigned to read Facebook for 10 minutes were in a worse mood by the end of the day than those who posted or talked to friends on Facebook. Other research suggests screen time, including social media, takes a toll on teens’ health.

Negative effects, say Ginsberg and Burke, might be related to the uncomfortable experience of reading about others and comparing yourself negatively to them. Time spent on social media and on the internet might also reduce in-person socialization, which can lead to feelings of isolation. 

Though Facebook has previously commented on its own well-being research, the blog post offers a candid discussion of the negative aspects of social media, along with details about the company’s efforts to understand those dynamics. 


The post doesn’t contain unexpected revelations, but it does include insights about how Facebook views its controversial role in mediating hundreds, if not thousands, of small moments in a person’s everyday life.

Ginsberg and Burke write that the company has already made significant changes to News Feed by demoting clickbait and false news, optimizing ranking so posts from close friends are more likely to show up first, and promoting posts that are “personally informative.” The blog post also announces the launch of Snooze, a feature people can use to tune out a friend’s posts for 30 days without having to permanently unfollow or unfriend them. 

Ginsberg and Burke add that Facebook will continue to research well-being and make new efforts to understand “digital distraction.” It will also put on a summit next year with academics and industry leaders to “tackle” these complex issues.

While the public waits for Facebook, and the broader tech and research communities, to solve this riddle, Ginsberg and Burke touch on a sensitive subject: personal responsibility. Their focus on how the effects of social media change depending on a user’s style of engagement — mindless scrolling versus active participation — hints at the possibility that users may need to be more aware of (and adapt) their behavior if they want to feel better.

That might be hard, though, for users who count on being able to choose a thumbs-up or heart and move on with their lives. 


Facebook doesn’t scan Messenger for fake news. But it definitely should

Facebook has been cracking down on fake news, but it’s been missing one very important venue for conversation.

Messenger, the standalone app for Facebook’s private messaging feature, is not being included in the company’s fact-checking program. 

In March, Facebook recruited a team of third-party fact-checkers to flag fake news articles that were being shared on its network. It was part of an increased effort to limit the reach of misinformation on the site in the wake of Russian interference in the 2016 presidential election.

Fake news articles now receive a disputed tag when they are posted to the Facebook News Feed. But that’s not the case if a user sends the same article to an individual or group through Messenger. 

However, one Facebook user claims they recently experienced the opposite. After sharing a Breitbart story with someone via Messenger, the person says they received a notification that read, “A link you shared contains info disputed by Politifact,” meaning the story they shared had been flagged as fake news.

Image: screenshot

And yet, the individual told Mashable they had not posted or even tried to share the article to their News Feed. (The individual requested anonymity due to the polarizing nature of the story.)

Mashable reached out to Facebook for comment, and a company spokesperson told us the person must have experienced a bug. If true, this is a case where a bug has revealed an inherent flaw in Facebook’s fact-checking system.

If Facebook were actually committed to curbing fake news on its platform, it would address the problem in all areas of the site, not only the Facebook News Feed. According to the company, more than 1.3 billion people use Messenger every month, and we know at least some fake news articles have been shared on it.

One reason the company may not scan private Messenger conversations for fake news could be that it doesn’t want to appear to be “creepy.” For example, Facebook drew criticism when it initially announced it would use WhatsApp data to inform the company’s expansive ad network (even though it wasn’t pulling information from private messages). Facebook has repeatedly insisted it does not scan private conversations for advertising. 

For the most part, the private areas of Facebook’s products, which include services like Messenger and WhatsApp, remain untouched. An exception is that Facebook uses automated tools like PhotoDNA to scan for child exploitation images shared within Messenger. But there is no system currently used to detect fake news within Messenger or WhatsApp. 

One of Facebook’s fact-checking partners, Poynter, recently explored how fighting fake news on WhatsApp remained difficult due to the closed nature of the network. 

“WhatsApp was designed to keep people’s information secure and private, so no one is able to access the contents of people’s messages,” said WhatsApp’s policy communications lead Carl Woog in an email to Poynter. “We recognize that there is a false news challenge, and we’re thinking through ways we can continue to keep WhatsApp safe.”

Of course, that could change in the future. 

A Facebook spokesperson told Mashable that the company is working on new and more effective ways to fight false news stories on all of its apps and services. But until then, it appears that Facebook users will have to do their own fact-checking on Messenger.


Zuck says ad transparency regulation would be ‘very good if it’s done well’


Despite Facebook’s effort to rapidly self-regulate in the wake of Russia’s use of Facebook ads to interfere in the U.S. election, CEO Mark Zuckerberg says he would support government regulation if it’s implemented properly. Meanwhile, Facebook raised its expense estimates for 2018 to fund security hiring.

“We’re working with Congress on legislation to make advertising more transparent. I think this would be very good if it’s done well,” Zuckerberg said on Facebook’s Q3 2017 earnings call. “And even without legislation, we’re already moving forward on our own to bring advertising on Facebook to an even higher standard of transparency than ads on TV or other media.”

Zuckerberg started the call fuming, declaring that “I’ve expressed how upset I am that Russians used our tools to sow distrust . . . What they did is wrong, and we’re not going to stand for it.” He noted that when Facebook focuses on something it gets it done, even if it takes time and mistakes along the way, and he’s throwing the weight of the company behind the security effort. He even invoked the way Facebook demolished Google+ and Snapchat, saying “We’re bringing the same intensity to these security issues that we brought to any adversary or challenge that we’ve faced.”

This position still didn’t lead Zuckerberg to show up to today’s and yesterday’s congressional hearings, where Facebook, Twitter, and Google’s general counsels were grilled about Russian election interference. Several members of Congress requested that the CEOs show up next time.

The comments come after Zuckerberg wrote in today’s earnings release that “We’re serious about preventing abuse on our platforms. We’re investing so much in security that it will impact our profitability. Protecting our community is more important than maximizing our profits.”

Example of a Facebook ad bought by Russian trolls to divide U.S. voters

Specifically, CFO David Wehner says Facebook plans for expenses to grow 45 percent to 60 percent in 2018 as the company invests in better security to thwart Russian election attackers, more content for its Watch tab of original video, and research for its long-term bets on artificial intelligence, Oculus, and augmented reality. That cash will go toward hiring 10,000 more content and ads moderators (though those won’t all necessarily be full-time employees), doubling its security engineering force, and developing new AI to weed out bad actors.

COO Sheryl Sandberg said Facebook will stand by its policy of allowing issue-related ads to be served because of its support for free speech, but she said the company wants to elevate the quality of discourse on the platform.

Wehner also acknowledged that Facebook has increased its estimate of false accounts from 1 percent of monthly active users last quarter to 2 percent to 3 percent this quarter, or 41 million to 62 million monthly active accounts. That’s in part because Facebook said it started using a new technology to calculate these estimates, and because of a spike in false account creation in Vietnam and Indonesia. Facebook said the new estimation technology is also why it now pegs duplicate accounts at 10 percent of monthly active users, or about 200 million, versus 6 percent last quarter.
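
For a rough sense of how those percentages map onto user counts, here is a minimal back-of-the-envelope sketch. It assumes a base of roughly 2.07 billion monthly active users, Facebook's reported figure for Q3 2017, which is not stated in the article itself:

```python
# Back-of-the-envelope check of the false/duplicate account figures.
# Assumes ~2.07 billion monthly active users (Facebook's reported Q3 2017 MAU);
# that base figure is an assumption not given in the article.
mau = 2.07e9

false_low = 0.02 * mau   # 2% of MAU estimated to be false accounts
false_high = 0.03 * mau  # 3% of MAU estimated to be false accounts
duplicates = 0.10 * mau  # 10% of MAU estimated to be duplicate accounts

print(f"False accounts: {false_low / 1e6:.0f}M to {false_high / 1e6:.0f}M")  # ~41M to ~62M
print(f"Duplicate accounts: ~{duplicates / 1e6:.0f}M")                       # ~207M, reported as ~200M
```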

Overall, given Zuckerberg’s comments and the increased expense estimates, Facebook seems to be taking the Russian security and ad transparency issues extremely seriously. Though it might seem like this is a prioritization of security over profits, in the long term Facebook must be a safe platform for legitimate discussion if it’s to maintain its place atop the social networking hill.

Russian ads aren’t really the problem, Facebook’s opaque algorithms are

Pay attention to the algorithm behind the curtain.

Image: Justin Sullivan/Getty Images

Much noise has rightly been made about the role Facebook played in the 2016 presidential election. Critics have pointed to a targeted ad campaign by Russian groups as proof that the Menlo Park-based company wasn’t minding the store — and alleged that disaster followed as a result. 

But that argument overlooks one key point: In showing microtargeted “dark ads” to users, Facebook was doing exactly what it was designed to do. The larger problem is not these specific Russian ads (which Facebook refuses to disclose to the public) — or even that Donald Trump was elected president — but the very system upon which the company is built. 

Mark Zuckerberg’s plan to increase transparency on political advertisements, while welcome, falls into the same trap. Yes, more disclosure is good, but what is the remedy when the underlying architecture itself is gangrenous? 

Zeynep Tufekci, author of Twitter and Tear Gas and associate professor at the University of North Carolina at Chapel Hill, made this point painfully clear in a September TED Talk that dove into how the same algorithms designed to serve us better-targeted ads on platforms like Facebook can be deployed for much darker purposes.

“So Facebook’s market capitalization is approaching half a trillion dollars,” Tufekci told the gathered crowd. “It’s because it works great as a persuasion architecture. But the structure of that architecture is the same whether you’re selling shoes or whether you’re selling politics. The algorithms do not know the difference. The same algorithms set loose upon us to make us more pliable for ads are also organizing our political, personal and social information flows, and that’s what’s got to change.” 

Tufekci further argued that when machine learning comes into play, humans can lose track of exactly how algorithms work their magic. And, she continued, not fully understanding how the system works has potentially scary consequences — like advertising Vegas trips to people about to enter a manic phase.

This concern is real. Facebook can now infer all kinds of data about its users — from their political views, to religious affiliations, to intelligence, and much more. What happens when that power is made available to anyone with a small advertising budget? Or, worse, an oppressive government?

“Imagine what a state can do with the immense amount of data it has on its citizens,” noted Tufekci. “China is already using face detection technology to identify and arrest people. And here’s the tragedy: we’re building this infrastructure of surveillance authoritarianism merely to get people to click on ads.”

Facebook bills itself as a company striving to bring “the world closer together,” but the truth of the matter is far different. It is, of course, a system designed to collect an endless amount of data on its users with the goal of nudging us toward whatever behavior the company believes is in its best interest — be that purchasing an advertised item, voting, or being in a particular mood.

That’s a fundamental problem that cuts to Facebook’s very core, and it’s not one that a new political ad disclosure policy will fix. 


Zuckerberg’s CZI donates to struggling towns near Facebook


Facebook’s success has led to gentrification and hardship in some towns close to its Menlo Park headquarters. So while the Chan Zuckerberg Initiative has committed more than $45 billion to solving health and education problems worldwide, today it’s strengthening its hyper-local philanthropy.

The new CZI Community Fund will provide grants of $25,000 to $100,000 to nonprofits and nonprofit- or municipality-backed organizations working on education, housing, homelessness, immigration, transportation, and workforce development in Belle Haven, East Palo Alto, North Fair Oaks, and Redwood City, California. For reference, the average rent in East Palo Alto, just two miles from Facebook HQ, went up 24 percent in the past year alone.

“The Bay Area is our home. We love our community and are so proud to be raising our two daughters here,” writes CZI co-founder Priscilla Chan, Mark Zuckerberg’s wife. “But listening to the stories from our local leaders and neighbors, there is still a lot of work to do.”

The CZI has already backed some local projects, including criminal justice reform in California, and put $5 million toward Landed, a Y Combinator startup that helps school teachers with down payments on homes in districts close to Facebook HQ. It also donated $3.1 million to Community Legal Services in East Palo Alto, which helps families impacted by the local housing shortage who need legal protection, in some cases from wrongful evictions. Plus, CZI put $500,000 into the Terner Center for Housing Innovation at UC Berkeley to develop long-term answers to the regional housing crisis.

Organizations seeking funding from the CZI Community Fund can apply before December 1. They’ll be evaluated on alignment with the fund’s mission, impact potential, leadership, collaboration with other organizations, community engagement, and fiscal responsibility, to ensure funds aren’t wasted on overhead.

Map showing Facebook’s headquarters circled in blue, and the four nearby towns supported by the CZI Community Fund

Back in 2014, TechCrunch advocated for more of this hyper-local philanthropy by tech companies. At the time, Google was helping to pay for free bus passes so kids could get to school, after-school programs, and work.

While tech giants can have global impact with scalable apps, the high salaries they pay can lead to rising housing and living prices in nearby areas. That’s fine for their employees, but can cause trouble for lower-income residents as well as the contractors these corporations employ to run their cafeterias or sweep their floors.

There are certainly worthy causes everywhere, and some in the developing world, like anti-malaria mosquito nets, can do a lot of good for a low price. But if tech companies want to be seen as good neighbors and offset the damage they do to nearby communities, they need to give back locally, not just globally.

Featured Image: Peter Barreras/Invision/AP