
How Facebook prioritizes privacy when you die


Should your parents be able to read your Facebook messages if you die? Facebook explained why it won’t let them in a post published today in its Hard Questions series about social networking after death.

Facebook admits it doesn’t have all the answers, but it has come up with some decent solutions in the form of what it calls Memorialized Profiles and a “Legacy Contact.” Once Facebook is informed that you’ve passed away, the word “Remembering” appears above your name on your profile and no one else can sign in to your account.

The Legacy Contact is a friend you select in your Manage Account Settings while you’re still alive, though they’re not informed until your profile is memorialized. They can pin a post atop your profile, change your profile pic, respond to friend requests or have your account removed. But Facebook explains they can’t log into your account, change or delete old posts, remove friends or read your messages.

Similarly, Facebook won’t allow parents or anyone else to read your messages after you die. That’s because “In a private conversation between two people, we assume that both people intended the messages to remain private,” writes Monika Bickert, Facebook’s Director of Global Policy Management. The Electronic Communications Privacy Act and Stored Communications Act may also prohibit it from sharing private communications even with parental consent.

Facebook also tries to minimize the emotional impact of losing a loved one by no longer sending birthday reminders about writing on their wall. But there are still plenty of opportunities for hurt feelings. Facebook’s On This Day feature and others can surface old content from when that person was still alive, creating an unexpected experience of having to think about their death.

The company has built features to enhance empathy with its users, allowing them to avoid unnecessarily seeing their exes on the app after a break-up. But it’s tough to know what will be a sweet nostalgic reminder and what will be a heart-wrenching spiral into the past.

What’s important is that Facebook is at least thinking and talking about these issues. Now at 2 billion users, Facebook has become a ubiquitous utility that impacts every phase of our lives. “There’s a deep sense of responsibility in every part of the company,” says Facebook CPO Chris Cox. “We’re getting to the scale where we have to get much better about understanding how the product has been used.”

Facebook downranks video clickbait and fake play buttons


Ever gotten tricked into clicking a fake play button on Facebook that opens a link instead of starting a video? I did, repeatedly, and wrote a story in 2014 titled “Yo Facebook, Ban Links With Fake Video Play Buttons”.

Now Facebook is doing just that. Today it started downranking the News Feed presence of links that display a fake play button in the preview image, as well as videos that are actually just a static image uploaded as a video file. Publishers who use these scammy tactics will see a major decrease in the distribution of these stories. Facebook won’t completely delete these posts, though, unless they violate its other policies.

Here are two examples of fake play buttons that spammers used to steal your clicks:

Facebook has prohibited the use of fake play buttons in advertisements under its policy against depicting non-existent functionality for a few years, News Feed Product Manager Greg Marra tells me. But the scourge has remained in the News Feed.

“We’ve heard from people who are frustrated by fake play buttons,” Marra says, hence today’s update. “Spammers are using these tactics to trick people into clicking links to low quality web pages.” Facebook tells me it’s now training its machine vision artificial intelligence to classify and detect fake play buttons in preview images.
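
To make that concrete, here is a minimal, purely illustrative sketch of the kind of binary image classifier such a system might train. This is not Facebook’s model; the class name, architecture and image size are all hypothetical.

```python
# Illustrative only: a tiny binary classifier of the kind that could be trained
# to flag preview images containing a fake play button. Not Facebook's model;
# the class name, architecture and image size are hypothetical.
import torch
import torch.nn as nn

class PlayButtonClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # single logit: does the image contain a fake play button?

    def forward(self, x):  # x: a batch of RGB preview images, shape (N, 3, H, W)
        return self.head(self.features(x).flatten(1))

model = PlayButtonClassifier()
dummy_batch = torch.randn(4, 3, 128, 128)   # stand-in for four preview images
probs = torch.sigmoid(model(dummy_batch))   # probability each image gets flagged
print(probs.squeeze(1).tolist())
```

A production system would be trained on labeled preview images and paired with human review, but the shape of the task is the same: image in, flag out.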

“While the prevalence is statistically low, the frustration expressed by people who use Facebook who encounter these deceptive practices is high,” a spokesperson tells me.

Facebook says that if publishers want to denote there’s a video behind a link, they should indicate that through Open Graph meta tags. They could also use words like “Watch” or “Video” in the headline or description.
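
For illustration, here is a sketch of what looking for those tags could involve. The sample markup, URL and parser below are invented for this example and are not Facebook’s crawler; og:type and the og:video properties are the relevant Open Graph tags.

```python
# Illustrative only: scan a page's HTML for the Open Graph tags that signal a
# real video sits behind the link. The sample markup and URL are placeholders,
# and this parser is not Facebook's crawler.
from html.parser import HTMLParser

SAMPLE_PAGE = """
<head>
  <meta property="og:type" content="video.other">
  <meta property="og:video" content="https://example.com/clip.mp4">
  <meta property="og:video:type" content="video/mp4">
</head>
"""

class OGVideoFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.video_tags = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        prop = attrs.get("property", "")
        if tag == "meta" and (prop == "og:type" or prop.startswith("og:video")):
            self.video_tags[prop] = attrs.get("content")

finder = OGVideoFinder()
finder.feed(SAMPLE_PAGE)
print(finder.video_tags)
# {'og:type': 'video.other', 'og:video': 'https://example.com/clip.mp4', 'og:video:type': 'video/mp4'}
```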

Fake video play buttons in News Feed link previews like the one on the left can mislead people into clicking out to ad-covered sites as shown on the right.

Facebook has had a similar problem with publishers looping pre-recorded videos and calling them live, or just putting up a computer graphic countdown and calling it Live. TechCrunch called on Facebook to ban these shenanigans back in January, and it cracked down on them in May.

There’s also been the issue of publishers putting fake Instant Articles “Lightning Bolt” icons on the preview images of links to non-Instant Articles on the standard web. That’s because people are more likely to click Instant Articles since they load faster.

Meanwhile, Facebook’s emphasis on video in News Feed has inspired the new menace of publishers uploading a static image as a video to get more eyeballs. These static image videos will be downranked too. Facebook is using a “motion scoring” system that detects movement inside a video to classify and demote these clips.
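
Facebook hasn’t published how its motion scoring works, but the basic idea can be sketched with a toy version: average how much pixels change from frame to frame, and treat a near-zero score as a sign the “video” is really a still image. The threshold, filename and scoring formula below are hypothetical illustrations, not Facebook’s implementation.

```python
# Illustrative only: a toy "motion score". It averages frame-to-frame pixel
# differences; a score near zero suggests a static image uploaded as a video.
# The threshold and filename are placeholders, not Facebook's actual system.
import cv2
import numpy as np

def motion_score(path: str) -> float:
    cap = cv2.VideoCapture(path)
    ok, prev = cap.read()
    if not ok:
        return 0.0
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    diffs = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diffs.append(float(np.mean(cv2.absdiff(gray, prev))))  # mean per-pixel change
        prev = gray
    cap.release()
    return float(np.mean(diffs)) if diffs else 0.0

score = motion_score("suspect_upload.mp4")  # placeholder filename
print("looks like a static image" if score < 1.0 else "real motion detected")
```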

Today’s changes come as part of a massive, multi-pronged attack on clickbait. Facebook now downranks headlines that are misleading or withhold information in many languages, shows fewer links overshared by spammers, works with outside fact checkers to demote false news, and now shows Related Articles with different angles to make people suspicious of exaggerated clickbait.

With each of these updates, Facebook chips away at the clickbait problem, leaving more room in the News Feed for legitimate content. Getting burned by trying to watch a video which is just endless minutes of the same image erodes trust in the News Feed, making people less likely to watch videos in the future.

By excising these annoying experiences, Facebook may get users to browse longer, view more videos from friends and publishers, and watch the lucrative video ads that fund its soaring profits.

Tech is not winning the battle against white supremacy

Content warning: This post contains racial slurs, homophobic language and very graphic depictions of racism and violence.

If you were just paying attention to press releases this week it’d be easy to believe that tech companies are winning the war on hate. Responding to the violence in Charlottesville, Mark Zuckerberg solemnly reflected that there is “no place for hate in our community.” Snapchat announced that hate speech “will never be tolerated” on its platform. YouTube reassured us that helpful tools are on the way. Tech companies fled Trump’s dual business councils to protest his claim that some white supremacists are “very fine people.”

In other headlines, a coalition of web providers made a controversial and unprecedented choice to yank their services out from under the Daily Stormer, a white supremacist news site. Days later, Cloudflare abandoned the site to the whims of whoever feels like DDoSing it. Those decisions, part of the “no platforming” philosophy which would deny hate speech purveyors a place to assemble and share their views, will likely have many reverberations in the days to come. For now, some things remain very much the same.

Unfortunately, while this week’s burst of industry energy might suggest otherwise, hate groups are alive and well, making little if any effort to conceal their presence on all of the major social networks. Whether it’s 4chan or Facebook, if you go looking for hate online, you’ll find it. Dredging up racist, anti-Semitic content, often in seeming violation of a company’s stated policy, takes seconds — trust me, I went looking.

On something like Facebook, hate festers just under the paper-thin layer between a user’s social sphere and the platform at large. On a network like Twitter, it’s right on the surface, bobbing unpleasantly along down the stream with dog photos and journalist chatter. For anyone surprised about the terrible events that unfolded in Charlottesville: You can find hate anywhere you look and you don’t have to look very hard.

I took a grim tour around some of the major social sites into which we sink our hours to see what just a little bit of casual searching could find — yet algorithms often can’t (or won’t). Again, this content is graphic and disturbing, but pretending it isn’t there won’t make it go away.

Facebook

On Facebook, white supremacist memes thrive, even in wide-open, public communities. Though plenty of hate just sits out in the open, some users skirt detection by using a kind of unsearchable, far-right code language. Facebook might pick up on the anti-Semitic slur “kike,” but by swapping that for “kayak” the content flies under the radar. I was surprised to see that surrounding words in multiple parentheses, also called an “echo,” remains common practice to denote something or someone as Jewish. These symbols were established as part of the shallowly submerged white supremacy lexicon more than a year ago.

References to 1488 also remain common. The 14 is a nod to the “14 words,” or “We must secure the existence of our people and a future for white children,” a popular mantra with white supremacists and white nationalists. The 88 stands for the eighth letter of the alphabet doubled, “HH,” shorthand for “heil Hitler.”

Small waves of white supremacist memes crest and fall, and much like Facebook’s fake news problem, each wave has another set right behind it and there are many oceans. When I spent some time looking through these communities this week, a particularly popular meme remixed the incredible violence of a counter-protester rolling off of a now infamous ash-gray Dodge Charger with a broad array of anti-black racist memes, some of them drawing from popular mainstream memes, like “the floor is” joke. Another pictured George Washington driving the Charger through the crowd.

One public community I found easily hosted a live stream of Saturday’s white supremacist rally in Charlottesville, the full video shot from the perspective of one of the torch-bearing attendees. It felt surprising that so much of this content was just sitting right out in the open on a social network that connects faces to names.

Following Charlottesville, Facebook cracked down, removing a slew of white supremacist and white nationalist pages. Among them: Right Winged Knight, Right Wing Death Squad, Awakening Red Pill, Physical Removal, Genuine Donald Trump, Awakened Masses, White Nationalists United, Vanguard America, Radical Agenda: Common Sense Extremism and the personal page of Chris Cantwell. Many, many others remain as Facebook continues to rely on users flagging content themselves — a deeply flawed method that’s proven far more effective as a tool for harassing LGBTQ users and black activists than ridding the platform of hate.

In his statement on Wednesday, Zuckerberg did not meaningfully clarify how Facebook will determine what stays on its platform and what goes. Though he noted that “when someone tries to silence others or attacks them based on who they are or what they believe, that hurts us all and is unacceptable,” it does not appear to be unacceptable on Facebook.

Asked how its policy might be evolving, Facebook told me that it does not tolerate hate speech or posts praising acts of violence or hate groups on its platform. This policy, like all policies, is open to interpretation and it’s possible that interpretation could shift further over time.

Reddit

In spite of Reddit’s mostly hands-off policy and reliance on subreddit-specific moderators, racism on Reddit often takes quirkier forms meant to avoid potential detection. In true Reddit style, overtly racist posts and comments are often played off as self-parodies, draping a thin layer of self-referential humor over what is usually just outright white supremacy. On one thread, users enthusiastically counted up from the number 1,488,000. On subreddits like /r/greentext, users post screencaps of posts from 4chan, host of some popular far right and white supremacist communities. They’re careful not to post links to 4chan itself and by screencapping they can avoid searchable text while still replicating most of the content.

In late 2015, Reddit rid itself of some popular openly white supremacist subcommunities like /r/coontown during a prominent sweep, but remarkably, pages like /r/blackpeoplehate live on. Reddit now classifies its most objectionable content as under “quarantine” and requires a verified email address to access it. Like YouTube, which took a similar approach of walling off some content, Reddit “will generate no revenue, including ads or Reddit Gold,” from these subreddits. They live on in a state of partial suspended animation.

Following the violence in Charlottesville, Reddit told me that it banned /r/physical_removal for “a violation of our content policy, specifically, the posting of content that incites violence.” The company appears responsive to user-generated campaigns when they draw sufficient attention to an issue, which appears to be the goal of /r/AgainstHateSubreddits, a compendium of Redditor-reported hate speech.

YouTube

Initially, YouTube’s search made finding white supremacist stuff kind of hard. Given Google’s web search prowess it makes sense that the company would do a better job of burying objectionable content than a site like Facebook, but it wasn’t buried very deep. After a few searches didn’t turn up much, I struck Nazis on a video that prominently displayed a 1488 with a slew of links to the Daily Stormer.

Because it’s an entertainment site as much as a social network, many of my search results were home-brewed music videos depicting Nazi imagery with little or no context. A cursory glance at the user names and links was the only overt hint, with, again, many, many 1488s. Some more narrative racism came with disclaimers that the content was satire or just a joke.

Elsewhere, content drawn directly from 4chan’s infamous far-right hub /pol/ (short for “politically incorrect”) was repurposed on a more mainstream platform. Because YouTube, like many of these sites, offers recommended content related to what you’re viewing, stumbling onto a little bit of white supremacy opens up a cascading slide of swastikas and racial epithets. Just a few clicks away from a music video declaring whites the master race, I ran into a video created by “fashygamer1488” with the following text:

“Hey goys, its [me] here with another video, please write ur comments below, no (((jews))) or googles allowed (Google is a secret alt-right codeword that means the N word lol)…”

Again, racial slurs are traded for common, unsearchable words to keep the content just barely underground.

In June, YouTube followed Reddit’s example, creating a separate class of objectionable content that it would no longer monetize. This followed a corporate outcry from brands concerned that their ads were being served along with videos containing hate speech. In just a little bit of time spent browsing YouTube’s white supremacist content, I did not run into anything that set this content apart from the rest of its videos, though YouTube has said that feature is coming “soon” and that the “videos will remain on YouTube behind an interstitial, won’t be recommended, won’t be monetized, and won’t have key features including comments, suggested videos, and likes.”

For now, the suggestion engine hums along, pointing me toward a selection of Hitler youth haircut instructional videos.


Twitter

Twitter is more responsive as a search engine than something like Facebook, but the search results are often messily curated. My first search for 1488 quickly pulls up tweets like a picture of a white, blue-eyed baby with the text “14 words” and a photo of Hitler. In other tweets, users with neo-Nazi black sun icons and hybrid Trump/Hitler background images call each other “fags” over who is and is not “boomerposting” (i.e. tweeting like a baby boomer).

Unsurprisingly, Twitter has it all. White supremacist demagogue Richard Spencer trying to remain relevant while his peers accuse him of being a Jew. Quotes hailing Trump’s off-the-rails presser that defended some white supremacists as “fine people.” Racist code words that reverse virtue-signal hate to anyone looking for a like-minded follow. Jokes about cars caked in Photoshopped blood. All of it sends the same message.

On Twitter, there is a lot, lot, lot of this content. It starts to run together.

Tech at a crossroads

These major platforms offer a taste of the toxicity flowing through mainstream social networks, but there are many others. After incubating this kind of stuff for ages, gaming chat platform Discord just finished a major purge. Tumblr, Instagram and Snapchat are fighting the same fight, and it’s not clear they’re winning. Meanwhile, far-right offshoots like Gab are specifically designed with sustainable white supremacy in mind. The absolute ubiquity of Nazi insignia, Stormfront links and shockingly violent memes would appear to undercut the extreme right’s complaint that its speech is being suppressed with any real success.

Depending on how you use the internet, the fact that this stuff is so easy to find on major social networks could range anywhere from shocking to wholly unsurprising. But the truth is that most of us shy away from looking at it. For anyone who isn’t the target demographic, all of this hate is ugly and exhausting. We’d rather just rest easy knowing that tech companies are working on it and they’d rather we didn’t haul up more of this stuff — they’re working on it.

As we can see from tech ratcheting up its response following Charlottesville, no policy is set in stone. While companies often point users to policies around what does and doesn’t fly on their platforms, ultimately the decision to ban content is a subjective response to getting too much heat. Given that willingness to bend to public sentiment, corporate pressure and user-driven anti-hate campaigns are proving themselves to be powerful tools, even if it’s not clear where exactly to draw the line. Racial slurs? Nazi insignia? Overt threats of specific violence? For tech, the coming weeks will be a bellwether.

Anywhere you go, white supremacist content has a foothold if not an entire underground compound bedecked in red and black — one that remains even after the Charlottesville backlash. All one needs to do is look. Whether tech companies choose to see is a different matter altogether.

Mark Zuckerberg slams neo-Nazis and ‘polarization’ after Charlottesville

Zuck has weighed in on Charlottesville.

We can now add Mark Zuckerberg to the growing list of CEOs and public figures who have weighed in on the events of Charlottesville.

Writing in a Facebook post Wednesday, the CEO said white supremacists and neo-Nazis are a “disgrace,” while criticizing the “polarization in our culture.”

“With the potential for more rallies, we’re watching the situation closely and will take down threats of physical harm,” Zuckerberg wrote. Facebook’s policies have long banned violent threats and hate speech, but the platform has sometimes struggled with enforcement. 

Zuckerberg also specifically called out neo-Nazi and white supremacist groups, saying “it’s a disgrace that we still need to say that neo-Nazis and white supremacists are wrong — as if this is somehow not obvious.”

The carefully phrased 326-word post comes four days after violence first kicked off in Charlottesville, and makes no reference to Trump or his comments defending some of the protesters.

Zuckerberg also took the opportunity to criticize the “polarization in our culture.” 

“There’s not enough balance, nuance, and depth in our public discourse, and I believe we can do something about that.”

His comments come after months of debate surrounding Facebook’s role in the presidential election and whether the social network contributes to the very polarization Zuckerberg referenced. 

For his part, Zuck — who also happens to be in the midst of a nationwide tour of the U.S. that’s definitely not a precursor to a political campaign — has maintained that emphasizing community-focused groups is key to increasing empathy on the platform.


45 million people send birthday wishes on Facebook each day


Roughly 1 in 30 Facebook users tells someone Happy Birthday each day, showing Facebook’s first major emergent behavior is still going strong. Now Facebook is equipping the 45 million people sending birthday wishes each day with some new features.

Now instead of just posting a soulless “HBD” or “Happy Birthday!” on someone’s wall with no personal message, photo, memory, or anything that makes it feel sincere, you can post one of Facebook’s auto-generated, personalized birthday videos. Similar to the ones it shows on your friendversary with different people, the birthday video will show photos of you and the birthday boy/girl with stylized transitions.

These videos could make it just as easy to send something that shows the unique journey through life you and a friend have shared as it is to send a generic string of text. That could make sending birthday wishes feel more authentic and valuable, and less like a boring chore. Facebook launched birthday message recap videos last year to aggregate text wall posts from all your friends into something more visual, but now each friend can send a happy birthday video.

And now when it’s your birthday, you can easily dedicate it to a charity. Two weeks before your birthday, you’ll get a prompt to choose from one of 750,000 eligible non-profits vetted by Facebook. Friends will get a notification about your fundraiser, and be able to donate on your behalf as a birthday gift.

Facebook launched the donate button in 2013, and last year let people easily set up personal fundraisers. Facebook has received some flak for charging a 6.9% + $0.30 fee, but that covers payment processing, security, fraud protection and vetting to ensure people are giving to real charities. Facebook has told me this is not a revenue generator, and in fact its fees are lower than what other donation platforms like GoFundMe charge.
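
As a rough illustration of how that fee structure works out, assuming it is simply 6.9% of the amount plus $0.30 applied per donation:

```python
# Rough arithmetic for the fee mentioned above, assuming it is applied per
# donation as 6.9% of the amount plus $0.30.
def fundraiser_fee(donation: float) -> float:
    return round(donation * 0.069 + 0.30, 2)

for amount in (10.0, 50.0, 100.0):
    fee = fundraiser_fee(amount)
    print(f"${amount:.2f} donation -> ${fee:.2f} fee, ${amount - fee:.2f} to the charity")
```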

Last year, Facebook said it saw 100 million birthday wishes per day in total. Birthday fundraisers could let people leverage the social obligation some users feel about sending birthday wishes, and turn that sentiment into actual good. It’s nice to see Facebook realize that the “HBD” behavioral norm it created wasn’t necessarily delivering much positive outcome or emotional resonance, and turn it into something more beneficial and nostalgia-inducing.