All posts in “Entertainment”

Talking about #MeToo on social media is hard, but we shouldn’t stop trying

When news broke late Saturday that the actor and comedian Aziz Ansari had been accused of sexual misconduct, I had plenty of thoughts but didn’t feel compelled to share a single one of them on social media. 

I partly blame my naturally cautious personality for that; I like to know exactly what I believe before I put it out into the world. But I also watched my timeline on Facebook and Twitter become a deluge of impassioned comments and hot takes. I didn’t want to wade into the churn just yet. I imagine I’m not alone. 

The truth is that talking about #MeToo on social media is challenging work. When we move beyond the declarations of empowerment and must instead analyze the particulars of every case, things get messy. That complexity isn’t easily captured in 280 characters, and comments on even the most thoughtful Facebook post can be hijacked by one grandstanding person who wants to play devil’s advocate. 

Meanwhile, the Ansari case may be the most difficult public conversation we’ve had about sex, power, and consent since Harvey Weinstein’s downfall in October. An actor beloved in part for his “woke bae” bona fides, Ansari wasn’t revealed by the website Babe as a violent serial predator but as a sexually aggressive man who read a young woman’s verbal and nonverbal discomfort with his behavior as enthusiasm. 

And so we began debating on social media whether this woman’s encounter could be categorized as sexual assault, as she suggested, or as a terrible date, as many others argued. 

I can think of no worse place to have this conversation than on social media. Victim-blaming can go viral. Hot takes (ahem, Atlantic and New York Times) consume all the oxygen in the proverbial room until everyone is rage-choking on the fumes. You can end up reflexively refreshing to see the latest tweet but do very little reflection of your own. 

Yet I also can’t think of a mainstream alternative that allows people to engage in a public reckoning like this, even on a small scale. Some students may discuss the subject in class today, but most adults won’t participate in those conversations. I can only imagine the watercooler whispers at many workplaces — this is not the kind of news that employees feel comfortable openly chatting about. Cable news is a one-way format; you often nod your head in agreement or shout at the TV screen, but you never get to speak your piece to the viewers at home. 

For better or worse, social media is our best shot at collectively having complex discussions about painful and taboo subjects. The good news is that all hope for enlightened dialogue is not lost. Many of the widely shared comments on Ansari’s case hinged on a nuanced perspective, which is not something social media typically rewards. 

Feminist author Jessica Valenti kept emphasizing the importance of nuance when thinking about the implications of what happened between “Grace” (the anonymous moniker given to the 23-year-old woman in Babe’s story) and Ansari. 

“I’m sure we’re going to hear lots of stories in the coming months about actions that aren’t against the law, or that don’t warrant repercussions,” she wrote in one tweet. “That doesn’t mean that women weren’t hurt, or that these stories aren’t worth discussing[.]” 

Even if the account that was published by Babe contained deeply flawed reporting and writing, as some have persuasively argued, the report itself offered an opportunity to address the cycle of appeasement and shame that women so often find themselves in when trying to turn down a man’s sexual advances. 

Writer David Klion, whose comments about Ansari went viral, framed the accusations as something bigger than one Hollywood celebrity: a chance to come to terms with one’s own views of #MeToo. 

“The Aziz Ansari story is a good litmus test for who sees sexual misconduct as a strictly legal question and who is concerned about improving the overall culture surrounding sex and dating,” he wrote. 

The surprising key to the success of these tweets isn’t outrage for the sake of outrage, but a genuine interest in how we move forward in the age of #MeToo. And while many bemoaned the change when Twitter increased its character limit from 140 to 280 back in November, the best threads on the Ansari case frequently relied on long messages. Brevity may be best when you’re sharing breaking news, but those constraints hinder our ability to speak thoughtfully about how to change human behavior. 

There is no right way to talk about #MeToo on social media. But if the debate that’s taken place since the news broke about Ansari holds any clues, commentary that invites respectful dialogue can play an essential role in framing the broader conversation. That can also provide a refreshing contrast to lengthy essays and opinion pieces that may in fact further stoke anger rather than deliberation. 

Surprisingly, we can also look to Ansari himself for some guidance. While he didn’t comment on social media about the Babe report, his statement about the accusations acknowledged Grace’s account, conveyed a sense of regret and concern, and even urged further conversation about #MeToo. 

“I continue to support the movement that is happening in our culture,” he said. “It is necessary and long overdue.” 

If Ansari can sit with the discomfort of being the subject of an exposé about his sexual conduct and still encourage a continued discussion about sex and power, the rest of us can find a way to actually have that civil dialogue online. 

If you have experienced sexual abuse, call the free, confidential National Sexual Assault Hotline at 1-800-656-HOPE (4673), or access 24/7 help online at online.rainn.org.

Logan Paul isn’t the only problem. YouTube is broken — here’s how to fix it.

It’s easy to hate on Logan Paul. A 22-year-old YouTuber with 15 million subscribers, Paul made himself the poster child for shallow and gross behavior this weekend by posting a graphic video in which he finds the body of a man who died by suicide in Aokigahara forest, a sacred site at the base of Mt. Fuji. The video gained a million likes before Paul took it down.

It’s bad enough that Paul yuks it up in the video while dressed in what can only be described as 21st-century clown gear, the archetypal ugly American abroad. Worse still was the entirely self-centered apology that came Monday (“This is a first for me, I’ve never faced criticism like this before”). In a second apology, this time on video, Paul claimed to be dejected, yet still found time to coif his hair into its perfect proto-Trump configuration. 

But if you focus all your ire on this ridiculous manchild in particular, you’ll miss the bigger picture — which is that YouTube is broken. The Google video subsidiary allows unfiltered young stars to get away with ever-more outlandish provocations because it still has no idea of its power — and is ignoring some very basic common-sense solutions. 

It’s been nearly a year since Felix Kjellberg, a similarly air-headed YouTuber better known as PewDiePie, was hurriedly dropped by Disney after the Wall Street Journal watched a bunch of his YouTube videos and pointed out Kjellberg’s frequent use of swastikas and other Nazi imagery. Intending to shock, he’d paid people around the world to hold up anti-Semitic signs. Apparently it took him until the Nazi rally in Charlottesville in August to realize this was a bad idea.

It was easy for Kjellberg to make himself the center of attention in the whole affair. He inserted himself into the Paul affair too, ridiculing his fellow YouTuber’s thought process in a tweet. He’s a troll; that’s what he does. (He has also attracted infamy for yelling the word “rape” repeatedly and using the n-word.) Less noticed was the fact that Nazi-based trolling videos from one of YouTube’s biggest stars had evidently passed muster with the service itself. It should not have taken the Wall Street Journal to point them out. 

But it does. YouTube has been routinely made aware of disturbing content by the media, acting only after it starts to generate bad headlines. At its core, YouTube has a content policing problem. This was made abundantly clear in November when Mashable uncovered dozens of amateur videos aimed at children that featured familiar cartoon characters pulling out automatic weapons. The videos weren’t just featured on regular YouTube; they’d made it to the supposedly safe space of YouTube Kids. 

The video service apologized for this and other disturbing videos featuring child actors. YouTube CEO Susan Wojcicki wrote in a blog post that she aimed to bring “the total number of people across Google working to address content that might violate our policies to over 10,000 in 2018.” (This has been repeatedly misreported as YouTube planning on hiring 10,000 moderators.)

Don’t mistake this for some sort of actual reckoning. YouTube is still trying to come up with a way to cut humans out of the equation and resume its “don’t blame us, we’re just a platform” role. Wojcicki has placed a lot of faith in the service’s machine-learning algorithm, boasting of the many thousands of violent videos that have been removed by its artificial intelligence system. The same obliviousness was evident in the statement YouTube released Tuesday on Paul’s video.

YouTube faces the same challenge as every other platform including Facebook, Twitter, Reddit, and many more. There’s just so much stuff — and so much terrible stuff — that it’s legitimately hard to constantly police. 

What’s different on YouTube is that a person like Paul is particularly amplified due to his following. Missing from YouTube’s statement was the fact that Paul’s video had made the site’s “Trending” section unhindered — or that he received only one of three possible strikes against his channel. His method of making money via advertising on the site, estimated to bring in more than $1 million a month, remains in place.

It doesn’t take 10,000 people to watch all of the Trending section’s videos. Wojcicki could theoretically do that herself without putting much of a dent in her workday. There are fewer than 100 content creators on YouTube with more than 10 million subscribers; you could assign five people to watch each of those channels like hawks, 24/7, and still have 9,500 left over to police the less popular parts of the service.

Because here’s the screamingly obvious thing: not all YouTubers are created equal. If a video features violence but has under 100 views, perhaps it can wait until after YouTube has had words with PewDiePie about the anti-Semitic content he pushed to 50 million subscribers, or dealt with a video of a suicide victim on a channel particularly popular with young children.

If more attention is paid by viewers to the top 1 percent of videos, that’s where YouTube should put the bulk of its attention too — maybe even hiring producers to work solely with popular but inexperienced twentysomethings, especially the ones whose on-screen personalities are basically trolls. 

These producers could function as they do in the TV world — as gatekeepers for bad ideas. If a video is insensitive to the very real societal problem of suicide, they could simply refuse to publish it. 

Putting such checks and balances in place runs the risk of losing some of YouTube’s biggest cash cows; after all, the service takes a 45 percent cut of the ad revenue on these videos. This is probably why Wojcicki treated PewDiePie with kid gloves, congratulating him on reaching 50 million subscribers even as those Nazi and sexual assault joke videos sat in plain sight. 

But let’s be real. Where else are they going to go? Vimeo and Dailymotion are nice services, but no channel on either has more than 200,000 subscribers. If you want to be insanely popular, as most of these narcissists do, YouTube is the only game in town. 

Wojcicki is holding all the cards. Logan Paul is entirely dependent on her continued munificence, not the other way around. It’s time YouTube started acting like it — before another dumbass platinum blond twentysomething gets the message that shocking us is a better business model than entertaining us. 

Firefox users lose trust in Mozilla after a ‘Mr. Robot’ promo went horribly wrong

Protip for Mozilla and the USA Network: In the future when you’re plotting a tie-in for a show about vigilante hackers, maybe don’t actually compromise people’s privacy.

Some Firefox users were none too thrilled to discover that the web browser had installed an add-on called “Looking Glass” without permission. The extension bore only a description that read, in all caps, “MY REALITY IS JUST DIFFERENT THAN YOURS,” and people were understandably suspicious.

(The all-caps utterance, it should be noted, is a quote from Lewis Carroll’s Alice’s Adventures in Wonderland.)

Looking Glass turned out to be a promotional tie-in for Mr. Robot, serving as the foundation for a new alternate reality game (ARG). But the initial lack of clarity about its purpose, coupled with the fact that it installed unprompted, caused understandable alarm.

Once it became clear that users were unhappy, Mozilla moved quickly to set things right. The initial 1.0.3 version of the Firefox extension featured the cryptic Carroll quote and nothing else, as TechCrunch noted, but a subsequent 1.0.4 update added text explaining its purpose as an ARG.

Mozilla also created a support page to explain Looking Glass more thoroughly and to make clear that users would have to opt in if they wanted to participate in the ARG. On top of that, the support page includes a vague mea culpa that lays out Mozilla’s mission and commitment to giving people “more control over their lives online”:

The Mr. Robot series centers around the theme of online privacy and security. One of the 10 guiding principles of Mozilla’s mission is that individuals’ security and privacy on the internet are fundamental and must not be treated as optional. The more people know about what information they are sharing online, the more they can protect their privacy.

Mozilla exists to build the Internet as a public resource accessible to all because we believe open and free is better than closed and controlled. We build products like Firefox to give people more control over their lives online.

Looking Glass didn’t self-install in every version of Firefox, and as the conversation around it grew, users began to figure out what happened. The extension is a product of Mozilla’s Shield Studies program, which is a “user testing platform for proposed, new and existing features and ideas.”

While some Shield Studies testing items prompt users for approval before installing, others are added automatically and require a manual opt-out. And, as some discovered, it’s possible to participate in Shield Studies without specifically opting in (h/t Engadget).
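For users who would rather not be enrolled in studies automatically, here is a rough sketch of the opt-out. The preference name and menu labels below reflect Firefox Quantum at the time of writing and are best treated as an assumption to verify in your own version:

# In about:config (assumed preference name; confirm it exists in your build):
app.shield.optoutstudies.enabled = false
# Or through the UI: Preferences > Privacy & Security >
# uncheck “Allow Firefox to install and run studies”

Either route should disable Shield Studies participation entirely, rather than just removing Looking Glass.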

Ultimately, Looking Glass doesn’t actually do anything unless the user in question chooses to participate in the Mr. Robot ARG. But this is a trust issue more than anything else. And with Mozilla’s Firefox Quantum update freshly launched — and vying to bring back users stolen away by Google Chrome — this secretly added extension is not a good look.

SoundCloud resuscitates home screen with personalized playlists


After laying off 40 percent of its staff and securing $169 million in emergency funding, SoundCloud is making its first product push, and it’s a much-needed step in the right direction. The company is filling its home screen with personalized playlists and best-of collections from top genres rather than other content like its feed. The hope is to make SoundCloud instantly accessible to new users and a more reliable place for long-time loyalists to discover fresh songs.

In my recent deep-dive into SoundCloud’s strategy, I outlined its need to differentiate itself from Spotify by focusing on its $5 tier of ad-free access to independent music, legally grey remixes and DJ sets you can’t find elsewhere, and helping artists earn money beyond royalties through commerce. In line with that strategy, today’s redesign lets SoundCloud highlight the best of its unique archive of user uploaded songs.

SoundCloud’s new CEO Kerry Trainor tells me, “The new SoundCloud home is a more curated, personalized way to discover amazing creators first—years ahead of the mainstream charts.”

Read TechCrunch’s feature piece, “To fix SoundCloud, it must become the anti-Spotify.”

Listeners will now be greeted by featured playlists including Hip Hop Supreme and the DJ mixset-focused In The Mix, the Spotify Discover Weekly-esque personalized tracklist The Upload, and the algorithmically generated More Of What You Like and Artists You Should Know. There are also New & Hot and Top 50 charts playlists, Fresh Pressed for new album releases, and editorially selected collections like SoundCloud Next Wave and Playback.

The home screen refresh is a direly needed change. Yesterday I opened SoundCloud to see two big ads filling my screen.

With funding and a leaner operation giving SoundCloud some runway after years of sluggish performance, it’s up to Trainor to give SoundCloud some momentum. Interface changes are an easy way to start, though a deeper repositioning of SoundCloud around the indie creators its competitors lack will be more important.

“What differentiates SoundCloud is our catalog of over 170 million tracks, and new home lets us elevate and celebrate the incredible talent that drives the SoundCloud experience,” says Trainor. With Spotify, Apple, Google’s YouTube, Amazon, and Pandora duking it out, the music space is crowded. SoundCloud’s success lies in being what the others aren’t.

For more on SoundCloud, check out our post on three ways it can bounce back and become the “anti-Spotify.”

Why content crowdfunder Patreon is halting its hated fee change


Overhauling the creative economy turns out to be quite tricky. Following massive backlash, Patreon is at least temporarily pausing the change to its payment processing fee structure that was set to take effect December 18th. Instead of charging creators 2%-10% on the first of the following month, the new structure would have charged patrons 2.9% plus $0.35 per transaction, billed up front and then on the monthly anniversary of their first pledge, on top of the 5% cut Patreon takes. “We messed up. We’re sorry, and we’re not rolling out the fees change,” CEO Jack Conte writes. He tells me, “We got in the way of the creators and their fans.”

The subscription content crowdfunding company’s goal was twofold: to prevent patrons from signing up, getting access to exclusive content, and then canceling their subscription before paying on the first of the next month; and to avoid charging users immediately and then again on the first of the next month, which would essentially double-bill anyone who pledged near the end of a month. Pro-rating wouldn’t work either, since patrons could sign up for a big $100-a-month subscription, experience super-premium access to content, then cancel a day later having paid only $3.

But the problem with the fee change was that it prevented batching payments, where patrons pay just one processing fee for all their different pledges, and instead charged patrons the fee on every campaign they support. This significantly boosted fees for patrons who pledge only a dollar or two per campaign, and for those who pledge to multiple campaigns and therefore get charged multiple fees. Patreon also admits it didn’t get enough direct feedback from creators and rushed a mere two-week timeline for implementing the change.
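To make the math concrete, here is a minimal sketch in Python using the figures cited above. The function names are illustrative, and since the old creator-side fee ranged from 2% to 10%, a 5% midpoint is assumed here purely for illustration:

# Illustrative only: fee figures come from the article; the 5% old-structure
# rate is an assumed midpoint of the 2%-10% range.
def old_fee(pledge, creator_rate=0.05):
    # Old structure: a percentage deducted from the creator's payout
    return pledge * creator_rate

def new_fee(pledge):
    # Proposed structure: 2.9% + $0.35 charged to the patron, per pledge
    return pledge * 0.029 + 0.35

print(round(old_fee(1.00), 3))      # 0.05  -> about 5 cents on a $1 pledge
print(round(new_fee(1.00), 3))      # 0.379 -> about 38 cents on top of a $1 pledge

# Without batching, five $1 pledges to different creators each carry the flat fee:
print(round(5 * new_fee(1.00), 3))  # 1.895 -> nearly $2 in fees on $5 of pledges
print(round(new_fee(5.00), 3))      # 0.495 -> roughly 50 cents if batched into one charge

That per-pledge flat fee is what turned small, multi-campaign pledges from a rounding error into a visible surcharge.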

So after a painful week of calls with creators who said patrons had cancelled their subscriptions to avoid the higher fees, Patreon is halting the change until it can receive more feedback and find a better path forward. The retreat shows a level of maturity at the company, even if also some lack of foresight.

Patreon raised $60 million earlier this year at a $450 million valuation to build out more monetization tools for content creators, from video makers and comedians to illustrators and models. With $107 million in total funding, many assumed Patreon was on stable enough financial footing to avoid having to change its existing fee structure, under which it takes a mere 5% rake — compared to the typical 30% charged by platforms like Apple’s and Google’s app stores and the 45% charged by Facebook and YouTube for ad revenue shares. 

But “the system that my co-founder came up with 4 years ago in 25 days” needs to be updated, Conte says. Patreon has to abide by credit card processing rules while keeping enough revenue to stay alive. Conte tells me Patreon still has a bunch of new premium tools in the works for creators (a storefront for selling merchandise, for example) that will be unveiled in the coming year and will help it earn more money to keep the platform sustainable. But many creators surely construed the payment structure change as a way for Patreon to jack up fees.

The episode demonstrates just how fraught it can be to alter the foundations of monetization systems that independent creators rely on. YouTube has had its own problems with creator backlash after pulling ads and demonetizing more videos and creators in order to appeal to family-friendly advertisers. Conte tells me the plan going forward is to “work more with creators one-on-one, show them the problems that we’re trying to address, get more feedback earlier, give creators more lead time, do more qualitative research . . . and honor the idea of letting creators own their relationships and run their businesses the way they want to run them.”

After speaking at length with Conte, though, it seems Patreon is stuck between a rock and a hard place. Keeping the fee structure sustainable for the startup, preventing creators from having their content accessed without fair payment, and avoiding overcharging patrons for processing fees or fractions of a month of access seems somewhat intractable; otherwise, the company would have come in with a better solution than its first attempt. We’ll see if it can create something that works for everyone.