
Twitter totally has a roadmap to curb abuse, and the company just shared it

Twitter prides itself on being “what’s happening,” but unfortunately for the company’s users, what’s frequently happening is unchecked harassment. CEO Jack Dorsey apparently has plans to change all that, and today put forth a roadmap for curbing abuse on the social media platform. 

In an Oct. 19 post, the Twitter Safety team published a detailed calendar listing target dates and goals for changing the site’s rules. Taking it a step further, Twitter promised to share “regular, real-time updates” on its efforts to make the service “a safer place.” 

To kick things off, starting in late October, Twitter intends to alter its policies regarding “non-consensual nudity” and the manner in which it handles suspension appeals. 

“We are expanding our definition of non-consensual nudity to err on the side of protecting victims and include content where the victim may not be aware that the images were taken (this includes content such as upskirt photos, hidden webcams),” the page explains. “Anyone we identify as the original poster of non-consensual nudity will be suspended immediately.” 

Not all terms of service violations, however, are as clear cut as someone posting creepshots. There have been numerous high-profile incidents of people being suspended for seemingly absurd reasons, and the company explained that it will make the process of appealing those suspensions more transparent. 

“If an account is suspended for abuse, the account owner can appeal the verdict,” notes the calendar. “If we did not make an error, we will respond to appeals with detailed descriptions of how the account violated the rules.”

And if Twitter did make an error? Presumably, it will reverse course — although this document doesn’t detail that process. 


Those two changes, slated to go into effect on Oct. 27, are a big first step. But they are just that — a first step. The company has a more complete list of planned actions for November, December, and January, including something called “Witness Reporting.”

The idea behind this is in line with the release of the roadmap itself — it’s all about transparency. When someone reports, say, harassment on Twitter, that reporter frequently has no idea what steps (if any) Twitter has taken in response. It can feel a bit like shouting into a void, and the company wants to change that. 

“Currently we only send notifications (in-app and email) to people who submit first-person reports,” notes the Safety Team. “We will notify the reporter of a tweet when that report comes from someone who witnesses an abusive/negative interaction.”

Basically, Twitter is going to start telling you that it heard you, and that it’s (theoretically) doing something about it. 


But will any of this be enough to substantively address Twitter’s very real problems? Predicting the future of the internet is an exceedingly tricky proposition, but Dorsey is clearly hoping that allowing us a peek behind the curtain will engender some trust that his company is, at the very least, actively working to make the platform a better place. 

In the end, only time will tell. Thankfully we have a Twitter-provided calendar to check off the dates.


Twitter bans ‘Hateful Display Names’ and shares Safety road map


Twitter has committed to a specific timeline for rolling out changes to its Safety features, and announced new policies, including a ban on hateful display names, and improvements for second-hand “witness reporting” of abuse.

By January, Twitter plans to have implemented all the abuse changes outlined in the internal email published by Wired earlier this week, as well as the new ones shared today. The company even apologized for frequently promising improvements but then failing to take action, writing, “Far too often in the past we’ve said we’d do better and promised transparency but have fallen short in our efforts.”

Here’s a breakdown of what’s new, beyond the enhancements to existing safety features:

  • Hateful Display Names – The ban on hateful display names could deter or punish people who “nameflame” other users: when a critic quote-tweets someone, that person changes their display name to an insult, so the insult shows up for all of the critic’s followers who see the quote tweet.
  • Witness Reporting – Twitter will use how you’re related to the victim and the abuser when you report a tweet to more strictly enforce rules against harassment. This could help ensure reports aren’t actually concerted trolling efforts and are instead coming from people legitimately offended by an abusive tweet. Twitter also will send notifications in-app and via email to second-hand reporters of abuse. This closing of the loop should boost people’s sense of safety on the platform even if they aren’t the victim in a given instance. (A rough sketch of how such relationship signals might be weighed appears after this list.)
  • Content Rules – Violent groups will be banned; hateful symbols in avatars and profile headers will be banned, while the same content in tweets will be obscured with an interstitial warning; account relationship signals will be used to determine whether sexual advances were unwanted; spam will be better defined; and technology will be adopted to prioritize the most egregious violations of these rules.
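Twitter hasn’t said how these relationship signals will actually be combined. Purely as illustration, here is a minimal Python sketch of the idea behind witness reporting: weighting a report by the reporter’s relationship to the victim and the alleged abuser, so that brigading counts for less than genuine witnesses. Every signal name and threshold below is invented for this sketch, not Twitter’s model.

    from dataclasses import dataclass

    @dataclass
    class Report:
        is_first_person: bool            # filed by the victim directly
        reporter_follows_victim: bool    # hypothetical relationship signals
        reporter_follows_abuser: bool
        reporter_account_age_days: int

    def report_weight(r: Report) -> float:
        """Weight an abuse report by reporter/victim/abuser relationships.

        Higher weight suggests a legitimate witness report; low weight
        flags possible brigading. All thresholds are invented.
        """
        weight = 1.0
        if r.is_first_person:
            weight += 1.0   # direct victim reports count most
        if r.reporter_follows_victim:
            weight += 0.5   # a witness who plausibly saw the interaction
        if r.reporter_follows_abuser and not r.reporter_follows_victim:
            weight -= 0.5   # could be part of a concerted trolling effort
        if r.reporter_account_age_days < 7:
            weight -= 0.5   # throwaway accounts are suspect
        return max(weight, 0.0)

    # A tweet might be escalated for human review once the summed weight
    # of its reports crosses some threshold:
    reports = [Report(False, True, False, 400), Report(False, True, False, 90)]
    needs_review = sum(report_weight(r) for r in reports) >= 2.0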

Here’s the calendar:

What’s missing

The most glaring gap in this road map is any functional change to the way Twitter users interact. As we wrote last week, and as Hunter Walk has suggested, Twitter’s biggest opportunity to shut down abuse lies in changing how replies work.

Right now, Twitter leaves it up to users to mute replies from certain accounts: ones that don’t follow them, that were set up recently, or that haven’t added a profile image, confirmed email, or confirmed phone number. But the devil is in the defaults, which leave these filters off. Meanwhile, hard-set rules chosen by users could accidentally silence innocent replies.

Twitter should consider turning on some of these rules by default while warning repliers that their messages might not get through unless they complete their profiles. That’s important, because registering a phone number in particular makes it tough for trolls to abandon a suspended account and simply harass people from a different handle.

By using a combination of signals, Twitter could start more aggressively filtering out replies from suspected abusers, while giving people a path back to replying by taking the very steps that add friction for trolls. It might take a while to get right, and some benign content may be unnecessarily censored, but right now the balance is far too skewed toward a laissez-faire approach that permits harassment.
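To make the suggestion concrete, here is a minimal sketch in Python of what default-on reply filtering might look like using the signals described above. The signal names and thresholds are hypothetical, not anything Twitter has described:

    from dataclasses import dataclass

    @dataclass
    class Sender:
        followed_by_recipient: bool   # the recipient follows this account
        account_age_days: int
        has_profile_image: bool
        email_confirmed: bool
        phone_confirmed: bool

    def reply_surfaces_by_default(sender: Sender) -> bool:
        """Decide whether a reply lands in the recipient's notifications.

        Mirrors the opt-in mute filters described above, flipped on by
        default. Thresholds are invented for illustration.
        """
        if sender.followed_by_recipient:
            return True   # established relationships always get through
        # Completed-profile signals add friction for throwaway accounts;
        # a confirmed phone number is hardest to replace after a suspension.
        completed_signals = sum([
            sender.has_profile_image,
            sender.email_confirmed,
            sender.phone_confirmed,
        ])
        return sender.account_age_days >= 30 and completed_signals >= 2

A filtered reply wouldn’t need to be deleted, just kept out of notifications until the sender completes their profile, which is exactly the kind of friction that deters trolls while leaving a path back for benign accounts.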

For more on how tech could fight abuse, check out our feature article Silenced by ‘free speech.’


Obsessively checking social media during a crisis might harm your mental health

Survivors of three recent disasters — the northern California fires, the Las Vegas mass shooting, and Hurricane Maria — used social media and texting as lifelines to connect with loved ones, seek aid, and search for the latest developments. 

A new study, however, suggests that people who get updates during a major crisis from unofficial channels like random social media accounts are most exposed to conflicting information and experience the most psychological distress. 

The study, published in Proceedings of the National Academy of Sciences, surveyed 3,890 students whose campus was locked down after a shooter fired on people. Since it’s difficult, if not impossible, to begin a scientific study during a life-threatening disaster or crisis, the researchers asked students about their experience a week after the incident and analyzed five hours of Twitter data about the shooting. (Details about what happened were anonymized at the university’s request.)


“If random people you don’t know are tweeting information that seems really scary — and, in particular, if you’re in a lockdown and someone is tweeting about multiple shooters — that’s anxiety-provoking,” says Nickolas M. Jones, the study’s lead author and a doctoral candidate at the University of California, Irvine. 

While nearly everyone said they turned to officials like school authorities and the police, some people reported seeking more information from other sources, including social media, family, and friends. The researchers found that the people who most sought and believed updates from loved ones and social media encountered the most misinformation. They also said they felt more anxiety; heavy social media users who trusted online information, in particular, felt extreme stress. People who relied more on traditional media sources like radio and television didn’t have the same experience.

Jones says that people might turn to social media to feel more control in the midst of a crisis, especially if authorities aren’t sharing regular updates. But that sense of control just might be an illusion if someone instead sees rumors and conflicting information and feels more anxious as a result. 

“You’re going to feel something no matter what because you’re a human being,” says Jones. “Where you go from there to mitigate anxiety is what really matters.”

In other words, it’s perfectly normal to seek information from any available source and to have an emotional response to rapidly unfolding events. But people who feel helpless during a crisis may be primed to see patterns where none exist, making rumors and misinformation particularly dangerous. Their ability to process and scrutinize information may also be diminished. 

While Jones and his co-authors only surveyed those affected first-hand by the lockdown, he believes the public might experience a similar dynamic during crises. Think, for example, of the last time you scrolled through social media during a disaster and tried to sort through confusing accounts and rumors. It’s probably not that hard to recall a sense of creeping anxiety. 

Part of the broader problem is that the public now seems to expect fast and frequent updates, thanks to the speed of social media, while authorities often still operate with tremendous caution. In the campus shooting case, 90 minutes elapsed between two official updates from the police. During the incident, Jones and his co-authors found that a handful of false rumors were retweeted hundreds of times, including claims about multiple shooters and what they were wearing.

The study’s authors recommend that emergency management officials stay in regular contact with people. Even if they don’t have new information, they can still send messages that help alleviate anxiety and uncertainty by addressing the situation and reassuring the public. They should also monitor social media for rumors and “tackle them head on,” says Jones.

The Federal Emergency Management Agency, for example, compiled a list of debunked rumors regarding Hurricane Maria recovery efforts in Puerto Rico. The city of Santa Rosa and Sonoma County, both of which were devastated by fires in Northern California last week, posted tweets to address rumors. Efforts like these are crucial. It’s equally important to ensure people can actually access official websites, social media pages, and text message updates in the midst of a disaster. 

But the bottom line, says Jones, is learning to seek news carefully: “For anybody who’s turning to social media to get critical updates during a crisis, I think they just need to be skeptical about some of the information they’re seeing from unofficial sources.” 


The internet is very confused by Ariana Grande’s album cover

Twitter is hilariously attempting to recreate Ariana Grande’s album cover for ‘My Everything.’ The challenge started when @McJesse questioned just how Grande balanced her entire body on a small stool. Others followed, posting photos of themselves attempting the same pose.

Twitter is done with hate symbols and violent groups


Twitter, a platform infested with trolls, hate and abuse, can be one of the worst places on the internet. As a follow-up to Twitter CEO Jack Dorsey’s tweetstorm last week, in which he promised to crack down on hate and abuse by implementing more aggressive rules, Twitter is gearing up to roll out some updates in the coming weeks, Wired reported earlier today.

“Although we planned on sharing these updates later this week, we hope our approach and upcoming changes, as well as our collaboration with the Trust and Safety Council, show how seriously we are rethinking our rules and how quickly we’re moving to update our policies and how we enforce them,” Twitter said in a statement to TechCrunch.

In an email to members of Twitter’s Trust and Safety Council, Twitter’s head of safety policy outlined some of the company’s new approaches to abuse. Twitter’s policies have not specifically addressed hate symbols and imagery, violent groups and tweets that glorify violence, but that will soon change.

Twitter has not yet defined what the policy around hate symbols will cover, but “At a high level, hateful imagery, hate symbols, etc will now be considered sensitive media” — similar to the way Twitter handles adult content and graphic violence, the email stated.

With violent groups (think alt-right groups), Twitter “will take enforcement action against organizations that use/have historically used violence as a means to advance their cause.” Twitter has yet to outline the parameters it will use to identify such groups.

While Twitter already takes action against people who threaten violence, the company is going to take it a step further and take action against tweets that glorify violence, like “Murdering <x group of people> makes sense. That way they won’t be a drain on social services,” according to the email.

Meanwhile, updates to existing policies will address non-consensual nudity (“creep shots”) and unwanted sexual advances.

On non-consensual nudity:

We will immediately and permanently suspend any account we identify as the original poster/source of non-consensual nudity and/or if a user makes it clear they are intentionally posting said content to harass their target. We will do a full account review whenever we receive a Tweet-level report about non-consensual nudity. If the account appears to be dedicated to posting non-consensual nudity then we will suspend the entire account immediately.

On unwanted sexual advances:

We are going to update the Twitter Rules to make it clear that this type of behavior is unacceptable. We will continue taking enforcement action when we receive a report from someone directly involved in the conversation. Once our improvements to bystander reporting go live, we will also leverage past interaction signals (eg things like block, mute, etc) to help determine whether something may be unwanted and action the content accordingly.
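The email doesn’t specify how those interaction signals would be combined. As a purely illustrative Python sketch (the log format and signal names here are invented; the email only says block and mute history will be leveraged), the core check might look like this:

    # Past interactions recorded as (actor, target, action) tuples, e.g.
    # ("bob", "alice", "block") means bob blocked alice. Hypothetical format.
    NEGATIVE_ACTIONS = {"block", "mute", "report"}

    def advance_appears_unwanted(sender: str, recipient: str,
                                 interaction_log: list[tuple[str, str, str]]) -> bool:
        """Return True if the recipient previously blocked, muted, or
        reported the sender, a strong hint further advances are unwanted."""
        return any(
            actor == recipient and target == sender and action in NEGATIVE_ACTIONS
            for actor, target, action in interaction_log
        )

    log = [("bob", "alice", "mute"), ("bob", "alice", "block")]
    advance_appears_unwanted("alice", "bob", log)   # True: bob muted and blocked alice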

“We realize that a more aggressive policy and enforcement approach will result in the removal of more content from our service,” Twitter’s head of policy wrote. “We are comfortable making this decision, assuming that we will only be removing abusive content that violates our Rules. To help ensure this is the case, our product and operational teams will be investing heavily in improving our appeals process and turnaround times for their reviews.”

Here’s the full email:

Dear Trust & Safety Council members,

I’d like to follow up on Jack’s Friday night Tweetstorm about upcoming policy and enforcement changes.  Some of these have already been discussed with you via previous conversations about the Twitter Rules update. Others are the result of internal conversations that we had throughout last week.

Here’s some more information about the policies Jack mentioned as well as a few other updates that we’ll be rolling out in the weeks ahead.

Non-consensual nudity

  • Current approach

    • We treat people who are the original, malicious posters of non-consensual nudity the same as we do people who may unknowingly Tweet the content. In both instances, people are required to delete the Tweet(s) in question and are temporarily locked out of their accounts. They are permanently suspended if they post non-consensual nudity again.

  • Updated approach

    • We will immediately and permanently suspend any account we identify as the original poster/source of non-consensual nudity and/or if a user makes it clear they are intentionally posting said content to harass their target.

    • We will do a full account review whenever we receive a Tweet-level report about non-consensual nudity. If the account appears to be dedicated to posting non-consensual nudity then we will suspend the entire account immediately.

    • Our definition of “non-consensual nudity” is expanding to more broadly include content like upskirt imagery, “creep shots,” and hidden camera content. Given that people appearing in this content often do not know the material exists, we will not require a report from a target in order to remove it. While we recognize there’s an entire genre of pornography dedicated to this type of content, it’s nearly impossible for us to distinguish when this content may/may not have been produced and distributed consensually. We would rather err on the side of protecting victims and removing this type of content when we become aware of it.

Unwanted sexual advances

  • Current approach

    • Pornographic content is generally permitted on Twitter, and it’s challenging to know whether or not sexually charged conversations and/or the exchange of sexual media may be wanted. To help infer whether or not a conversation is consensual, we currently rely on and take enforcement action only if/when we receive a report from a participant in the conversation.

  • Updated approach

    • We are going to update the Twitter Rules to make it clear that this type of behavior is unacceptable. We will continue taking enforcement action when we receive a report from someone directly involved in the conversation. Once our improvements to bystander reporting go live, we will also leverage past interaction signals (eg things like block, mute, etc) to help determine whether something may be unwanted and action the content accordingly.

Hate symbols and imagery (new)

  • We are still defining the exact scope of what will be covered by this policy. At a high level, hateful imagery, hate symbols, etc will now be considered sensitive media (similar to how we handle and enforce adult content and graphic violence).

  • More details to come.

Violent groups (new)

  • We are still defining the exact scope of what will be covered by this policy. At a high level, we will take enforcement action against organizations that use/have historically used violence as a means to advance their cause.

  • More details to come here as well (including insight into the factors we will consider to identify such groups).

Tweets that glorify violence (new)

  • We already take enforcement action against direct violent threats (“I’m going to kill you”), vague violent threats (“Someone should kill you”) and wishes/hopes of serious physical harm, death, or disease (“I hope someone kills you”). Moving forward, we will also take action against content that glorifies (“Praise be to <terrorist name> for shooting up <event>. He’s a hero!”) and/or condones (“Murdering <x group of people> makes sense. That way they won’t be a drain on social services”).

  • More details to come.

We realize that a more aggressive policy and enforcement approach will result in the removal of more content from our service. We are comfortable making this decision, assuming that we will only be removing abusive content that violates our Rules. To help ensure this is the case, our product and operational teams will be investing heavily in improving our appeals process and turnaround times for their reviews.

In addition to launching new policies, updating enforcement processes and improving our appeals process, we have to do a better job explaining our policies and setting expectations for acceptable behavior on our service. In the coming weeks, we will be:

  • updating the Twitter Rules as we previously discussed (+ adding in these new policies)

  • updating the Twitter media policy to explain what we consider to be adult content, graphic violence, and hate symbols.

  • launching a standalone Help Center page to explain the factors we consider when making enforcement decisions and describe our range of enforcement options

  • launching new policy-specific Help Center pages to describe each policy in greater detail, provide examples of what crosses the line, and set expectations for enforcement consequences

  • updating outbound language to people who violate our policies (what we say when accounts are locked, suspended, appealed, etc).

We have a lot of work ahead of us and will definitely be turning to you all for guidance in the weeks ahead. We will do our best to keep you looped in on our progress.

All the best,
Head of Safety Policy