All posts in “social media”

Twitter totally has a roadmap to curb abuse, and the company just shared it

Twitter prides itself on being “what’s happening,” but unfortunately for the company’s users, what’s frequently happening is unchecked harassment. CEO Jack Dorsey apparently has plans to change all that, and today put forth a roadmap for curbing abuse on the social media platform. 

In an Oct. 19 post, the Twitter Safety team published a detailed calendar listing target dates and goals for changing the site’s rules. Taking it a step further, Twitter promised to share “regular, real-time updates” on its efforts to make the service “a safer place.” 

To kick things off, starting in late October, Twitter intends to alter its policies regarding “non-consensual nudity” and the manner in which it handles suspension appeals. 

“We are expanding our definition of non-consensual nudity to err on the side of protecting victims and include content where the victim may not be aware that the images were taken (this includes content such as upskirt photos, hidden webcams),” the page explains. “Anyone we identify as the original poster of non-consensual nudity will be suspended immediately.” 

Not all terms of service violations, however, are as clear-cut as someone posting creepshots. There have been numerous high-profile incidents of people being suspended for seemingly absurd reasons, and the company explained that it will make the process of appealing those suspensions more transparent. 

“If an account is suspended for abuse, the account owner can appeal the verdict,” notes the calendar. “If we did not make an error, we will respond to appeals with detailed descriptions of how the account violated the rules.”

And if Twitter did make an error? Presumably, it will reverse course — although this document doesn’t detail that process. 

Those two changes, slated to go into effect on Oct. 27, are a big first step. But they are just that — a first step. The company has a more complete list of planned actions for November, December, and January, including something called “Witness Reporting.”

The idea behind this is in line with the release of the roadmap itself — it’s all about transparency. When someone reports, say, harassment on Twitter, that reporter frequently has no idea what steps (if any) Twitter has taken in response. It can feel a bit like shouting into a void, and the company wants to change that. 

“Currently we only send notifications (in-app and email) to people who submit first-person reports,” notes the Safety Team. “We will notify the reporter of a tweet when that report comes from someone who witnesses an abusive/negative interaction.”

Basically, Twitter is going to start telling you that it heard you, and that it’s (theoretically) doing something about it. 
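
As a rough illustration of the change (a toy sketch; the report structure and field names here are invented, not Twitter's actual system), the notification rule effectively widens from first-person reports to witness reports:

```python
# Toy model of the notification change described above. The report dict and
# its field names are hypothetical, invented purely for illustration.

def should_notify_reporter(report: dict) -> bool:
    # Previously only "first-person" reports (abuse aimed at the reporter
    # themselves) triggered in-app and email notifications; under the new
    # policy, bystanders who report abuse they witnessed get notified too.
    return report["kind"] in {"first_person", "witness"}

print(should_notify_reporter({"kind": "witness"}))       # True (new behavior)
print(should_notify_reporter({"kind": "first_person"}))  # True (unchanged)
```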

But will any of this be enough to substantively address Twitter’s very real problems? Predicting the future of the internet is an exceedingly tricky proposition, but Dorsey is clearly hoping that allowing us a peek behind the curtain will engender some trust that his company is, at the very least, actively working to make the platform a better place. 

In the end, only time will tell. Thankfully we have a Twitter-provided calendar to check off the dates.

Obsessively checking social media during a crisis might harm your mental health

Survivors of three recent disasters — the northern California fires, the Las Vegas mass shooting, and Hurricane Maria — used social media and texting as lifelines to connect with loved ones, seek aid, and search for the latest developments. 

A new study, however, suggests that people who get updates during a major crisis from unofficial channels like random social media accounts are most exposed to conflicting information and experience the most psychological distress. 

The study, published in Proceedings of the National Academy of Sciences, surveyed 3,890 students whose campus was locked down after a shooter fired on people. Since it’s difficult, if not impossible, to begin a scientific study during a life-threatening disaster or crisis, the researchers asked students about their experience a week after the incident and analyzed five hours of Twitter data about the shooting. (Details about what happened were anonymized at the university’s request.) 

“If random people you don’t know are tweeting information that seems really scary — and, in particular, if you’re in a lockdown and someone is tweeting about multiple shooters — that’s anxiety-provoking,” says Nickolas M. Jones, the study’s lead author and a doctoral candidate at the University of California, Irvine. 

While nearly everyone said they turned to officials like school authorities and the police, some people reported seeking more information from other sources, including social media, family, and friends. The researchers found that the people who most sought and believed updates from loved ones and social media encountered the most misinformation. They also said they felt more anxiety; heavy social media users who trusted online information, in particular, felt extreme stress. People who relied more on traditional media sources like radio and television didn’t have the same experience.

Jones says that people might turn to social media to feel more control in the midst of a crisis, especially if authorities aren’t sharing regular updates. But that sense of control just might be an illusion if someone instead sees rumors and conflicting information and feels more anxious as a result. 

“You’re going to feel something no matter what because you’re a human being,” says Jones. “Where you go from there to mitigate anxiety is what really matters.”

In other words, it’s perfectly normal to seek information from any available source and to have an emotional response to rapidly unfolding events. But people who feel helpless during a crisis may be primed to see patterns where none exist, making rumors and misinformation particularly dangerous. Their ability to process and scrutinize information may also be diminished. 

While Jones and his co-authors only surveyed those affected first-hand by the lockdown, he believes the public might experience a similar dynamic during crises. Think, for example, of the last time you scrolled through social media during a disaster and tried to sort through confusing accounts and rumors. It’s probably not that hard to recall a sense of creeping anxiety. 

Part of the broader problem is that the public now seems to expect fast and frequent updates thanks to the speed of social media, but authorities often still operate with tremendous caution. In the campus shooter case, 90 minutes elapsed between two official updates from the police. During the entire incident, Jones and his co-authors found that a handful of false rumors were retweeted hundreds of times, including information about multiple shooters and what they were wearing. 
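
For a concrete sense of what that kind of tally might look like, here is a minimal sketch in Python, assuming a hand-labeled set of tweets; the rows, labels, and counts below are invented for illustration and are not the study's actual data or pipeline:

```python
# Hypothetical sketch: given a window of tweets about an incident, each
# hand-labeled with the rumor it spreads (or None), sum how often each false
# claim was retweeted. All rows and numbers here are invented.
import pandas as pd

tweets = pd.DataFrame([
    {"rumor_label": "multiple_shooters", "retweet_count": 300},
    {"rumor_label": "shooter_clothing",  "retweet_count": 150},
    {"rumor_label": None,                "retweet_count": 900},  # not a rumor
])

# Total retweets per labeled rumor: a crude measure of each rumor's reach.
spread = (
    tweets.dropna(subset=["rumor_label"])
          .groupby("rumor_label")["retweet_count"]
          .sum()
)
print(spread)
```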

The study’s authors recommend that emergency management officials stay in regular contact with people. Even if they don’t have new information, they can still send messages that help alleviate anxiety and uncertainty by addressing the situation and reassuring the public. They should also monitor social media for rumors and “tackle them head on,” says Jones.

The Federal Emergency Management Agency, for example, compiled a list of debunked rumors regarding Hurricane Maria recovery efforts in Puerto Rico. The city of Santa Rosa and Sonoma County, both of which were devastated by fires in Northern California last week, posted tweets to address rumors. Efforts like these are crucial. It’s equally important to ensure people can actually access official websites, social media pages, and text message updates in the midst of a disaster. 

But the bottom line, says Jones, is learning to seek news carefully: “For anybody who’s turning to social media to get critical updates during a crisis, I think they just need to be skeptical about some of the information they’re seeing from unofficial sources.” 

Australia launches a world-first national reporting tool for revenge porn

Reporting and removing revenge porn, a.k.a. image-based abuse, can be arduous, both emotionally and in the number of steps required to get it done.

Australia’s government is aiming to make the process simpler, with the launch of a national portal for reporting instances of image-based abuse.

The portal will allow victims to report revenge porn online, and provide immediate access to support that had previously been unavailable, according to a statement. A pilot phase will examine the complexity and volume of the reports before the portal officially launches early next year.

The Australian government has pledged A$4.8 million (US$3.84 million) for the portal’s development, as part of a A$10 million (US$8 million) plan to tackle image-based abuse. 

State governments around Australia have been moving quickly to criminalise the non-consensual distribution of intimate images. 

New South Wales and the Australian Capital Territory recently introduced laws, catching up with Victoria, South Australia and Western Australia, which already have revenge porn legislation. 

Queensland, the Northern Territory and Tasmania have yet to pass specific laws on revenge porn, and there is no legislation at the federal level. However, the Australian government is looking closely at specific penalties for image-based abuse.

One in five Australians are victims of image-based abuse, according to a survey by RMIT earlier this year. The figure is more drastic for Indigenous Australians (one in two), people with a disability (one in two), and LGBTQ Australians (one in three).

While a necessary measure, laws and portals can only be reactive to abuse. 

The onus is really on creepers to stop sharing images, but also on the platforms that facilitate distribution — like Facebook, which introduced photo-matching tech to combat revenge porn earlier this year.

[h/t Gizmodo]

If you want to grow your business, start by increasing your social media following

You’ve seen the job listings: “we’re looking for a social media rockstar…” 

Social media is driving huge amounts of revenue for companies across industries, and the job market is reflecting that value. This Social Media Rockstar Bundle is one way to learn the skills that check recruiters’ boxes, so you can be that rockstar and land an awesome job in social media.

Here’s a breakdown of the included courses: 

The Ultimate YouTube Diva Course: Get Paid to Make Videos

Everyone and their mother seems to have a YouTube channel, but how many of them are actually successful? This six-hour course will teach you how to seamlessly upload videos made with professional equipment, gain followers using SEO tactics and Google AdWords, and create that tutorial you’ve been dying to make.

Learn the Secrets of Facebook Marketing Pros

Facebook is no longer just for reading bad political takes from your high school friends. This course will teach you how to get people to engage with your marketing page, and how to create posts with impact. 

Your A-Z Guide to Making Cash on Social Media

You have a Facebook, Twitter, Snapchat, Instagram, Pinterest, LinkedIn, Google+ and YouTube account. But what does it get you? With this guide, you’ll be able to build a brand that gets you actual followers and start making money off your content instead of just sending snaps of your desk lunch to mom.

The Complete Twitter Marketing Bootcamp 2017

This boot camp’s 31 lectures will teach you everything you need to know about attracting Twitter followers, scheduling tweets, implementing Twitter ads, and generating business leads. 

The Complete Instagram Marketing 2017 Training

Like the Twitter boot camp, this Instagram training guide will teach you how to stick to a daily posting schedule with a marketing game plan so you can attract and build relationships with followers. You’ll also learn about optimizing Instagram ads.

How To Use Snapchat For Marketing In 2017

These 32 lectures on Snapchat marketing will make you a pro in no time. Sure, you know the basics: filters, geotags, bitmoji… but this course will teach you how to build a following, integrate Snapchat with other social media platforms, and measure your success.

The Guide to Pinterest Marketing

Pinterest is no longer just for crafty DIYers. Learn how to use Pinterest as a marketing channel and get more than 1,000 repins on a single post. These lessons will teach you how to maximize your results from sponsored ads and drive engagement with pins.

Buying all of the courses in the Social Media Rockstar Bundle separately would set you back $1,387, but right now they’re available as a bundle for just $29. Mashable readers can also take an additional 50 percent off by using coupon code BUNDLE50 at checkout, bringing the total down to $14.50.

Here’s how to kick nazis off your Twitter right now


While you wait for Twitter to roll out “more aggressive” rules regarding hate speech, which CEO Jack Dorsey promised are coming within “weeks” as of late Friday, here’s a quick workaround to kick nazis off your Twitter feed right now: Go to the ‘Settings and privacy’ page and, under the ‘Content’ section, set the country to Germany (or France).

This switches on Twitter’s per-country nazi-blocking filter, which the company built all the way back in 2012 to comply with specific European hate speech laws that prohibit pro-Nazi content because, y’know, World War II.

Switching the country in your Twitter settings doesn’t change the language, just the legal jurisdiction. So, basically, you get the same Twitter experience, just without so many of the swastika-wielding nazis.
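
Under the hood this relies on Twitter’s Country Withheld Content mechanism: tweets withheld in a jurisdiction carry a withheld_in_countries field in the public v1.1 tweet object. Here is a minimal sketch in Python of checking that field; the sample payload is invented, while the field name and the special “XX” code (withheld everywhere) are documented by Twitter:

```python
# Minimal sketch of how per-country withholding surfaces in Twitter's v1.1
# tweet JSON. The payload below is invented; "withheld_in_countries" and the
# special value "XX" (withheld in all countries) are documented by Twitter.

def is_withheld_for(tweet: dict, country_code: str) -> bool:
    """Return True if this tweet is hidden for users in the given country."""
    withheld = tweet.get("withheld_in_countries", [])
    return "XX" in withheld or country_code.upper() in withheld

tweet = {"id_str": "123", "withheld_in_countries": ["DE", "FR"]}
print(is_withheld_for(tweet, "de"))  # True: hidden in Germany
print(is_withheld_for(tweet, "us"))  # False: still visible elsewhere
```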

In Germany, incitement to hatred is deemed a criminal offense against public order, and nazi imagery, expressions of antisemitism, Holocaust denial and so on are effectively banned in the country.

Free speech is protected in the German constitution but the line is drawn at outlawed speech — which, as programmer and blogger Kevin Marks has noted, is actually a result of the post-war political settlement applied by the triumphant allied forces — led by, er, the U.S…

In a further irony, Twitter’s nazi-blocking filter gained viral attention on Twitter last week when a Twitter user creatively couched it as “One Weird Trick to get nazi imagery off Twitter”. At the time of writing, her tweet has racked up 16,000 likes and 6,600 retweets.

Dorsey’s pledge of effective action against hate tweets followed yet another storm of criticism about how Twitter continues to enable harassment and abuse via its platform. Which in turn led to a spontaneous 24-hour boycott on Friday, just before Dorsey tweetstormed to say the company would be rolling out new rules around “unwanted sexual advances, non-consensual nudity, hate symbols, violent groups, and tweets that glorifies [sic] violence”.

(i.e. the stuff women and other victims of online harassment have been telling Twitter to do for years and years.)

Yet in 2012, when Twitter announced the rollout of per-country content blocking, it was absolutely sticking to its free speech guns that the “tweets still must flow” — i.e. even nazi hate speech tweets, just in all other markets where this kind of hateful content is not literally illegal.

Indeed, Twitter said then that its rationale for developing per-country blocking was to minimize the strictures on free speech across its entire platform. Meaning that censored content (such as nazi hate tweets) would only be blocked for the smallest possible number of Twitter users.

“Starting today, we give ourselves the ability to reactively withhold content from users in a specific country — while keeping it available in the rest of the world. We have also built in a way to communicate transparently to users when content is withheld, and why,” the company wrote in 2012, saying also that it would “evaluate each request [to withhold content] before taking any action”.

So Twitter’s nazi filter was certainly not designed to be pro-active about blocking hate speech — but merely to react to specific, verified legal complaints.

“One of our core values as a company is to defend and respect each user’s voice. We try to keep content up wherever and whenever we can, and we will be transparent with users when we can’t. The Tweets must continue to flow,” it wrote then.

“We’ve been working to reduce the scope of withholding, while increasing transparency, for a while,” it went on to say, explaining the timing of the move. “We have users all over the world and wanted to find a way to deal with requests in the least restrictive way.”

More than five years on from Twitter’s restated conviction that “tweets still must flow”, tech platforms are increasingly under attack for failing to take responsibility for pro-actively moderating content on their platforms across a wide range of issues, from abuse and hate speech; to extremist propaganda and other illegal content; to economically incentivized misinformation; to politically incentivized disinformation.

It’s fair to say that the political climate around online content has shifted as the usage and power of the platforms have grown, and as they have displaced and eroded the position of traditional media.

To the point where a phrase like “the tweets must flow” now carries the unmistakable whiff of effluent. Because social media is in the spotlight as a feeder of anti-social, anti-civic impacts, and public opinion about the societal benefits of these platforms appears to be skewing towards the negative.

So perhaps Twitter’s management really has finally arrived at the realization that if, as a content distribution platform, you allow hateful ideas to go unchallenged, your platform will become synonymous with the hateful content it is distributing. And it will be perceived, by large swathes of your user-base, as a hateful place to be, exactly because you are allowing and enabling abuse to take place under the banner of an ill-thought-through notion that the “tweets must flow”.

Yesterday Dorsey claimed Twitter has been working on trying to “counteract” the problem of voices of abuse victims being silenced on its platform for (he said) the past two years. So presumably that dates from about the time former CEO Dick Costolo sent that memo — admitting Twitter ‘sucks at dealing with abuse’.

Although that was actually February 2015. Ergo, more than two years ago. So the question of why it’s taken Twitter so very long to figure out that enabling abuse also really sucks as a business strategy is still in need of a definitive answer.

“We prioritized this in 2016. We updated our policies and increased the size of our teams. It wasn’t enough,” Dorsey tweeted on Friday. “In 2017 we made it our top priority and made a lot of progress.

“Today we saw voices silencing themselves and voices speaking out because we’re still not doing enough.”

He did not offer any deeper, structural explanation of why Twitter might be failing at dealing with abuse. Rather, he seems to be saying Twitter just hasn’t yet found the right ‘volume setting’ to give to the voices of victims of abuse — i.e. to fix the problem of their voices being drowned out by online abuse.

Which would basically be the ‘treat bad speech with more speech’ argument that really only makes sense if you’re already speaking from a position of privilege and/or power.

When in fact the key point that Twitter needs to grasp is that hate speech itself suppresses free speech. And that victims of abuse shouldn’t have to spend their time and energy trying to shout down their abusers. Indeed, they just won’t. They’ll leave your platform because it’s turned into a hateful place.

In a response to Dorsey’s tweet storm, Twitter user Eric Markowitz also pointed out that by providing verification status to prominent nazis, Twitter is effectively validating their hate speech — going on to suggest the company could “fairly simply develop better criteria around verifying people who espouse hate and genocide”.

Dorsey responded that: “We’re reconsidering our verification policies. Not as high a priority as enforcement, but it’s up there.”

“Enforcing according to our rules comes first. Will get to it as soon as we can, but we have limited resources and need to strictly prioritize,” he added.

At this point — with phrases like “limited resources” being dropped — I’d say you shouldn’t get your hopes up for a root-and-branch reformation of Twitter’s policy towards purveyors of hate. It’s entirely possible the company is just going to end up offering yet another set of ineffective anti-troll tools.

Thing is, having invited the hate-filled voices in, and allowed so many trolls to feel privileged to speak out, Twitter is faced with a philosophical U-turn in extricating its product from the unpleasantness its platform has become synonymous with.

And really, given its terrible extant record on dealing with abuse, it’s not at all clear whether the current management team is capable of the paradigm shift in perspective needed to tackle hate speech. Or whether we’ll just get another fudge and fiddle focused on preserving a definition of free speech that has, for so long, allowed hateful tweets to flow over and drown out other speech.

As I wrote this week, Twitter’s abuse problem is absolutely a failure of leadership. And we’re still seeing only on-the-back-foot responses from the CEO when users point out long-standing, structural problems with its approach.

This doesn’t bode well for Twitter being able to fix a crisis of its own flawed conviction.