UN says Facebook is accelerating ethnic violence in Myanmar

The United Nations has warned that Facebook’s platform is contributing to the spread of hate speech and ethnic violence in crisis-hit Myanmar.

It’s yet another black mark against social media, at a time when the tech industry is attracting criticism from the highest places for accelerating the spread of false information.

This week the government of Sri Lanka also sought to block access to Facebook and two of its other social services, WhatsApp and Instagram, in an attempt to stem mob violence against the country’s Muslim minority — citing inflammatory social media posts.

“These platforms are banned because they were spreading hate speeches and amplifying them,” a government spokesman told the New York Times.

India has also struggled for years with false information spread via social media platforms like WhatsApp, which has triggered riots and communal violence, and even led to deaths.

While humans telling lies is nothing new, the speed at which misinformation and disinformation can now spread, thanks to digitally networked communities linked on social media, is.

Moderating that risk is the challenge big tech platforms stand accused of failing to meet.

UN human rights experts investigating a possible genocide in Rakhine state warned yesterday that Facebook’s platform is being used by ultra-nationalist Buddhists to incite violence and hatred against the Rohingya and other ethnic minorities.

A security crackdown in the country last summer led to around 650,000 Rohingya Muslims fleeing into neighboring Bangladesh. Since then there have been multiple reports of state-led violence against the refugees, and the UN has been leading a fact-finding mission in the country.

Yesterday the chairman of the mission, Marzuki Darusman, told reporters that the social media platform had played a “determining role” in Myanmar’s crisis (via Reuters).

Darusman said Facebook has “substantively contributed to the level of acrimony and dissension and conflict” within the public sphere. “Hate speech is certainly of course a part of that,” he continued, adding: “As far as the Myanmar situation is concerned, social media is Facebook, and Facebook is social media.”

In Myanmar, Ashin Wirathu, an ultranationalist Buddhist monk who preaches hate against the Rohingya, has been able to build up large followings on social media — using Facebook to spread divisive and hate-fueling messages.

Speaking to reporters yesterday, UN investigator Yanghee Lee described Facebook as a huge part of public, civil and private life in Myanmar, noting it is used by the government to disseminate information to the public.

However, she also flagged how the platform has been appropriated by ultra-nationalist elements to spread hate against minorities.

In the case of Wirathu, Facebook has sometimes removed or restricted his pages — but does not appear to have done enough.

“Everything is done through Facebook in Myanmar,” said Lee. “It was used to convey public messages but we know that the ultra-nationalist Buddhists have their own Facebooks and are really inciting a lot of violence and a lot of hatred against the Rohingya or other ethnic minorities.”

“I’m afraid that Facebook has now turned into a beast, and not what it originally intended,” she added.

We reached out to the company with questions but at the time of writing Facebook had not responded.

For years Myanmar’s military dictatorship entirely controlled and censored the press, but in 2011 it began what was billed as a gradual democratic transition — which included opening up to new media services such as Facebook. And the platform essentially went from a standing start to become the most important information source in Myanmar in a handful of years.

Local Facebook users are now thought to number over 30 million.

But as uptake ballooned, human rights groups sounded alarms over how Facebook was being used to spread hate speech and stoke ethnic violence.

Last year New York Times reporter Paul Mozur also warned that government Facebook channels were being used to spread anti-Rohingya propaganda — implying the platform has also been appropriated as a citizen-control tool by a state seeding its own propaganda.

And while states maliciously misappropriating social media to foster hate against their own citizens may not be a problem in every country where the tech industry operates, social media platforms amplifying hate speech is certainly a universal concern — from Asia, to Europe, to America.

Featured Image: Nur Photo/Getty Images

UN officials blast Facebook over spread of Rohingya hate speech

Facebook has long been criticised for its role in the Rohingya crisis, an assessment now underscored by comments by United Nations investigators.

Marzuki Darusman, chairman of the UN Independent International Fact-Finding Mission in Myanmar, told reporters that social media had a “determining role” in spreading hate speech in the country, according to Reuters.

“It has … substantively contributed to the level of acrimony and dissension and conflict, if you will, within the public. Hate speech is certainly of course a part of that. As far as the Myanmar situation is concerned, social media is Facebook, and Facebook is social media,” Darusman said.

Escalating violence has forced more than 650,000 Rohingya Muslims to flee across the border to Bangladesh, in what the UN’s human rights chief has described as “a textbook example of ethnic cleansing.”

Facebook is a major news source for people in Myanmar, where it has been used as a platform to stir up public outrage against the Rohingya.

“It was used to convey public messages but we know that the ultra-nationalist Buddhists have their own Facebooks and are really inciting a lot of violence and a lot of hatred against the Rohingya or other ethnic minorities,” said UN Myanmar investigator Yanghee Lee, as reported by Reuters.

“I’m afraid that Facebook has now turned into a beast, and not what it originally intended.”

A Facebook spokesperson told Mashable it has “clear rules” against hate speech and the incitement of violence, and that the company works hard to keep it off the platform.

“We work with local communities and NGOs to increase awareness of our policies and reporting process, and are always looking for ways to improve people’s experience on Facebook,” the spokesperson said. 

“In Myanmar, we introduced localized, translated versions of our Community Standards and have a dedicated safety Page, which we work with our partners to promote. We also created Panzagar stickers to help promote positive speech online. 

“Learning from experts on-the-ground, we will continue to refine the way in which we implement and promote awareness of our policies to ensure that our community is safe, especially people who may be vulnerable or under attack.”

Platform power is crushing the web, warns Berners-Lee

On the 29th birthday of the world wide web, its inventor, Sir Tim Berners-Lee, has sounded a fresh warning about threats to the web as a force for good, adding his voice to growing concerns about big tech’s impact on competition and society.

The web’s creator argues that the “powerful weight of a few dominant” tech platforms is having a deleterious impact by concentrating power in the hands of gatekeepers that gain “control over which ideas and opinions are seen and shared”.

His suggested fix is socially minded regulation, so he’s also lending his clout to calls for big tech to be ruled.

“These dominant platforms are able to lock in their position by creating barriers for competitors,” Berners-Lee writes in an open letter published today on the Web Foundation’s website. “They acquire startup challengers, buy up new innovations and hire the industry’s top talent. Add to this the competitive advantage that their user data gives them and we can expect the next 20 years to be far less innovative than the last.”

The concentration of power in the hands of a few mega platforms is also the source of the current fake news crisis, in Berners-Lee’s view, because he says platform power has made it possible for people to “weaponise the web at scale” — echoing comments made by the UK prime minister last year when she called out Russia for planting fakes online to try to disrupt elections.

“In recent years, we’ve seen conspiracy theories trend on social media platforms, fake Twitter and Facebook accounts stoke social tensions, external actors interfere in elections, and criminals steal troves of personal data,” he writes, pointing out that the current response of lawmakers has been to look “to the platforms themselves for answers” — which he argues is neither fair nor likely to be effective.

In the EU, for example, the threat of future regulation is being used to encourage social media companies to sign up to a voluntary code of conduct aimed at speeding up takedowns of various types of illegal content, including terrorist propaganda. Though the Commission is also seeking to drive action against a much broader set of online content issues — such as hate speech, commercial scams and even copyrighted material.

Critics argue its approach risks chilling free expression via AI-powered censorship.

Some EU member states have gone further too. Germany now has a law with big fines for social media platforms that fail to comply with hate speech takedown requirements, for example, while in the UK ministers are toying with new rules, such as placing limits on screen time for children and teens.

Both the Commission and some EU member states have been pushing for increased automation of content moderation online. In the UK last month, ministers unveiled an extremism blocking tool which the government had paid a local AI company to develop, with the Home Secretary warning she had not ruled out forcing companies to use it.

Meanwhile, in the US, Facebook has faced huge pressure in recent years as awareness has grown of how extensively its platform is used to spread false information, including during the 2016 presidential election.

The company has announced a series of measures aimed at combating the spread of fake news generally, and reducing the risk of election disinformation specifically — as well as a major recent change to its news feed algorithm ostensibly to encourage users towards having more positive interactions on its platform.

But Berners-Lee argues that letting commercial entities pull levers to try to fix such a wide-ranging problem is a bad idea — any fixes companies come up with will inevitably be constrained by their profit-maximizing context, and they amount to yet another unilateral imposition on users.

A better solution, in his view, is not to let tech platform giants self-regulate but to create a framework for ruling them that factors in “social objectives”.

A year ago Berners-Lee warned about the same core threats to the web, though his thinking then was less settled on regulation as the solution — instead he flagged up a variety of initiatives aimed at combating threats such as the systematic background harvesting of personal data. So he seems to be shifting towards the need for a more overarching framework to control the tech that’s being used to control us.

“Companies are aware of the problems and are making efforts to fix them — with each change they make affecting millions of people,” he writes now. “The responsibility — and sometimes burden — of making these decisions falls on companies that have been built to maximise profit more than to maximise social good. A legal or regulatory framework that accounts for social objectives may help ease those tensions.”

Berners-Lee’s letter also emphasizes the need for diversity of thought in shaping any web regulations to ensure rules don’t get skewed towards a certain interest or group. And he makes a strong call for investments to help close the global digital divide.

“The future of the web isn’t just about those of us who are online today, but also those yet to connect,” he warns. “Today’s powerful digital economy calls for strong standards that balance the interests of both companies and online citizens. This means thinking about how we align the incentives of the tech sector with those of users and society at large, and consulting a diverse cross-section of society in the process.”

Another specific call he makes is for fresh thinking about Internet business models, arguing that online advertising should not be accepted as the only possible route for sustaining web platforms. “We need to be a little more creative,” he argues.

“While the problems facing the web are complex and large, I think we should see them as bugs: problems with existing code and software systems that have been created by people — and can be fixed by people. Create a new set of incentives and changes in the code will follow. We can design a web that creates a constructive and supportive environment,” he adds.

“Today, I want to challenge us all to have greater ambitions for the web. I want the web to reflect our hopes and fulfil our dreams, rather than magnify our fears and deepen our divisions.”

At the time of writing Amazon, Facebook, Google and Twitter had not responded to a request for comment.

Featured Image: Southbank Centre/Flickr, under a CC BY 2.0 license

Twitter reportedly suspended users that steal memes and force viral tweets

Image: roy scott/Getty Images/Ikon Images

Friday went poorly for a select group of Twitter users who have earned a reputation for their expertise at gaming the system.

The social media company moved to suspend a number of popular accounts with millions of followers between them, Buzzfeed reports. Their offense? Stealing people’s tweets without credit and conspiring as a group to share tweets — their own, and those of paying customers — with the intent of forcing them to go viral.

Many of the suspended accounts — a list that includes @Dory, @GirlPosts, @SoDamnTrue, @reiatabie, @commonwhitegiri, @teenagernotes, @finah, @holyfag, and @memeprovider — are known as “tweetdeckers.” These users are so named because they gather in private Tweetdeck groups to plot out their plans to manufacture virality (a practice that Buzzfeed has documented extensively).

This sort of behavior goes against Twitter’s rules, which clearly state: “You may not use Twitter’s services for the purpose of spamming anyone.” The platform’s spam policy covers many different types of bad behavior, including the posting of “duplicative or substantially similar content, replies, or mentions over multiple accounts” or “[attempting] to artificially inflate account interactions.”

Tweetdeckers engage in both of those activities to make a post go viral, and some accept payment to perform the task for third-party interests — another Twitter no-no.
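To make those rules concrete, here is a minimal, purely illustrative sketch (not Twitter’s actual enforcement system; the account names and the similarity threshold are invented for the example) of how a platform could flag “duplicative or substantially similar content” posted across accounts, by normalizing each tweet’s text and scoring pairs with Jaccard similarity:

```python
# Purely illustrative sketch: NOT Twitter's actual enforcement system.
# One simple way a platform could flag "duplicative or substantially
# similar content" across accounts is to normalize each tweet's text
# and compare token sets with Jaccard similarity.
import re
from itertools import combinations

def tokens(text):
    """Lowercase, strip URLs and @mentions, return the set of word tokens."""
    text = re.sub(r"https?://\S+|@\w+", "", text.lower())
    return set(re.findall(r"[a-z0-9']+", text))

def jaccard(a, b):
    """Similarity of two token sets: size of intersection over size of union."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def flag_near_duplicates(tweets, threshold=0.8):
    """Yield pairs of accounts whose tweets look near-identical.
    `tweets` is a list of (account, text) pairs; the 0.8 threshold is
    an arbitrary choice for this example."""
    tokenized = [(account, tokens(text)) for account, text in tweets]
    for (acct_a, toks_a), (acct_b, toks_b) in combinations(tokenized, 2):
        if acct_a != acct_b and jaccard(toks_a, toks_b) >= threshold:
            yield acct_a, acct_b

# Hypothetical accounts and tweets, invented for the demo.
sample = [
    ("@original_poster", "my dog just learned to open the fridge. we're doomed"),
    ("@meme_reposter", "My dog just learned to open the fridge... WE'RE DOOMED"),
    ("@unrelated_user", "so excited for the game tonight"),
]
for pair in flag_near_duplicates(sample):
    print("possible uncredited repost:", pair)
```

A production system would need far more signal (posting coordination, timing, account relationships, paraphrase detection), but similarity scoring over normalized text is the basic shape of the problem.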

Recently, the company has purged bots (though there are reasons it may not go further), tweaked rules, and banned face-swap videos, many of which fall under the category of pornography.

There are still plenty of problem areas to be addressed on Twitter, but Friday’s move to suspend known tweetdeckers is just one more action in a recent string of them. It’s all part of the company’s ongoing struggle to clean up its platform, a process that has also come to include looking for outside assistance.

Some hard truths about Twitter’s health crisis

It’s a testament to quite how control-freaky and hermetically sealed to criticism the tech industry is that Twitter’s CEO Jack Dorsey went unscripted in front of his own company’s livestreaming service this week, inviting users to lob awkward questions at him for the first time ever.

It’s also a testament to how much trouble social media is in. As I’ve written before, ‘fake news’ is an existential crisis for platforms whose business model requires them to fence vast quantities of unverified content uploaded by, at best, poorly verified users.

No content, no dice, as it were. But things get a whole lot more complicated when you have to consider what the content actually is; who wrote it; whether it’s genuine or not; and what its messaging might be doing to your users, to others and to society at large.

As a major MIT study looking at a decade’s worth of tweets — and also published this week — underlines: Information does not spread equally.

More specifically, fact-checked information that has been rated true seems to be less sharable than fact-checked information that has been rated false. Or to put it more plainly: Novel/outrageous content is more viral.

This is entirely unsurprising. As Jonathan Swift put it all the way back in the 1700s: “Falsehood flies, and the Truth comes limping after it.” New research, old truth.
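To see why a small edge in sharability can produce such unequal spread, consider a toy branching-process simulation. This is a back-of-the-envelope sketch, not the MIT study’s methodology, and the fanout, share probabilities and cap are all invented for illustration:

```python
# Toy branching-process model of tweet cascades. Purely illustrative:
# this is NOT the MIT study's methodology, and the fanout, share
# probabilities and cap below are invented for the example.
import random

def cascade_size(p_share, fanout=10, cap=10_000):
    """Simulate one cascade: each resharer exposes `fanout` followers,
    and each exposed follower reshares with probability `p_share`.
    Returns the total number of resharers, capped so near-critical
    cascades stay finite."""
    frontier = 1  # the original poster
    total = 1
    while frontier and total < cap:
        exposures = frontier * fanout
        new_frontier = sum(1 for _ in range(exposures) if random.random() < p_share)
        total += new_frontier
        frontier = new_frontier
    return min(total, cap)

random.seed(42)
TRIALS = 200
# A slightly higher reshare probability stands in for "more novel/outrageous".
for label, p in [("rated true ", 0.08), ("rated false", 0.10)]:
    avg = sum(cascade_size(p) for _ in range(TRIALS)) / TRIALS
    print(f"{label}: reshare probability {p:.2f} -> mean reach ~{avg:,.0f}")
```

At an average of 0.8 reshares per exposure the cascade fizzles out quickly; nudge that to 1.0 and occasional runaway cascades dominate the average. That, crudely, is the dynamic behind “falsehood flies.”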

What’s also true is that as social media’s major platforms have scaled, so too have the problems blasted through their megaphones zoomed into mainstream view.

Concerns have ballooned. We’re now at a structural level, debating societal fundamentals like cohesion, civility, democracy. Even, you could argue, confronting humanity itself. Platform as a term has always had a dehumanizing ring. Perhaps that’s their underlying truth too.

Dorsey says the “health” of conversations on his platform is now the company’s “number one priority” — more than a decade after he typed that vapid first tweet, “just setting up my twttr”, when he presumably had zero idea of all the horrible things humans would end up using his technology for.

But it’s also at least half a decade after warnings that trolls and bots were running rampant on Twitter’s platform.

Turns out the future comes at you eventually. Even if you stubbornly refuse to listen as alarm after alarm is sounded. “Never send to know for whom the bell tolls; it tolls for thee,” wrote John Donne, meditating on society and the individual, back in 1624.

A #280 assessment of what a buzzcut, bearded and careworn Dorsey now says he sees as Twitter’s main problem and thus priority boils down to something like this…

We know our platform is being used negatively, people are hurting and public conversation is being damaged. But we don’t know how to fix it because we don’t understand how to measure the individual and societal impacts of our technology. We think more tech can help. Pls help us.

What Twitter’s crisis tells us is that tech companies are terrible listeners. Although those of us outside the engineering room knew that already.

It’s hardly a surprise that techies suck at listening when they sit inside their hermetically sealed pods thinking it’s both their special gift and libertarian right to control levers that remotely affect other people’s lives while channelling the spice and dollars their way.

So it is a good sign, albeit horribly overdue, to see a nervous and contrite-seeming Dorsey stand in front of the firehose of user opinion — for 50 or so raw, unedited minutes.

Hopefully this performance — which he said would be repeated regularly, from here on in — signals an absolute conversion to reform. A realization that social media platforms can’t engineer around societal responsibility. That listening and understanding are absolutely their day job.

Head-in-the-sand-ism will catch up with you eventually. Just as playing fast and loose finally overtook Uber’s founder and landed his company in all sorts of legal hot water.

So how did Dorsey and select members of his safety ‘A-team’ do in their first ‘awkward questions’ Periscope?

Fair to middling, is my assessment. It’s clear they still don’t really know how to fix the mess they are in. Hence Twitter soliciting proposals from the public. But admitting they don’t know what to do and reaching out for help is a big and important step.

To put it colloquially, they’ve realized the shit they’re in. And the shit that’s at stake. Hashtag #changeforreal

Dorsey seemed visibly uncomfortable with the Periscope process, which again is testament to what a closed box Twitter has been as an operating shop. He hasn’t always been CEO, but he is a founder, so he’s absolutely on the hook for that.

And Twitter’s bunker mentality has clearly compounded its problems in identifying and responding to content issues that first flared on its platform and then raged. Unpicking that won’t be easy.

Indeed, he said several times that the changes he wants to happen “won’t happen overnight”. That changing Twitter will require a lot of work.

He also admitted the company has “a lot of historical divisions” and said it has not always been as collaborative as it could have been. tl;dr: inside Twitter there’s a bunch of other bunkers — which truly sounds like a culture nightmare.

So when he talked about the hard work coming I don’t think Dorsey just meant reengineering lots of systems and cranking out lots more user surveys. Because changing an ingrained culture and its processes is a beast. Which is why it’s much better to start from a place of enlightenment. But hey, silver lining, here Twitter finally, finally is, admitting it screwed up and wanting to start over.

At least it’s now saying it wants its product to have a holistic and healthy impact on the world. That it wants to try and reset the coarsening of public discourse that social media has wrought. Certainly it’s a more evolved mission statement than its previous one — which was basically: ‘Eat our free speech.’

That said, Dorsey’s focus on a new type of measurement — this idea of a ‘health metric’ — as the solution for toxic content seems to me problematic. Almost, you could say, like the trigger response of an engineer confronting an ethics textbook for the first time.

Because Twitter’s content problems really boil down to Twitter failing to enforce the community standards it already has. Which in turn is a failure of leadership, as I have previously argued.

A good current example is that it has an ads policy that bans “misleading and deceptive” ads. Yet it continues to accept advertising money from unregulated entities pushing dubiously obscure crypto exchanges and flogging wildly risky token sales.

Twitter really doesn’t need to wait for a new metric to understand that the right thing to do here is to take crypto/ICO ads off its platform right now.

Shucks, even Facebook has done this.

Yet Dorsey and his team omitted to mention ads when he was asked about crypto scams during the Periscope. They just talked about what they’re doing to tackle Twitter users trying to tweet-scam others into sending a bit of crypto.

Continuing to accept ad money attached to what’s still an essentially unregulated space, when there are so many visible and public concerns because scams really are part of the furniture, is indefensible. Banning these ads is both common sense and just the right thing to do.

And so if Twitter needs to wait for someone else to invent some kind of holistic wellness metric in order to make that low-hanging Satoshi drop then, well, its culture change is going to be much harder and much more painful than Dorsey imagines.

Obsession with measurement and the search for a universal problem-solving metric — to try to quantify the “health, openness and civility of public conversation”, as Twitter puts it — also looks very much like a strategy to buy time.

It may ultimately turn out to be misdirection too; an attempt to deflect blame and divert criticism via solutioneering.

By outsourcing a challenge, and seeking to co-opt the energy and ideas of third parties, Twitter is also reframing what’s broken in a way that starts to spread responsibility for the problems its platform is causing. (Maybe it’s taken a leaf out of Facebook’s playbook on that.)

Content moderation is certainly a hard problem if you understaff it. But if you employ enough machine-aided humans to properly enforce your community standards then it’s quite possible to shrink a toxic content problem.

Throw enough resources in and content problems can become vanishingly small, even insignificant. This is known as community management.

Yes, there are counter-risks. Especially if, like Twitter, you’ve historically advertised yourself as the free speech wing of the free speech party.

But if you’re having trouble drawing service red lines around, for example, known neo-Nazis, for whom hate speech and agitating for violence is a way of life, then setting out on a long and winding quest to deconstruct the anatomy of society in the hopes of eventually being able to build algorithms that do a better job of keeping toxic content off your platform, well, that probably isn’t the fundamental fix you should be searching for.

The problem right now is that Twitter doesn’t have the courage — or, heck, the imagination — to enforce its own community guidelines.

Though the hard truth may well be that it just cannot afford to. That the business model never did stack up. Not if you have to factor in the cost of staffing up to properly moderate all the shit that’s being uploaded and thrown about.

Meanwhile the costs of toxic, hate inciting messages blitzkrieging public conversation via the amplifying megaphone of social media keep on rising…

In his Periscope plea for help, Dorsey also said he wants Twitter to be “one of the most trusted services in the world”. But if he thinks he can build a for-all-technotopia where liberals co-exist peacefully alongside neo-Nazis — thanks to a shiny new set of augmented reality controls that fade view from counter view — he’s still thinking fatally inside the tech industry black box.

Social media has always bled offline. Its wounds, like its users, are human. Its shaping impacts are felt by people and across society.

Another old truth: You can’t please all of the people, all of the time. So if Dorsey thinks he can find a technology fix for that age-old challenge he’s going to waste a whole lot more money and a whole lot more time — while the rest of us bleed.

Featured Image: TechCrunch/Bryce Durbin