
YouTube is closing its private messages feature…and many kids are outraged

People love to share YouTube videos among their friends, which is why in mid-2017 YouTube launched an in-app messaging feature that let users privately send videos to friends and chat within a dedicated tab in the YouTube mobile app. That feature is now being shut down, the company says. After September 18, the ability to direct message friends on YouTube itself will be removed.

The change was first spotted by 9to5Google, which noted that YouTube Messages came to the web in May of last year.

YouTube, in its announcement about the closure, doesn’t offer much insight into its decision.

While the company says that its more recent work has been focused on public conversations with updates to comments, posts, and Stories, it doesn’t explain why Messages is no longer a priority.

A likely reason, of course, is that the feature was under-utilized. Most people today are heavily invested in their own preferred messaging apps — whether that’s Messenger, WhatsApp, WeChat, iMessage or others.

Google, meanwhile, can’t seem to stop itself from building messaging apps and experiences. When YouTube Messages launched, Google was also invested in Allo (RIP), Duo, Hangouts, Meet, Google Voice, Android Messages/RCS, and was poised to transition users from Gchat (aka Google Talk) in Gmail to Hangouts Chat.

However, based on the nearly 500 angry comments replying to Google’s post about the closure, it seems that YouTube Messages may have been preferred by younger users.

Younger…as in children.


A sizable number of commenters are complaining that YouTube was the “only place” they could message their friends because they didn’t have a phone or weren’t allowed to give out their phone number.

Some said they used the feature to “talk to their mom” or because they weren’t allowed to use social media.


It appears that many children had been using YouTube Messages as a sort of workaround to their parents’ block on messaging apps on their own phones, or as a way to communicate from their tablets or via the web, likely without their parents’ knowledge.

That’s not a good look for YouTube at this time, given its issues around inappropriate videos aimed at children, child exploitation, child predators, and regulatory issues.

The video platform in February came under fire for putting kids at risk of child predators. The company had to shut off comments on videos featuring minors, after the discovery of a pedophile ring that had been communicating via YouTube’s comments section.

Notably, the FTC is also now following up on complaints about YouTube’s possible violations of COPPA, the U.S. children’s privacy law. Child advocacy and consumer groups complain that YouTube has lured children under 13 into its digital playground, where it then collects their data and targets them with ads, without parental consent.

Though some people may have used YouTube Messages to promote their channel or to share videos with family members and friends, it’s clear this usage hadn’t gone mainstream. Otherwise, YouTube wouldn’t be walking away from a popular product.

The feature also had issues with spam — much like Google+ did — as unwelcome requests from strangers would arrive at times.

YouTube says users will still be able to share videos through the “Share” feature, which connects to other social networks.

Even a Republican study can’t confirm anti-conservative bias on Facebook

Is Facebook biased against conservatives? An independent review led by former Sen. Jon Kyl set out to answer that question last year. 

Now, the results are in. The answer? Inconclusive. But the methodology behind the “audit” is highly dubious.

On Tuesday, the long-awaited report was released, along with a Wall Street Journal op-ed by the former Arizona GOP senator.

“Facebook has recognized the importance of our assessment and has taken some steps to address the concerns we uncovered,” Kyl writes in the report. “But there is still significant work to be done to satisfy the concerns we heard from conservatives.”

The audit was voluntarily arranged by Facebook. According to Kyl, his team at Covington & Burling was given complete independence in conducting the review and reaching their conclusion.

However, the report doesn’t present any data at all. The methodology appears to be that Kyl and his team simply interviewed 133 conservative individuals and organizations and summarized their opinions. None of them are named. 

“In order to encourage the most candid responses possible in our interviews, we agreed to keep the names confidential, and I believe that policy helped a lot,” Kyl told Mashable in an email. “What I can tell you is that almost every prominent conservative organization and many individuals with experience using Facebook were interviewed, and, based on the results, I believe we got a good representative sample of conservative opinion.”


Conservatives in the report expressed concern over everything from Facebook’s algorithm change — which they accuse of preferring liberal news outlets — to the political beliefs of Facebook employees.

One particularly interesting section contends that conservatives are upset with Facebook for having hate speech policies at all. 

Hate speech, specifically relating to white nationalism, has long been a problem on Facebook. Earlier this year, a white supremacist successfully livestreamed his mass shooting at two Christchurch, New Zealand mosques, which left 51 people dead. Internationally, the UN has even linked hate speech on the platform to the genocide of Rohingya Muslims in Myanmar.

“Interviewees’ concerns stemmed both from the notion of having a ‘hate speech’ policy in the first place and from unfair labeling of certain speech as ‘hate speech,'” says the report. “Interviewees often pointed out the highly subjective nature of determining what constitutes ‘hate’—an assessment that may be subject to the biases of content reviewers.”

Kyl’s report, as well as a post by Facebook VP of Global Affairs and Communications Nick Clegg, detailed the company’s response to these findings. In most cases, Facebook had already addressed the concerns in the report with earlier policy changes. 

One specific change to come out of the audit: Facebook will reverse a previous ad policy and now allow images of medical tubes connected to the human body in ads. The report explains that anti-abortion groups were having their advertisements rejected due to this rule.

Last year, Facebook agreed to two separate legal audits in order to look into mounting allegations of bias against conservatives and minority groups. The latter audit was led by Laura Murphy, a civil rights leader formerly at the ACLU. 

The civil rights audit looked into issues ranging from voter suppression caused by misinformation to white supremacy and hate content on the social media site. This audit has since produced two reports — one in December of last year detailing recommendations for Facebook, and a progress report this June. The third and final report is scheduled to be released in the first half of 2020.

It appears that the conservative bias audit is behind schedule, as the first report has taken more than a year to be issued. Kyl’s appointment to the U.S. Senate as John McCain’s replacement in 2018 was very likely a contributing factor.

But, in the end, the final results of Kyl’s audit might not matter. It’s clear that, for many bad actors in the conservative movement, the allegation of bias is a political weapon they’re more than willing to wield — regardless of the facts.


Twitter to test a new filter for spam and abuse in the Direct Message inbox

Twitter is testing a new way to filter unwanted messages from your Direct Message inbox. Today, Twitter allows users to set their Direct Message inbox as open to receiving messages from anyone, but this can invite a lot of unwanted messages, including abuse. While one solution is to adjust your settings so only those you follow can send you private messages, that doesn’t work for everyone. Some people — like reporters, for example — want to have an open inbox in order to have private conversations and receive tips.

This new experiment will test a filter that will move unwanted messages, including those with offensive content or spam, to a separate tab.

Instead of lumping all your messages into a single view, the Message Requests section will include the messages from people you don’t follow, and below that, you’ll find a way to access these newly filtered messages.

Users would have to click on the “Show” button to even read these, which protects them from having to face the stream of unwanted content that can pour in at times when the inbox is left open.

And even upon viewing this list of filtered messages, the content itself isn’t immediately visible.

If Twitter identifies content that’s potentially offensive, the message preview will say the message is hidden because it may contain offensive content. That way, users can decide if they want to open the message itself or just click the delete button to trash it.

The change could allow Direct Messages to become a more useful tool for those who prefer an open inbox, as well as an additional means of clamping down on online abuse.

It’s also similar to how Facebook Messenger handles requests — those from people you aren’t friends with are relocated to a separate Message Requests area. And those that are spammy or more questionable are in a hard-to-find Filtered section below that.

It’s not clear why a feature like this really requires a “test,” however — arguably, most people would want junk and abuse filtered out. And those who for some reason did not could simply toggle a setting to turn the filter off.

Instead, this feels like another example of Twitter’s slow pace when it comes to making changes to clamp down on abuse. Facebook Messenger has been filtering messages in this way since late 2017. Twitter should just launch a change like this, instead of “testing” it.

The idea of hiding — instead of entirely deleting — unwanted content is something Twitter has been testing in other areas, too. Last month, for example, it began piloting a new “Hide Replies” feature in Canada, which allows users to hide unwanted replies to their tweets so they’re not visible to everyone. The tweets aren’t deleted, but rather placed behind an extra click — similar to this Direct Message change.

Twitter is updating its Direct Message system in other ways, too.

At a press conference this week, Twitter announced several changes coming to its platform, including a way to follow topics, plus a search tool for the Direct Message inbox, as well as support for iOS Live Photos as GIFs, the ability to reorder photos, and more.

US legislator David Cicilline joins international push to interrogate platform power

US legislator David Cicilline will be joining the next meeting of the International Grand Committee on Disinformation and ‘Fake News’, it has been announced. The meeting will be held in Dublin on November 7.

Chair of the committee, the Irish Fine Gael politician Hildegarde Naughton, announced Cicilline’s inclusion today.

The congressman — who is chairman of the US House Judiciary Committee’s Antitrust, Commercial, and Administrative Law Subcommittee — will attend as an “ex officio member” which will allow him to question witnesses, she added.

Exactly who will appear as witnesses before the grand committee is still to be confirmed. But the inclusion of a US legislator in the ranks of a non-US committee that’s been seeking answers about reining in online disinformation will certainly make any invitations that get extended to senior executives at US-based tech giants much harder to ignore.

Naughton points out that the addition of American legislators also means the International Grand Committee represents ~730 million citizens — and “their right to online privacy and security”.

“The Dublin meeting will be really significant in that it will be the first time that US legislators will participate,” she said in a statement. “As all the major social media/tech giants were founded and are headquartered in the United States it is very welcome that Congressman Cicilline has agreed to participate. His own Committee is presently conducting investigations into Facebook, Google, Amazon and Apple and so his attendance will greatly enhance our deliberations.”

“Greater regulation of social media and tech giants is fast becoming a priority for many countries throughout the world,” she added. “The International Grand Committee is a gathering of international parliamentarians who have a particular responsibility in this area. We will coordinate actions to tackle online election interference, ‘fake news’, and harmful online communications, amongst other issues while at the same time respecting freedom of speech.”

The international committee met for its first session in London last November — when it was forced to empty-chair Facebook founder Mark Zuckerberg who had declined to attend in person, sending UK policy VP Richard Allan in his stead.

Lawmakers from nine countries spent several hours taking Allan to task over Facebook’s lack of accountability for problems generated by the content it distributes and amplifies, raising myriad examples of ongoing failure to tackle the democracy-denting, society-damaging disinformation — from election interference to hate speech whipping up genocide.

A second meeting of the grand committee was held earlier this year in Canada — taking place over three days in May.

Again Zuckerberg failed to show. Facebook COO Sheryl Sandberg also gave international legislators zero facetime, with the company opting to send local head of policy, Kevin Chan, and global head of policy, Neil Potts, as stand-ins.

Lawmakers were not amused. Canadian MPs voted to serve Zuckerberg and Sandberg with an open summons — meaning they’ll be required to appear before the committee the next time they set foot in the country.

Parliamentarians in the UK also issued a summons for Zuckerberg last year after repeat snubs to testify to the Digital, Culture, Media and Sport committee’s enquiry into fake news — a decision that essentially gave birth to the international grand committee, as legislators in multiple jurisdictions united around a common cause of trying to find ways to hold social media giants to account.

While it’s not clear who the grand committee will invite to the next session, Facebook’s founder seems highly unlikely to have dropped off their list. And this time Zuckerberg and Sandberg may find it harder to turn down an invite to Dublin, given the committee’s ranks will include a homegrown lawmaker.

In a statement on joining the next meeting, Cicilline said: “We are living in a critical moment for privacy rights and competition online, both in the United States and around the world. As people become increasingly connected by what seem to be free technology platforms, many remain unaware of the costs they are actually paying.

“The Internet has also become concentrated, less open, and increasingly hostile to innovation. This is a problem that transcends borders, and it requires multinational cooperation to craft solutions that foster competition and safeguard privacy online. I look forward to joining the International Grand Committee as part of its historic effort to identify problems in digital markets and chart a path forward that leads to a better online experience for everyone.”

Multiple tech giants (including Facebook) have their international headquarters in Ireland — making the committee’s choice of location for its next meeting a strategic one. Should any tech CEOs thus choose to snub an invite to testify to the committee, they might find themselves being served with an open summons to testify by Irish parliamentarians — and not being able to set foot in a country where their international HQ is located would be more than a reputational irritant.

Ireland’s privacy regulator is also sitting on a stack of open investigations into tech giants — again with Facebook and Facebook-owned companies producing the fattest file (some 11 investigations). But there are plenty of privacy and security concerns to go around, with the DPC’s current case file also touching tech giants including Apple, Google, LinkedIn and Twitter.

Instagram says growth hackers are behind spate of fake Stories views

If you use Instagram and have noticed a bunch of strangers watching your Stories in recent months — accounts that don’t follow you and seem to be Russian — well, you’re not alone.

Nor are you being primed for a Russian disinformation campaign. At least, probably not. But you’re right to smell a fake.

TechCrunch’s very own director of events, Leslie Hitchcock, flagged the issue to us — complaining of “eerie” views on her Instagram Stories in the last couple of months from random Russian accounts, some seemingly genuine (such as artists with several thousand followers) and others simply “weird” looking.

A thread on Reddit also poses the existential question: “Why do Russian Models (that don’t follow me) keep watching my Instagram stories?” (The answer to which is: Not for the reason you hope.)

Instagram told us it is aware of the issue and is working on a fix.

It also said this inauthentic activity is not related to misinformation campaigns but is rather a new growth hacking tactic — one that involves accounts paying third parties to try to boost their profiles via fake likes, followers and comments (in this case by watching the Instagram Stories of people they have no real interest in, in the hope that doing so will help them pass as real and net them more followers).

Eerie is spot on. Some of these growth hackers probably have banks of phones set up where Instagram Stories are ‘watched’ without being watched. (Which obviously isn’t going to please any advertisers paying to inject ads into Stories… )

A UK social media agency called Hydrogen also noticed the issue back in June — blogging then that: “Mass viewing of Instagram Stories is the new buying followers of 2019”, i.e. as a consequence of the Facebook-owned social network cracking down on bots and paid-for followers on the platform.

So, tl;dr, squashing fakes is a perpetual game of whack-a-mole. Let’s call it Zuckerberg’s bane.

“Our research has found that several small social media agencies are using this as a technique to seem like they are interacting with the public,” Hydrogen also wrote, before going on to offer sage advice that: “This is not a good way to build a community, and we believe that Instagram will begin cracking down on this soon.”

Instagram confirmed to us it is attempting to crack down — saying it’s working to try to get rid of this latest eyeball-faking flavor of inauthentic activity. (We paraphrase.)

It also said that, in the coming months, it will introduce new measures to reduce such activity — specifically from Stories — but without saying exactly what these will be.

We also asked about the Russian element but Instagram was unable to provide any intelligence on why a big proportion of the fake Stories views seem to be coming from Russia (without any love). So that remains a bit of a mystery.

What can you do right now to prevent your Instagram Stories from being repurposed as a virtue-less signalling machine for sucking up naive eyeballs?

Switching your profile to private is the only way to thwart the growth hackers, for now.

Albeit, that means you’re limiting who you can reach on the Instagram platform as well as who can reach you.

When we suggested to Hitchcock she switch her account to private she responded with a shrug, saying: “I like to engage with brands.”