All posts in “digital media”

Apple ad focuses on iPhone’s most marketable feature — privacy

Apple is airing a new ad spot in primetime today. Focused on privacy, the spot is visually driven, with no dialog and a simple tagline: “Privacy. That’s iPhone.”

In a series of humorous vignettes, the spot drives home the message that sometimes you just want a little privacy. Its only other line of text is in keeping with Apple’s long-running messaging on privacy: “If privacy matters in your life, it should matter to the phone your life is on.”

The spot will air tonight in primetime in the U.S. and extend through March Madness. It will then air in select other countries.


You’d have to be hiding under a rock not to have noticed Apple positioning privacy as a differentiating factor between itself and other companies. A few years ago, CEO Tim Cook began taking more and more public stances on what the company felt to be your “rights” to privacy on its platform and how that differed from the approach of other companies. The undercurrent is that Apple can take this stance because its first-party business relies on a relatively direct relationship with customers, who purchase its hardware and, increasingly, its services.

This stands in contrast to the model of other tech giants like Google or Facebook, which insert a layer of monetization on top of that relationship: they apply personal information about you (in somewhat anonymized fashion) to sell their platforms to advertisers, which in turn can sell to you more effectively.

Turning the ethical high ground into a marketing strategy is not without its pitfalls, though, as Apple has discovered recently with a (now patched) high-profile FaceTime bug that allowed people to turn your phone into a listening device, Facebook’s manipulation of App Store permissions and the revelation that there was some long overdue house cleaning needed in its Enterprise Certificate program.

I did find it interesting that the iconography of the “Private Side” spot very, very closely associates the concepts of privacy and security. They are separate, but interrelated, obviously. This spot says these are one and the same. It’s hard to enforce privacy without security, of course, but in the mind of the public I think there is very little difference between the two.

The App Store itself, of course, still hosts apps from Google and Facebook among thousands of others that use personal data of yours in one form or another. Apple’s argument is that it protects the data you give to your phone aggressively by processing on the device, collecting minimal data, disconnecting that data from the user as much as possible and giving users as transparent a control interface as possible. All true. All far, far better efforts than the competition.

Still, there is room to run, I feel, in how Apple adjudicates what should be considered a societal norm for the use of personal data on its platform. If it’s going to be the absolute arbiter of what flies on the world’s most profitable application marketplace, it might as well use that power to get a little more feisty with the bigcos (and littlecos) that make their living on our data.

I mention the issues Apple has had above not as a dig, though some might be inclined to view Apple integrating privacy with marketing as boldness bordering on hubris. Personally, I think there’s still a major difference between a company that suffers situational lapses in privacy while maintaining a systemic dedication to it and, well, most of the rest of the ecosystem, which exists because it operates an “invasion of privacy as a service” business.

Basically, I think stating privacy is your mission is still supportable, even if you have bugs. But attempting to ignore that you host the data platforms that thrive on it is a tasty bit of prestidigitation.

But that might be a little too verbose as a tagline.

On the strength of its Mixer partnership, streaming toolkit developer Lightstream raises $8 million

Lightstream, a Chicago-based company that develops tools to augment live streams, has raised $8 million in new funding as it looks to add monitoring, management and monetization services to its suite of editing technologies.

Last year, the company inked a partnership with Microsoft’s live-streaming Twitch competitor, Mixer, to let streamers on the platform add professional flourishes like images, overlays, transitions and text to their streams, or edit them, without a lot of professional editing tools or expertise.

“We got started when Twitch was the only game in town,” says Stu Grubbs, Lightstream’s co-founder and chief executive. “Twitch was the only big name back in 2014 when we started and to be a live streamer you needed to understand bit rates and codecs. We set out to make that easier.”

The company works with Twitch, YouTube and Mixer, but it was when the partnership with Mixer came along that the company’s user base began to explode.

Key to that adoption was Microsoft’s acquisition of Beam, whose low-latency streaming technology made Mixer more compelling to users. Coupled with Microsoft’s reach across some of the most popular platforms for PC and console gamers, Lightstream’s toolkit gained a powerful, and large, user base.

For the past few years, between 1,000 and 2,000 streamers have signed up every week to use its tools, and there are now, by a rough estimate, around 10,000 streamers on the platform.

Now, with the new money, the company will look to double the size of the team and add some features that have been requested by Lightstream’s growing community of users, Grubbs said.

The new round included a $6 million equity commitment from investors including Drive Capital, MK Capital and Pritzker Group, and a $2 million debt facility from Silicon Valley Bank. As a result, Drive Capital general partner Andy Jenks will take a seat on the company’s board of directors.

“Lightstream is an incredible company that has seen tremendous growth because of smart and efficient practices. Stu and his team stand at the convergence of multiple massive and rapidly growing industries,” said Jenks, in a statement. “Stu has immense passion and a keen vision for what they can do for creators and the impact Lightstream can have in live streaming, gaming, and beyond. They have assembled an incredible team, made smart strategic moves, created massive partnerships and are building towards something so big that we had to be a part of it.”

YouTube under fire for recommending videos of kids with inappropriate comments

More than a year on from a child safety content moderation scandal on YouTube, it takes just a few clicks for the platform’s recommendation algorithms to redirect a search for “bikini haul” videos of adult women towards clips of scantily clad minors engaged in body-contorting gymnastics or taking part in an ice bath or ice lolly sucking “challenge”.

A YouTube creator called Matt Watson flagged the issue in a critical Reddit post, saying he found scores of videos of kids where YouTube users are trading inappropriate comments and timestamps below the fold, denouncing the company for failing to prevent what he describes as a “soft-core pedophilia ring” from operating in plain sight on its platform.

He has also posted a YouTube video demonstrating how the platform’s recommendation algorithm pushes users into what he dubs a pedophilia “wormhole”, accusing the company of facilitating and monetizing the sexual exploitation of children.

We were easily able to replicate the behavior Watson describes. In a history-cleared private browser session, after we clicked on two videos of adult women in bikinis, YouTube suggested we watch a video called “sweet sixteen pool party”.

Clicking on that led YouTube’s sidebar to serve up multiple videos of prepubescent girls in its ‘up next’ section, where the algorithm tees up related content to encourage users to keep clicking.

The videos recommended in this sidebar included thumbnails of young girls demonstrating gymnastics poses, showing off their “morning routines”, or licking popsicles or ice lollies.

Watson said it was easy for him to find videos containing inappropriate/predatory comments, including sexually suggestive emoji and timestamps that appear intended to highlight, shortcut and share the most compromising positions and/or moments in the videos of the minors.

We also found multiple examples of timestamps and inappropriate comments on videos of children that YouTube’s algorithm recommended we watch.

Some comments by other YouTube users denounced those making sexually suggestive remarks about the children in the videos.

Back in November 2017 several major advertisers froze spending on YouTube’s platform after an investigation by the BBC and the Times discovered similarly obscene comments on videos of children.

Earlier the same month YouTube was also criticized over low quality content targeting kids as viewers on its platform.

The company went on to announce a number of policy changes related to kid-focused video, including saying it would aggressively police comments on videos of kids and that videos found to have inappropriate comments about the kids in them would have comments turned off altogether.

Some of the videos of young girls that YouTube recommended we watch already had comments disabled, which suggests its AI had previously identified a large number of inappropriate comments being shared (in keeping with its policy of switching off comments on clips containing kids when comments are deemed “inappropriate”). Yet the videos themselves were still being suggested for viewing in a test search that originated with the phrase “bikini haul”.

Watson also says he found ads being displayed on some videos of kids containing inappropriate comments, and claims that he found links to child pornography being shared in YouTube comments too.

We were unable to verify those findings in our brief tests.

We asked YouTube why its algorithms skew towards recommending videos of minors, even when the viewer starts by watching videos of adult women, and why inappropriate comments remain a problem on videos of minors more than a year after the same issue was highlighted via investigative journalism.

The company sent us the following statement in response to our questions:

Any content — including comments — that endangers minors is abhorrent and we have clear policies prohibiting this on YouTube. We enforce these policies aggressively, reporting it to the relevant authorities, removing it from our platform and terminating accounts. We continue to invest heavily in technology, teams and partnerships with charities to tackle this issue. We have strict policies that govern where we allow ads to appear and we enforce these policies vigorously. When we find content that is in violation of our policies, we immediately stop serving ads or remove it altogether.

A spokesman for YouTube also told us it’s reviewing its policies in light of what Watson has highlighted, adding that it’s in the process of reviewing the specific videos and comments featured in his video — specifying also that some content has been taken down as a result of the review.

The spokesman did emphasize that the majority of the videos flagged by Watson are innocent recordings of children doing everyday things. (The problem, of course, is that innocent content is being repurposed and time-sliced for abusive gratification and exploitation.)

The spokesman added that YouTube works with the National Center for Missing and Exploited Children to report accounts found making inappropriate comments about kids to law enforcement.

In wider discussion about the issue the spokesman told us that determining context remains a challenge for its AI moderation systems.

On the human moderation front he said the platform now has around 10,000 human reviewers tasked with assessing content flagged for review.

The volume of video content uploaded to YouTube is around 400 hours per minute, he added.

There is still very clearly a massive asymmetry around content moderation on user-generated content platforms: AI is poorly suited to plug the gap, given its ongoing weakness in understanding context, even as platforms’ human moderation teams remain hopelessly under-resourced and outgunned by the scale of the task.
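
To put that mismatch in rough numbers, here is a back-of-the-envelope sketch using the figures YouTube’s spokesman cited above (the eight-hour reviewer shift is our assumption, and in practice only flagged content reaches human reviewers, not every uploaded hour):

```python
# Back-of-the-envelope scale check using the figures cited above.
# Assumptions (ours, not YouTube's): an 8-hour shift per reviewer and
# that every uploaded hour would need human review; in reality only
# flagged content reaches the review queue.
upload_rate_hours_per_minute = 400
reviewers = 10_000
shift_hours = 8

uploaded_per_day = upload_rate_hours_per_minute * 60 * 24  # 576,000 hours of video per day
review_capacity_per_day = reviewers * shift_hours          # 80,000 reviewer-hours per day

print(f"Uploaded per day:  {uploaded_per_day:,} hours")
print(f"Review capacity:   {review_capacity_per_day:,} reviewer-hours")
print(f"Shortfall factor:  {uploaded_per_day / review_capacity_per_day:.1f}x")
```

Even on those loose assumptions, uploads outpace total human review capacity by roughly seven to one, which is why the platform leans so heavily on automated flagging despite its acknowledged weakness with context.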

Another key point, which YouTube failed to mention, is the clear tension between advertising-based business models that monetize content based on viewer engagement (such as its own) and content safety concerns that require carefully considering the substance of the content and the context in which it is consumed.

It’s certainly not the first time YouTube’s recommendation algorithms have been called out for negative impacts. In recent years the platform has been accused of automating radicalization by pushing viewers towards extremist and even terrorist content — which led YouTube to announce another policy change in 2017 related to how it handles content created by known extremists.

The wider societal impact of algorithmic suggestions that inflate conspiracy theories and/or promote bogus, anti-factual health or scientific content has also been repeatedly raised as a concern — including on YouTube.

And only last month YouTube said it would reduce recommendations of what it dubbed “borderline content” and content that “could misinform users in harmful ways”, citing examples such as videos promoting a fake miracle cure for a serious illness, or claiming the earth is flat, or making “blatantly false claims” about historic events such as the 9/11 terrorist attack in New York.

“While this shift will apply to less than one percent of the content on YouTube, we believe that limiting the recommendation of these types of videos will mean a better experience for the YouTube community,” it wrote then. “As always, people can still access all videos that comply with our Community Guidelines and, when relevant, these videos may appear in recommendations for channel subscribers and in search results. We think this change strikes a balance between maintaining a platform for free speech and living up to our responsibility to users.”

YouTube said that change of algorithmic recommendations around conspiracy videos would be gradual, and only initially affect recommendations on a small set of videos in the US.

It also noted that implementing the tweak to its recommendation engine would involve both machine learning tech and human evaluators and experts helping to train the AI systems.

“Over time, as our systems become more accurate, we’ll roll this change out to more countries. It’s just another step in an ongoing process, but it reflects our commitment and sense of responsibility to improve the recommendations experience on YouTube,” it added.

It remains to be seen whether YouTube will expand that policy shift and decide it must exercise greater responsibility in how its platform recommends and serves up videos of children for remote consumption in the future.

Political pressure may be one motivating force, with momentum building for regulation of online platforms — including calls for Internet companies to face clear legal liabilities and even a legal duty of care towards users vis-a-vis the content they distribute and monetize.

For example, UK regulators have made legislating on Internet and social media safety a policy priority — with the government due to publish a White Paper setting out its plans for regulating platforms this winter.

Manipulating an Indian politician’s tweets is worryingly easy to do

Here’s a concerning story from India, where the upcoming election is putting the use of social media in the spotlight.

While the Indian government is putting Facebook, Google and other companies under pressure to prevent their digital platforms from being used for election manipulation, a journalist has demonstrated just how easy it is to control the social media messages published by government ministers.

Pon Radhakrishnan, India’s minister of state for finance and shipping, published a series of puzzling tweets today after Pratik Sinha, a co-founder of fact-checking website Alt News, accessed a Google document of prepared statements and tinkered with the content.

Among the statements tweeted out, Radhakrishnan said Prime Minister Modi’s government had failed the middle classes and had made no progress on improving the country’s general welfare. Sinha’s edits also led to the official BJP Assam Pradesh account proclaiming that the prime minister had destroyed all villages and made women slaves to cooking.

These are the opposite of the partisan messages that the accounts intended to send.

The messages were held in an unlocked Google document that contained a range of tweets compiled for the Twitter accounts. Sinha managed to access the document and doctor the messages into improbable statements — which he has done before — in order to show the shocking lack of security and processes behind the social media content.

Sinha said he made the edits “to demonstrate how dangerous this is from the security standpoint for this country.”

“I had fun but it could have disastrous consequences,” he told TechCrunch in a phone interview. “This is a massive security issue from the point of view of a democracy.”

Sinha said he was able to access the document — which was not restricted or locked to prevent changes — through a WhatsApp group that is run by members of the party. Declining to give specifics, he said he had managed to infiltrate the group and thus gain access to a flow of party and government information and, even more surprisingly, get right into the documents and edit them.

What’s equally stunning is that, even with the messages twisted 180 degrees, their content didn’t raise an alarm. The tweets were still loaded and published without anyone noticing. It was only after Sinha went public with the results that Radhakrishnan and the BJP Assam Pradesh account began to delete them.

The Indian government is rightly pressing Facebook and Google to prevent their platforms from being abused around the election, as evidence suggests happened in the U.S. presidential election and the U.K.’s Brexit vote, but members of the government should reflect on the security of their own systems, too. It would be all too easy for such poorly secured systems to be exploited.

2018 really was more of a dumpster fire for online hate and harassment, ADL study finds

Around 37 percent of Americans were subjected to severe hate and harassment online in 2018, according to a new study by the Anti-Defamation League, up from about 18 percent in 2017. And more than half of all Americans experienced some form of harassment, according to the ADL study.

Facebook users bore the brunt of online harassment on social networking sites according to the ADL study, with around 56 percent of survey respondents indicating that at least some of their harassment occurred on the platform — unsurprising, given Facebook’s status as the dominant social media platform in the U.S.

Around 19 percent of people said they experienced severe harassment on Twitter (only 19 percent? That seems low), while 17 percent reported harassment on YouTube, 16 percent on Instagram and 13 percent on WhatsApp.

Chart courtesy of the Anti-Defamation League

In all, the blue-ribbon standards for odiousness went to Twitch, Reddit, Facebook and Discord when the ADL confined its survey to daily active users. Nearly half of all daily users on Twitch have experienced harassment, the report indicated. Around 38 percent of Reddit users, 37 percent of daily Facebook users and 36 percent of daily Discord users reported being harassed.

“It’s deeply disturbing to see how prevalent online hate is, and how it affects so many Americans,” said ADL chief executive Jonathan A. Greenblatt. “Cyberhate is not limited to what’s solely behind a screen; it can have grave effects on the quality of everyday lives — both online and offline. People are experiencing hate and harassment online every day and some are even changing their habits to avoid contact with their harassers.”

And the survey respondents seem to think that online hate makes people more susceptible to committing hate crimes, according to the ADL.

The ADL also found that most Americans want policymakers to strengthen laws and improve resources for police around cyberbullying and cyberhate. Roughly 80 percent said they wanted to see more action from lawmakers.

Even more Americans, or around 84 percent, think that the technology platforms themselves need to do more work to curb the harassment, hate and hazing they see on social applications and websites.

As for the populations most at risk of harassment and hate online, members of the LGBTQ+ community were targeted most frequently, according to the study. Some 63 percent of people identifying as LGBTQ+ said they were targeted for online harassment because of their identity.

“More must be done in our society to lessen the prevalence of cyberhate,” said Greenblatt. “There are key actions every sector can take to help ensure more Americans are not subjected to this kind of behavior. The only way we can combat online hate is by working together, and that’s what ADL is dedicated to doing every day.”

The report also revealed that cyberbullying had real consequences on user behavior. Of the survey respondents, 38 percent stopped, reduced or changed online activities, and 15 percent took steps to reduce risks to their physical safety.

Interviews for the survey were conducted between December 17 and December 27, 2018 by the public opinion and data analysis company YouGov for the ADL’s Center for Technology and Society. The nonprofit admitted that it oversampled respondents who identified as Jewish, Muslim, African American, Asian American or LGBTQ+ to “understand the experiences of individuals who may be especially targeted because of their group identity.”

The survey had a margin of error of plus or minus three percentage points, according to a statement from the ADL.
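
For a rough sense of what that figure implies: under the standard margin-of-error formula for a simple random sample at a 95 percent confidence level (our assumption; the ADL’s statement doesn’t specify a sample size or confidence level), a plus-or-minus three point margin corresponds to roughly 1,000 respondents. A minimal sketch of the arithmetic:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case margin of error for a simple random sample of size n.

    p=0.5 maximizes the variance term; z=1.96 is the critical value
    for a 95 percent confidence level (an assumption, not an ADL figure).
    """
    return z * math.sqrt(p * (1 - p) / n)

# A margin of roughly +/-3 percentage points corresponds to ~1,000 respondents.
for n in (500, 1_000, 1_500):
    print(f"n = {n:>5}: +/-{margin_of_error(n):.1%}")
```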