All posts in “YouTube”

YouTube starts delivering ‘breaking news’ on its homepage across platforms


YouTube has started rolling out a “Breaking News” section in people’s feeds across platforms, as Alphabet continues to tailor content playlists to users logged into Google Accounts, Android Police reports.

For most, YouTube is a place to hop from one video to the next and descend into rabbit holes, but browsing anything like a feed has been less straightforward than on other platforms, which makes the Breaking News section an interesting addition.

As the video-sharing site has grown older, its content has grown more produced, with YouTube personalities building “celebrity” careers and commentary-heavy videos growing in popularity over the raw footage more common on Facebook and Twitter.

For YouTube, this has become a valuable distinction. While Facebook has seen its video views climb largely on the back of quick-and-dirty clips, YouTube is somewhere people invest serious time browsing, even if there seems to be just as much noise. In June, YouTube CEO Susan Wojcicki announced that the site had 1.5 billion logged-in monthly viewers, each watching an average of an hour of video a day on mobile alone.

Breaking news may initially seem an unusual direction for the site, but as Google courts publishers through special programs like AMP, it will be interesting to see how YouTube treats partnerships with the 24-hour broadcast news publishers that already put a lot of content on the site.

Today, my “breaking news” tab had a lot of news on Bannon’s exit from the White House. Most of the stories were an hour or two old, so it’s not breaking quite as fast as elsewhere, but it offers a chance to watch stories evolve on YouTube in a much more packaged way. For now, YouTube seems to be turning mostly to traditional networks as sources. Going with established media might be a more straightforward start, but it will be interesting to see how much the tab adjusts to people’s viewing habits, and whether the more bombastic YouTube personalities find their way in, sharing their thoughts on the day’s news and begging you to like and subscribe for more.

Tech is not winning the battle against white supremacy

Content warning: This post contains racial slurs, homophobic language and very graphic depictions of racism and violence.

If you were just paying attention to press releases this week, it’d be easy to believe that tech companies are winning the war on hate. Responding to the violence in Charlottesville, Mark Zuckerberg solemnly reflected that there is “no place for hate in our community.” Snapchat announced that hate speech “will never be tolerated” on its platform. YouTube reassured us that helpful tools are on the way. Tech companies fled Trump’s two business councils to protest his claim that some white supremacists are “very fine people.”

In other headlines, a coalition of web providers made a controversial and unprecedented choice to yank their services out from under the Daily Stormer, a white supremacist news site. Days later, Cloudflare abandoned the site to the whims of whoever feels like DDoSing it. Those decisions, part of the “no platforming” philosophy that would deny purveyors of hate speech a place to assemble and share their views, will likely reverberate in the days to come. For now, some things remain very much the same.

Unfortunately, while this week’s burst of industry energy might suggest otherwise, hate groups are alive and well, making little if any effort to conceal their presence on all of the major social networks. Whether it’s 4chan or Facebook, if you go looking for hate online, you’ll find it. Dredging up racist, anti-Semitic content, often in seeming violation of a company’s stated policy, takes seconds — trust me, I went looking.

On something like Facebook, hate festers just under the paper-thin layer between a user’s social sphere and the platform at large. On a network like Twitter, it’s right on the surface, bobbing unpleasantly along down the stream with dog photos and journalist chatter. For anyone surprised about the terrible events that unfolded in Charlottesville: You can find hate anywhere you look and you don’t have to look very hard.

I took a grim tour around some of the major social sites into which we sink our hours to see what just a little bit of casual searching could find — yet algorithms often can’t (or won’t). Again, this content is graphic and disturbing, but pretending it isn’t there won’t make it go away.

Facebook

On Facebook, white supremacist memes thrive, even in wide-open, public communities. Though plenty of hate just sits out in the open, some users skirt detection by using a kind of unsearchable, far-right code language. Facebook might pick up on the anti-Semitic slur “kike,” but by swapping that for “kayak” the content flies under the radar. I was surprised to see that surrounding words in multiple parentheses, also called an “echo,” remains common practice to denote something or someone as Jewish. These symbols were established as part of the shallowly submerged white supremacy lexicon more than a year ago.
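
To see why this kind of substitution works, here is a minimal sketch (the blocklist term and example posts are hypothetical stand-ins) of the exact-match keyword filtering that coded language defeats:

```python
# A minimal, hypothetical sketch of exact-match keyword filtering.
# A listed term trips the filter, but a swapped-in code word ("kayak")
# or a symbol like the (((echo))) is just an ordinary token to it.

BLOCKLIST = {"slur"}  # stand-in for a real moderation term list

def trips_filter(post: str) -> bool:
    """Return True if any token in the post exactly matches a listed term."""
    tokens = (word.strip("().,!?").lower() for word in post.split())
    return any(token in BLOCKLIST for token in tokens)

print(trips_filter("an explicit slur"))  # True: the literal term is caught
print(trips_filter("out on my kayak"))   # False: the code word sails through
print(trips_filter("(((them)))"))        # False: the echo reads as plain parentheses
```

Catching the coded variants takes human review or context-aware models, which is exactly the gap these communities exploit.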

References to 1488 also remain common: 14 is a nod to the “14 words,” or “We must secure the existence of our people and a future for white children,” a popular mantra with white supremacists and white nationalists. The 88 stands for the eighth letter of the alphabet doubled: “HH,” for “heil Hitler.”

Small waves of white supremacist memes crest and fall, and much like Facebook’s fake news problem, each wave has another right behind it, and there are many oceans. When I spent time looking through these communities this week, a particularly popular meme remixed the incredible violence of a counter-protester rolling off the now-infamous ash-gray Dodge Charger with a broad array of anti-black racist imagery, some of it drawing on popular mainstream formats like the “the floor is” joke. Another pictured George Washington driving the Charger through the crowd.

One public community I found easily hosted a live stream of Saturday’s white supremacist rally in Charlottesville, the full video shot from the perspective of one of the torch-bearing attendees. It felt surprising that so much of this content was just sitting right out in the open on a social network that connects faces to names.

Following Charlottesville, Facebook cracked down, removing a slew of white supremacist and white nationalist pages. Among them: Right Winged Knight, Right Wing Death Squad, Awakening Red Pill, Physical Removal, Genuine Donald Trump, Awakened Masses, White Nationalists United, Vanguard America, Radical Agenda: Common Sense Extremism and the personal page of Chris Cantwell. Many, many others remain as Facebook continues to rely on users flagging content themselves — a deeply flawed method that’s proven far more effective as a tool for harassing LGBTQ users and black activists than ridding the platform of hate.

In his statement on Wednesday, Zuckerberg did not meaningfully clarify how Facebook will determine what stays on its platform and what goes. Though he noted that “when someone tries to silence others or attacks them based on who they are or what they believe, that hurts us all and is unacceptable,” it does not appear to be unacceptable on Facebook.

Asked how its policy might be evolving, Facebook told me that it does not tolerate hate speech or posts praising acts of violence or hate groups on its platform. This policy, like all policies, is open to interpretation and it’s possible that interpretation could shift further over time.

Reddit

Despite Reddit’s mostly hands-off policy and reliance on subreddit-specific moderators, racism on Reddit often takes quirkier forms meant to dodge detection. In true Reddit style, overtly racist posts and comments are often played off as self-parody, draping a thin layer of self-referential humor over what is usually just outright white supremacy. On one thread, users enthusiastically counted up from 1,488,000. On subreddits like /r/greentext, users post screencaps of posts from 4chan, host of some popular far-right and white supremacist communities. They’re careful not to post links to 4chan itself, and by screencapping they can avoid searchable text while still replicating most of the content.

In late 2015, Reddit rid itself of some popular, openly white supremacist subcommunities like /r/coontown in a prominent sweep, but remarkably, pages like /r/blackpeoplehate live on. Reddit now places its most objectionable content under “quarantine” and requires a verified email address to access it. Like YouTube, which took a similar approach of walling off some content, Reddit “will generate no revenue, including ads or Reddit Gold,” from these subreddits. They live on in a state of partial suspended animation.

Following the violence in Charlottesville, Reddit told me that it banned /r/physical_removal for “a violation of our content policy, specifically, the posting of content that incites violence.” The company appears responsive to user-generated campaigns when they draw sufficient attention to an issue, which appears to be the goal of /r/AgainstHateSubreddits, a compendium of Redditor-reported hate speech.

YouTube

Initially, YouTube’s search made finding white supremacist content kind of hard. Given Google’s web search prowess, it makes sense that the company would do a better job of burying objectionable content than a site like Facebook, but it wasn’t buried very deep. After a few searches didn’t turn up much, I struck Nazis on a video that prominently displayed a 1488 alongside a slew of links to the Daily Stormer.

Because it’s an entertainment site as much as a social network, many of my search results were home-brewed music videos depicting Nazi imagery with little or no context. A cursory glance at the usernames and links was the only overt hint, with, again, many, many 1488s. Some of the more narrative racism came with disclaimers that the content was satire or just a joke.

Elsewhere, content drawn directly from 4chan’s infamous far-right hub /pol/ (short for “politically incorrect”) was repurposed for a more mainstream platform. Because YouTube, like many of these sites, recommends content related to what you’re viewing, stumbling onto a little bit of white supremacy opens up a cascading slide of swastikas and racial epithets. Just a few clicks away from a music video declaring whites the master race, I ran into a video created by “fashygamer1488” with the following text:

“Hey goys, its [me] here with another video, please write ur comments below, no (((jews))) or googles allowed (Google is a secret alt-right codeword that means the N word lol)…”

Again, racial slurs are traded for common, unsearchable words to keep the content just barely underground.

In June, YouTube followed Reddit’s example, creating a separate class of objectionable content that it would no longer monetize. This followed a corporate outcry from brands concerned that their ads were being served along with videos containing hate speech. In just a little bit of time spent browsing YouTube’s white supremacist content, I did not run into anything that set this content apart from the rest of its videos, though YouTube has said that feature is coming “soon” and that the “videos will remain on YouTube behind an interstitial, won’t be recommended, won’t be monetized, and won’t have key features including comments, suggested videos, and likes.”

For now, the suggestion engine hums along, pointing me toward a selection of Hitler youth haircut instructional videos.

Fashy haircuts

Twitter

Twitter is more responsive as a search engine than something like Facebook, but the results are often messily curated. My first search for 1488 quickly pulled up tweets like a picture of a white, blue-eyed baby captioned “14 words” alongside a photo of Hitler. In other tweets, users with neo-Nazi black sun avatars and hybrid Trump/Hitler background images called each other “fags” over who is and isn’t “boomerposting” (i.e. tweeting like a baby boomer).

Unsurprisingly, Twitter has it all. White supremacist demagogue Richard Spencer trying to remain relevant while his peers accuse him of being a Jew. Quotes hailing Trump’s off-the-rails presser that defended some white supremacists as “fine people.” Racist code words that reverse virtue-signal hate to anyone looking for a like-minded follow. Jokes about cars caked in Photoshopped blood. All of it sends the same message.

On Twitter, there is a lot, lot, lot of this content. It starts to run together.

Tech at a crossroads

These major platforms offer a taste of the toxicity flowing through mainstream social networks, but there are many others. After incubating this kind of content for ages, gaming chat platform Discord just finished a major purge. Tumblr, Instagram and Snapchat are fighting the same fight, and it’s not clear they’re winning. Meanwhile, far-right offshoots like Gab are specifically designed with sustainable white supremacy in mind. The sheer ubiquity of Nazi insignia, Stormfront links and shockingly violent memes would appear to undercut the extreme right’s objections that its speech is being suppressed with any real success.

Depending on how you use the internet, the fact that this stuff is so easy to find on major social networks could range anywhere from shocking to wholly unsurprising. But the truth is that most of us shy away from looking at it. For anyone who isn’t the target demographic, all of this hate is ugly and exhausting. We’d rather just rest easy knowing that tech companies are working on it and they’d rather we didn’t haul up more of this stuff — they’re working on it.

As we can see from tech ratcheting up its response following Charlottesville, no policy is set in stone. While companies often point users to policies around what does and doesn’t fly on their platforms, ultimately the decision to ban content is a subjective response to getting too much heat. Given that willingness to bend to public sentiment, corporate pressure and user-driven anti-hate campaigns are proving themselves to be powerful tools, even if it’s not clear where exactly to draw the line. Racial slurs? Nazi insignia? Overt threats of specific violence? For tech, the coming weeks will be a bellwether.

Anywhere you go, white supremacist content has a foothold if not an entire underground compound bedecked in red and black — one that remains even after the Charlottesville backlash. All one needs to do is look. Whether tech companies choose to see is a different matter altogether.

YouTube has an illegal TV streaming problem

Most people turn to Netflix to binge-watch full seasons of a single TV show, but there could be a much cheaper way: YouTube.

You might be surprised to learn that you can watch full episodes of popular TV shows on YouTube for free, thanks to a large number of rogue accounts that are hosting illegal live streams of shows.

Do you love King of the Hill? Easy. Just choose which episode you like best. The Simpsons? Plenty to pick from there, too. Or, maybe you’re looking for some football? You can watch a livestream replay of the latest game easily, as if the NFL’s draconian intellectual property rules mean absolutely nothing. 

Perhaps the most shocking thing about these free (and very illegal) TV live streams is that they might even make their way into your suggested video queue, provided you watch enough “random shit” and Bobby Hill quote compilations on the site, as Mashable business editor Jason Abbruzzese recently experienced.

He first noticed the surprisingly high number of illegal TV streaming accounts on his YouTube homepage, which has tailored recommended videos based on his viewing habits. Personalized recommendations aren’t exactly new — but the number of illegal live streams broadcasting copyrighted material on a loop was a shocker.

Jason’s YouTube landing page.

Image: screenshot/YouTube/Jason Abbruzzese

When we looked deeper into the live streams, the number we found was mind-blowing. Many of these accounts appear to exist solely to give viewers an endless loop of their favorite shows, with only a few other posts related to the live-streamed content.

What’s really strange is that there appears to be no obvious incentive for doing this. We can basically rule out ad money: you have to apply to the YouTube Partner Program to earn any ad revenue, your channel needs 10,000 views just to be eligible to apply, and YouTube has to approve every account that makes it through, so none of these accounts stands a chance.

The audiences watching these channels are actually pretty small compared to other popular channels, too. The largest number of viewers we witnessed was just over a thousand, while many streams had only a few dozen people tuned in at any given time. Clearly, this isn’t the type of content that fosters the large communities found elsewhere on the site.

We reached out to a few of these account holders directly on the platform, but haven’t heard back from anyone as of press time. 

The phenomenon seems to be rather ephemeral. Most of the accounts we viewed early in the day were shut down within just a few hours. Some of them survived for up to 20 hours after they were posted — but they were few and far between. 

This was a South Park 24/7 live stream account.

Image: screenshot/YouTube

YouTube does its best to make it easy for people to report copyright-infringing streams, which could be why the accounts are so often wiped from the site.

First, copyright holders can formally notify YouTube that they believe their materials are being improperly hosted. YouTube then reviews the offending content and pulls it down if it’s found to be infringing. Users who rack up multiple complaints can be banned from the platform entirely.

Second, there’s the nearly decade-old tool called Content ID, which lets rights holders manage their content more directly. Copyright holders provide reference files of their content to YouTube, which feeds them into the system. The tool can then track, monetize, or outright block uploads that match the copyrighted materials. More than 8,000 partners use the tool, the vast majority of whom choose to let matched material stay up.
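
In rough outline, and purely as a sketch (this is not YouTube’s actual implementation; the fingerprints, names and exact-match lookup here are hypothetical stand-ins), Content ID behaves like a fingerprint registry with a per-rights-holder policy attached:

```python
# A hypothetical sketch of a Content ID-style system. Rights holders register
# reference fingerprints with a chosen policy; each new upload is checked
# against the registry. Real matching is perceptual (audio/video similarity,
# partial matches), not the exact lookup shown here.

from enum import Enum

class Policy(Enum):
    TRACK = "track"        # collect viewing statistics for the rights holder
    MONETIZE = "monetize"  # run ads and route the revenue to the rights holder
    BLOCK = "block"        # keep the matching upload off the platform

registry: dict[str, Policy] = {}  # fingerprint -> rights holder's policy

def register_reference(fingerprint: str, policy: Policy) -> None:
    """A rights holder supplies a reference file's fingerprint and a policy."""
    registry[fingerprint] = policy

def handle_upload(fingerprint: str) -> str:
    """Apply the matching rights holder's policy, or publish normally."""
    policy = registry.get(fingerprint)
    return "published" if policy is None else f"claimed: {policy.value}"

register_reference("koth-s01e01", Policy.BLOCK)  # hypothetical reference file
print(handle_upload("koth-s01e01"))              # claimed: block
print(handle_upload("original-vlog"))            # published
```

That most partners choose to track or monetize rather than block helps explain why so much recognizable material stays up.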

Video uploaders aren’t exactly hung out to dry here, either. If a video creator receives a takedown request, they can file a counter notification. Likewise with Content ID claims — YouTube creators can dispute those, too.


We reached out to YouTube to ask about its stance on the live streams, since the videos so clearly violate copyright law.

“YouTube respects the rights of copyright holders and we’ve invested heavily in copyright and content management tools to give rights holders control of their content on YouTube,” a YouTube spokesperson told Mashable in an email. “When copyright holders work with us to provide reference files for their content, we ensure all live broadcasts are scanned for third party content, and we either pause or terminate streams when we find matches to third party content.”

We also reached out to 20th Century Fox (the copyright holder for King of the Hill, which we found to be a commonly streamed show), but its reps had no comment on the matter.  

It looks like the live streams that caught our eye are just another quirk of the platform, which has morphed over the years from an internet oddity full of cat videos into a streaming and music giant. You might not always be able to watch your favorite shows on YouTube — especially if the copyright holder is persistent — but if you find a stream at the right moment, you might find some free binging where you least expect it. Just remember that there’s a good chance the person posting the copyrighted material is breaking the law.


Snap joins rivals Facebook and YouTube to fight terrorism


Snap Inc. has joined the Global Internet Forum to Counter Terrorism, which sees consumer internet companies cooperating to stop the spread of terrorism and extremism online. Facebook, Google’s YouTube, Microsoft and Twitter formed the GIFCT last month, and tomorrow it will host its first workshop with fellow tech companies plus government and non-governmental organizations.

The GIFCT started as an extension of the shared industry hash database that allows tech companies to share the digital fingerprints of extremist and terrorist content, such as photos and videos, so that once one identifies a piece of prohibited content, all the others can also block its upload. It’s almost like a vaccine program, where one company beats an infection, then shares how to produce antibodies with the rest of the group.
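
As a rough sketch of the idea (hypothetical code, not the consortium’s; real systems use perceptual hashes such as PhotoDNA so that minor edits still match), the shared database amounts to a set of fingerprints that every member consults at upload time:

```python
# A hypothetical sketch of a shared industry hash database: once one member
# company fingerprints a piece of prohibited content, every other member can
# block re-uploads of the same file. SHA-256 is a stand-in for the perceptual
# hashing real systems use to survive re-encoding and small edits.

import hashlib

shared_hashes: set[str] = set()  # fingerprints contributed by all members

def fingerprint(content: bytes) -> str:
    """Stand-in fingerprint; production systems hash perceptually."""
    return hashlib.sha256(content).hexdigest()

def flag_content(content: bytes) -> None:
    """One member identifies prohibited content and shares its hash."""
    shared_hashes.add(fingerprint(content))

def allow_upload(content: bytes) -> bool:
    """Every member checks new uploads against the shared database."""
    return fingerprint(content) not in shared_hashes

flag_content(b"prohibited-video-bytes")         # Company A flags the file...
print(allow_upload(b"prohibited-video-bytes"))  # ...Company B blocks it: False
print(allow_upload(b"ordinary-upload-bytes"))   # unrelated content passes: True
```

In the vaccine analogy above, the shared hash is the antibody: one platform’s detection immunizes the rest.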

In identical blog posts published by Facebook, YouTube, Twitter and Microsoft, the GIFCT wrote: “Our mission is to substantially disrupt terrorists’ ability to use the Internet in furthering their causes, while also respecting human rights.”

The first GIFCT workshop, to be held in San Francisco on August 1st, will host United Kingdom Home Secretary Amber Rudd and United States Acting Secretary of Homeland Security Elaine Duke, plus representatives of the European Union, United Nations, Australia and Canada. The event’s goal is to formalize how the tech giants can collaborate with smaller companies, and what support those companies would need to get involved.

In the coming months, the group’s goals include adding three more tech companies to the hash-sharing program beyond new members Snap and JustPaste.it, getting 50 companies to share their best practices for countering extremism through the Tech Against Terrorism project, and planning four knowledge-sharing workshops.

Improving automated moderation and deletion of terrorist content is critical to preventing it from slipping through the cracks. While internet giants like Facebook typically employ thousands of contractors to sift through reported content, those workers often have to move extraordinarily fast through endless queues of disturbing imagery that can leave them emotionally damaged. Using a shared hash database and best practices could relieve humans of some of this tough work while potentially improving the speed and accuracy with which terrorist propaganda is removed.

It’s good to see Facebook and Snap putting aside their differences for a good cause. While Snap is notorious for its secrecy, and Facebook for its copying of competitors, the GIFCT sees them openly sharing data and strategies to limit the spread of terrorist propaganda online. There is plenty of nuance to determining where free speech ends and inciting violence begins, so cooperation could improve all the member companies’ processes.

Beyond banishing content purposefully shared by terrorists, there remains the question of how algorithmically sorted content feeds like Facebook and Twitter handle the non-stop flood of news about terrorist attacks. Humans are evolutionarily disposed to seek information about danger. But when we immerse ourselves into the tragic details of any terrorist attack around the world, we can start to perceive these attacks as more frequent and dangerous than they truly are.

As former Google design ethicist Tristan Harris discusses, social networks know that we’re drawn to content that makes us outraged. As the GIFCT evolves, it would be good to see it research how news and commentary about terrorism should best be handled by curation algorithms to permit free speech, unbiased distribution of information and discussion without exploiting tragedy for engagement.

Buffering icons of death, ranked

In a world without net neutrality, we’ll likely be seeing a lot more annoying loading animations around the web. 

These buffering icons are what you see when an app or video is loading, before you actually view it. And without net neutrality, internet service providers can make some sites load even more slowly (the horror) if their owners don’t pay up. That means more buffering hell.

If you have a good, stable, and fast internet connection, you hardly ever see these — but many internet users around the world are not so lucky, and soon, you could be among them.

Since we could be seeing these buffering icons of doom a lot more often, we might as well seek out the best ones. Designers, listen up. Below are 10 loading animations, ranked from worst to best.

10. Google Chrome

The Chrome buffering animation is just frustrating. There’s nothing fun to look at while you wait. It’s just a plain old circle that you stare at until you start banging on your keyboard.

9. Imgur

The Imgur icon is a twist on the classic spinning wheel of death — it’s hypnotizing, but look at it for too long and you start to get dizzy. Be careful not to fall off your swivel chair.

8. Facebook

Image: facebook

Consider yourself blessed if you’ve never come across Facebook’s loading screen — a template of the News Feed where placeholder images animate as you wait for the site to load all your friends’ annoying, but irresistible updates. Get familiar with this one, because we all go to Facebook more times a day than we can count.

7. Messenger

We’ve all been there: your Messenger group chat is popping, everyone’s making jokes, you’re all roasting that one friend, and you have the perfect meme to share, but you can’t because you just see this.  

6. YouTube

The YouTube animation is like an old friend  — you may not hang out that much anymore, but you grew up together.

5. Go! Eat! Bomb!

Like many other games, Go! Eat! Bomb! has an awesome loading screen. You start off with an egg that slowly starts to crack before a dinosaur eventually pops out.

4. Safari

Anyone who uses Safari is very much accustomed to the wrath of the rainbow wheel. It may look cute and bright, but don’t let it fool you. It’s pure evil and will torment you. “Rainbow wheeling” happens enough as it is, so just get ready to pull out even more hair if net neutrality disappears. 

3. Netflix

Whether you’re starting a new episode of the show you’re binging or tucking in for your third movie of the night, the Netflix loading animation is a comforting backdrop as your reflection stares back at you from your laptop screen.

2. Snapchat

Image: snapchat

Woohoo! Sending goofy pictures to friends and getting back even wonkier responses is so fun! That is, until the snaps won’t actually load and you’re staring at a purple spinning circle. If your Snapchat app won’t load, your friends won’t know how fun and cool you are, and you won’t be able to keep tabs on the minutiae of everyone else’s lives. That’s simply no fun at all. 

1. This Octopus-ish guy

Designed by artist Joshua Schaeffer, this little guy isn’t actually in any apps yet, but if we’re going to be waiting around all the time, it might as well be alongside something as pleasant as this calm, soothing sea creature.

Let’s face it: Even the cutest of these loading animations are frustrating when you see them too often. Learn how you can join the fight for net neutrality here.
