All posts in “YouTube”

Touchscreen Google Home device evidence spied in official app code


There’s now more evidence Google is testing a touchscreen Home device. AndroidPolice points to sections of code in the latest Google app that refer to a device sporting a new on-screen interface. An APK teardown of the 7.14.15 beta version revealed a long list of on-screen menus and functions used by a device code-named Quartz. These abilities include YouTube playback, a function the Amazon Echo Show recently lost.
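For context, APK teardowns of this sort generally work by decoding the app package (with a tool such as apktool) and searching the extracted resources for telltale strings. Below is a minimal sketch of that kind of string hunt in Python; the decoded directory path and the “quartz” search term are illustrative assumptions, not details of AndroidPolice’s actual process.

from pathlib import Path

# Minimal, illustrative sketch of an APK-teardown string search. It assumes
# the package has already been decoded (e.g. with apktool) into the
# hypothetical directory below; "quartz" is the codename being hunted for.
DECODED_DIR = Path("google-app-7.14.15-beta")
CODENAME = "quartz"

def find_codename_references(root: Path, needle: str):
    # String and layout resources in a decoded APK live in XML files.
    for path in root.rglob("*.xml"):
        try:
            text = path.read_text(encoding="utf-8", errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            if needle in line.lower():
                yield path, lineno, line.strip()

for path, lineno, line in find_codename_references(DECODED_DIR, CODENAME):
    print(f"{path}:{lineno}: {line}")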

In September we reported Google was working on a Google Home device that sported a touchscreen interface. Two sources confirmed the device has been internally codenamed “Manhattan” and will have a similar screen size to the 7-inch Echo Show. One source received info directly from a Google employee. Both sources say the device will offer YouTube, Google Assistant, Google Photos and video calling. It will also act as a smart hub that can control Nest and other smart home devices.

This report by AndroidPolice seemingly confirms many of those details. The code review revealed multiple on-display features: interactive timers, weather forecasts with 32 different icons, YouTube video playback, a basic web browser, photo galleries and Google Maps with business listings.

Google has yet to confirm the existence of the device, but it makes sense that the company is at least toying with the idea and internally testing such hardware. Companies have long tried to build a central information hub of sorts, with the Amazon Echo Show being the latest such device. Google is clearly following the Amazon Echo line step for step, and the next obvious move is an Echo Show clone.

If you want to grow your business, start by increasing your social media following

Image: Pixabay

You’ve seen the job listings: “we’re looking for a social media rockstar…” 

Social media is driving huge amounts of revenue for companies across industries, and the job market is reflecting that value. This Social Media Rockstar Bundle is one way to learn the skills that check recruiters’ boxes, so you can be that rockstar and land an awesome job in social media.

Here’s a breakdown of the included courses: 

The Ultimate YouTube Diva Course: Get Paid to Make Videos

Everyone and their mother seems to have a YouTube channel, but how many of them are actually successful? This six-hour course will teach you how to seamlessly upload videos made with professional equipment, gain followers using SEO tactics and Google AdWords, and create that tutorial you’ve been dying to make.

Learn the Secrets of Facebook Marketing Pros

Facebook is no longer just for reading bad political takes from your high school friends. This course will teach you how to get people to engage with your marketing page, and how to create posts with impact. 

Image: Pexels

Your A-Z Guide to Making Cash on Social Media

You have a Facebook, Twitter, Snapchat, Instagram, Pinterest, LinkedIn, Google+ and YouTube account. But what does it get you? With this guide, you’ll be able to build a brand that gets you actual followers and start making money off your content instead of just sending snaps of your desk lunch to mom.

The Complete Twitter Marketing Bootcamp 2017

This boot camp’s 31 lectures will teach you everything you need to know about attracting Twitter followers, scheduling tweets, implementing Twitter ads, and generating business leads. 

The Complete Instagram Marketing 2017 Training

Like the Twitter boot camp, this Instagram training guide will teach you how to stick to a daily posting schedule with a marketing game plan so you can attract and build relationships with followers. You’ll also learn about optimizing Instagram ads.

How To Use Snapchat For Marketing In 2017

These 32 lectures on Snapchat marketing will make you a pro in no time. Sure, you know the basics: filters, geotags, bitmoji… but this course will teach you how to build a following, integrate Snapchat with other social media platforms, and measure your success.

The Guide to Pinterest Marketing

Pinterest is no longer just for crafty DIYers. Learn how to use Pinterest as a marketing channel and get more than 1,000 repins on a single post. These lessons will teach you how to maximize your results from sponsored ads and drive engagement with pins.

Buying all of the courses in the Social Media Rockstar Bundle separately would set you back $1,387, but right now they’re available as a bundle for just $29. Mashable readers can also save an additional 50 percent off their order by using coupon code BUNDLE50 at checkout.

Google shuts YouTube channel implicated in Kremlin political propaganda ops


A YouTube channel implicated in Russian disinformation operations targeting the 2016 U.S. election has been taken down by Google.

Earlier this week The Daily Beast claimed the channel, run by two black video bloggers calling themselves Williams and Kalvin Johnson, was part of Russian disinformation operations — saying this had been confirmed to it by investigators examining how social media platforms had been utilized in a broad campaign by Russia to try to influence US politics.

The two vloggers apparently had multiple social media accounts on other platforms. And their content was pulled from Facebook back in August after being identified as Russian-backed propaganda, according to the Daily Beast’s sources.

Videos posted to the YouTube channel, which was live until earlier this week, apparently focused on criticizing and abusing Hillary Clinton: accusing her of being a racist, spreading various conspiracy theories about the Clintons, and mixing in pro-Trump commentary.

The content appeared intended for an African American audience, although the videos did not gain significant traction on YouTube, according to The Daily Beast, which said they had only garnered “hundreds” of views prior to the channel being closed (vs the pair’s Facebook page having ~48,000 fans before it was closed, and videos uploaded there racking up “thousands” of views).

A Google spokesman ignored the specific questions we put to the company about the YouTube channel, sending only this generic statement: “All videos uploaded to YouTube must comply with our Community Guidelines and we routinely remove videos flagged by our community that violate those policies. We also terminate the accounts of users who repeatedly violate our Guidelines or Terms of Service.”

So while the company appears to be confirming it took the channel down, it’s not providing a specific reason beyond TOS violations at this stage. (And the offensive nature of the content offers more than enough justification for Google to shutter the channel.)

However, earlier this week the Washington Post reported that Google had uncovered evidence that Russian operatives spent money buying ads on its platform in an attempt to interfere in the 2016 U.S. election, citing people familiar with the investigation.

The New York Times also reported that Google has found accounts believed to be associated with the Russian government, claiming Kremlin agents purchased $4,700 worth of search ads and more traditional display ads. It also said the company has found a separate $53,000 worth of ads with political material that were purchased from Russian internet addresses, building addresses or with Russian currency, though the newspaper’s source said it’s not clear whether the latter spend was definitively associated with the Russian government.

Google has yet to publicly confirm any of these reports, though it has not denied them either. Its statement so far has been: “We are taking a deeper look to investigate attempts to abuse our systems, working with researchers and other companies, and will provide assistance to ongoing inquiries.”

The company has been called to testify to the Senate Intelligence Committee on November 1, along with Facebook and Twitter. The committee is examining how social media platforms may have been used by foreign actors to influence the 2016 US election.

Last month Facebook confirmed Russian agents had utilized its platform in an apparent attempt to sow social division across the U.S., revealing it had found purchases worth around $100,000 in targeted advertising, some 3,000+ ads.

Twitter has also confirmed finding some evidence of Russian interference in the 2016 US election on its platform.

The wider question for all these user-generated content platforms is how their stated preference for free speech (and hands-off moderation) can coexist with weaponized disinformation campaigns conducted by hostile foreign entities with apparently unfettered access to their platforms, especially given that the disinformation does not appear limited to adverts: the content itself is also implicated (including, apparently, people being paid to create and post political disinformation).

User-generated content platforms have not historically sold themselves on the professional quality of the content they make available. Rather, their USP has been the authenticity of the voices they offer access to (though it’s also fair to say they offer a conglomerate mix). But the question is what happens if social media users start to view that mix with increasing mistrust, as something that might be being deliberately adulterated or infiltrated by malicious elements?

The tech platforms’ lack of a stated editorial agenda of their own could result in the perception that the content they surface is biased anyway, and in ways many people might equally view with mistrust. The risk is that the tech starts to look like a fake news toolkit for mass manipulation.

YouTube has no patience for DIY ‘bump stock’ gun demos, bans them

YouTube has banned “how to” bump stock gun modification demos from the site after a Las Vegas gunman employed the device to rapidly fire upon thousands of country music fans, killing 58 and injuring hundreds more.

Although the sale of new automatic machine guns to civilians has been banned in the U.S. since 1986, a bump stock is a device you can add to a semi-automatic rifle that allows it to fire rapid bursts of bullets. The result looks and sounds similar to a machine gun, but the modified weapon is technically (and legally) not a military-grade automatic weapon.

Last week, House Speaker Paul Ryan acknowledged the threat that bump stocks pose to innocent human lives, telling MSNBC the legality of the devices is “clearly…something we need to look into.” YouTube, however, doesn’t need to wait for grinding political discourse: it has modified its user guidelines to classify bump stock modification demos as dangerous content, which the site prohibits.

“In the wake of the recent tragedy in Las Vegas, we have taken a closer look at videos that demonstrate how to convert firearms to make them fire more quickly and we’ve expanded our existing policy to prohibit these videos,” a YouTube spokesperson told Mashable.

As outlined in the site’s community guidelines, YouTube also bans material intended to scam, threaten, or promote hate. With hundreds of thousands of videos uploaded daily, YouTube relies on its more than one billion users to flag dangerous content, such as demos of making a firearm fire in rapid bursts, for review.
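To picture how community flagging feeds review, here is a purely hypothetical sketch: a video that accrues enough user flags gets queued for a human moderator. The threshold, names and queue are invented for illustration; YouTube’s real pipeline is not public.

from collections import Counter

# Hypothetical illustration of community flagging: once a video collects
# enough flags it is queued for human review. The threshold is invented.
REVIEW_THRESHOLD = 3

flag_counts = Counter()
review_queue = []

def flag_video(video_id):
    # Record a user flag; enqueue the video once it crosses the threshold.
    flag_counts[video_id] += 1
    if flag_counts[video_id] == REVIEW_THRESHOLD:
        review_queue.append(video_id)

for _ in range(REVIEW_THRESHOLD):
    flag_video("bump-stock-demo")
print(review_queue)  # ['bump-stock-demo']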

A YouTube “bump stock” search still brings up thousands of results, some of which are likely bump stock tutorials that have yet to be removed. Many of the top videos are people showing off their bump stocks, but not necessarily taking them apart and demonstrating the actual modification. One of the top videos, posted by GunsAmerica, has over 675,000 views and demonstrates that the “$99 Bump Stock Works!” It features a man firing a machine gun-like burst of bullets into a field.

Anyone who now wants to add a bump stock to their guns may have a more difficult time finding DIY guidance on YouTube, but the devices themselves may remain legal.

Influential House Majority Whip Steve Scalise, after surviving a June assassination attempt at a baseball field outside Washington, D.C., returned to Congress last week and resisted the notion of Congress banning bump stocks, citing concerns that a ban would lead to a “slippery slope” on gun regulations.

“They want to go out and limit the rights of gun owners,” Scalise told Meet the Press.

YouTube remains flooded with gun-related content, but doesn’t want gun owners using the site to transform their firearms into weapons of war. 

Google’s probe into Russian disinformation finds ad buys, report claims


Google has uncovered evidence that Russian operatives exploited its platforms in an attempt to interfere in the 2016 U.S. election, according to the Washington Post.

It says tens of thousands of dollars were spent on ads by Russian agents aiming to spread disinformation across Google’s products, including its video platform YouTube, as well as via advertising associated with Google search, Gmail, and the company’s DoubleClick ad network.

The newspaper says its report is based on information provided by people familiar with Google’s investigation into whether Kremlin-affiliated entities sought to use its platforms to spread disinformation online.

Asked for confirmation of the report, a Google spokesman told us: “We have a set of strict ads policies including limits on political ad targeting and prohibitions on targeting based on race and religion. We are taking a deeper look to investigate attempts to abuse our systems, working with researchers and other companies, and will provide assistance to ongoing inquiries.”

So it’s telling that Google is not out-and-out denying the report, which suggests the company has indeed found something via its internal investigation, though it isn’t ready to go public with whatever it has unearthed yet.

Google, Facebook, and Twitter have all been called to testify to the Senate Intelligence Committee on November 1, which is examining how social media platforms may have been used by foreign actors to influence the 2016 US election.

Last month Facebook confirmed Russian agents had utilized its platform in an apparent attempt to sow social division across the U.S. by purchasing $100,000 of targeted advertising, some 3,000+ ads. (The more pertinent question is how far Facebook’s platform organically spread the malicious content; Facebook has claimed only around 10M users saw the Russian ads, though others believe the actual figure is likely far higher.)

CEO Mark Zuckerberg has tried to get out ahead of the incoming political and regulatory tide by announcing, at the start of this month, that the company will make ad buys more transparent — even as the U.S. election agency is running a public consultation on whether to extend political ad disclosure rules to digital platforms.

(And, lest we forget, late last year he entirely dismissed the notion of Facebook influencing the election as “a pretty crazy idea” — words he’s since said he regrets.)

Safe to say, tech’s platform giants are now facing the political grilling of their lives, and on home soil, as well as the prospect of the kind of regulation they’ve always argued against finally being looped around them.

But perhaps their greatest potential danger is the risk of huge reputational damage if users learn to mistrust the information being algorithmically pushed at them — seeing instead something dubious that may even have actively malicious intent.

While much of the commentary around the US election social media probe has, thus far, focused on Facebook, all major tech platforms could well be implicated as paid tools of foreign entities trying to influence U.S. public opinion, or at least any whose business entails applying algorithms to order and distribute third-party content at scale.

Just a few days ago, for instance, Facebook said it had found Russian ads on its photo sharing platform Instagram, too.

In Google’s case the company controls vastly powerful search-ranking algorithms, and it orders user-generated content on its massively popular video platform, YouTube.

And late last year The Guardian suggested Google’s algorithmic search suggestions had been weaponized by an organized far-right campaign, highlighting how its algorithms appeared to be promoting racist and Nazi ideologies, as well as misogyny, in search results.

(Though criticism of tech platform algorithms being weaponized by fringe groups to drive skewed narratives into the mainstream dates back further still — such as to the #Gamergate fallout, in 2014, when we warned that popular online channels were being gamed to drive misogyny into the mainstream media and all over social media.)

Responding to The Guardian’s criticism of its algorithms last year, Google claimed: “Our search results are a reflection of the content across the web. This means that sometimes unpleasant portrayals of sensitive subject matter online can affect what search results appear for a given query. These results don’t reflect Google’s own opinions or beliefs — as a company, we strongly value a diversity of perspectives, ideas and cultures.”

But it looks like the ability of tech giants to shrug off questions and concerns about their algorithmic operations — and how they may be being subverted by hostile entities — has drastically shrunk.

According to the Washington Post, the Russian buyers of Google ads do not appear to be from the same Kremlin-affiliated troll farm which bought ads on Facebook — which it suggests is a sign that the disinformation campaign could be “a much broader problem than Silicon Valley companies have unearthed so far”.

Late last month Twitter also said it had found hundreds of accounts linked to Russian operatives. And the newspaper’s sources claim that Google used developer access to Twitter’s firehose of historical tweet data to triangulate its own internal investigation into Kremlin ad buys — linking Russian Twitter accounts to accounts buying ads on its platform in order to identify malicious spend trickling into its own coffers.
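The triangulation described is, at its core, a join between one platform’s list of suspect accounts and another’s advertiser records. Here is a purely illustrative sketch of the idea, with invented account names and fields; nothing below reflects Google’s or Twitter’s actual systems or data.

# Invented data illustrating cross-platform triangulation: intersect a set
# of suspect Twitter handles with advertiser records that carry a linked
# Twitter account, and surface the matching ad spend for review.
suspect_handles = {"@suspect_account_1", "@suspect_account_2"}

ad_buyers = [
    {"advertiser_id": "A-100", "linked_twitter": "@suspect_account_1", "spend_usd": 1200},
    {"advertiser_id": "A-200", "linked_twitter": "@ordinary_brand", "spend_usd": 8000},
]

flagged = [b for b in ad_buyers if b["linked_twitter"] in suspect_handles]
for buyer in flagged:
    print(f"review {buyer['advertiser_id']}: ${buyer['spend_usd']} "
          f"linked to {buyer['linked_twitter']}")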

A spokesman for Twitter declined to comment on this specific claim but pointed to a lengthy blog post it penned late last month on “Russian Interference in 2016 US Election, Bots, & Misinformation”. In that post, Twitter disclosed that the RT (formerly Russia Today) news network spent almost $275,000 on U.S. ads on Twitter in 2016.

It also said that of the 450 accounts Facebook had shared as part of its review into Russian election interference, Twitter had “concluded” that 22 had “corresponding accounts on Twitter”, which it said had either already been suspended, mostly for spam, or were suspended after being identified.

“Over the coming weeks and months, we’ll be rolling out several changes to the actions we take when we detect spammy or suspicious activity, including introducing new and escalating enforcements for suspicious logins, Tweets, and engagements, and shortening the amount of time suspicious accounts remain visible on Twitter while pending confirmation. These are not meant to be definitive solutions. We’ve been fighting against these issues for years, and as long as there are people trying to manipulate Twitter, we will be working hard to stop them,” Twitter added.

As with the political (and sometimes commercial) pressure also being applied on tech platforms to speed up takedowns of online extremism, it seems logical that the platforms could improve internal efforts to thwart malicious use of their tools by sharing more information with each other.

In June Facebook, Microsoft, Google and Twitter collectively announced a new partnership aimed at reducing the accessibility of internet services to terrorists, for instance — dubbing it the Global Internet Forum to Counter Terrorism — and aiming to build on an earlier announcement of an industry database for sharing unique digital fingerprints to identify terrorist content.
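The fingerprint-sharing idea is simple in outline: each company contributes hashes of content it has already identified and removed, and the others check new uploads against the shared set. Here is a minimal sketch using an exact cryptographic hash as a stand-in; real systems rely on perceptual hashes that survive re-encoding, and the shared set below is hypothetical.

import hashlib

def fingerprint(data: bytes) -> str:
    # Exact SHA-256 digest; production systems use perceptual hashes instead.
    return hashlib.sha256(data).hexdigest()

# Hypothetical shared database: digests contributed by partner companies
# for content they have already identified and removed.
shared_hashes = {fingerprint(b"known extremist video bytes")}

def should_flag(upload: bytes) -> bool:
    # Check a new upload's fingerprint against the shared database.
    return fingerprint(upload) in shared_hashes

print(should_flag(b"known extremist video bytes"))  # True: exact match
print(should_flag(b"re-encoded variant"))  # False: exact hashing misses near-duplicates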

But whether some similar kind of collaboration could emerge in future to try to collectively police political spending remains to be seen. Joining forces to tackle the spread of terrorist propaganda online may end up looking trivially easy compared with accurately identifying and publicly disclosing what is clearly a much broader spectrum of politicized content that has, nonetheless, also been created with malicious intent.

According to the New York Times, Russia-bought ads that Facebook has so far handed over to Congress apparently included a diverse spectrum of politicized content, from pages for gun-rights supporters, to those supporting gay rights, to anti-immigrant pages, to pages that aimed to appeal to the African-American community — and even pages for animal lovers.

One thing is clear: Tech giants will not be able to get away with playing down the power of their platforms in public.

Not at the Congress hearing next month. And likely not for the foreseeable future.

Featured Image: Mikhail Metzel/Getty Images