All posts in “Journalism”

Journalists, look out: Google is funding the rise of the AI news machine in Europe

Did a computer write this story? Am I a replicant? I don’t even know anymore.

Image: Shutterstock / Mopic

Human journalists, start the countdown to your obsolescence (if you haven’t already, that is). Your inevitable robot replacements are picking up steam — and more importantly, sponsorship — with another show of support from the world’s biggest tech company. 

Google’s Digital News Initiative (DNI), a fund that promotes innovation in digital journalism in Europe, just announced its latest round of sponsored projects. Among them is RADAR, a collaboration between the UK and Ireland’s Press Association (PA) and Urbs Media, a startup that creates localized news stories using AI.     

DNI awarded the RADAR project (which stands for Reporters And Data And Robots, natch) a €706,000 (roughly $804,000) grant. That influx of cash will be used to create a service that will ramp up automated news efforts, with natural language generation (NLG) AI programs pumping out up to 30,000 stories per month across localized distribution networks starting in 2018. All the components of the AI-created stories will be automated, from the words on the screen to the graphics and images that accompany them.

To start, the AI will be tasked with producing low-level stories from templates created by human writers. These types of stories are important, but not incredibly labor intensive — think sports recaps and earnings reports — and even a human writer would be filling in the blanks with the necessary data to create them, not using actual brain power for analysis, like what I’m doing right now. Journalists, like just about every type of professional, have some thankless, easily repeatable responsibilities. That’s where the AI comes in, at least to start. 
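For a sense of what that fill-in-the-blanks work looks like, here’s a minimal sketch in Python. It illustrates the general template-driven NLG technique, not RADAR’s or Urbs Media’s actual code; the template, function, and earnings figures are all invented.

```python
# A minimal sketch of template-driven story generation, the kind of
# "fill in the blanks" work described above. Illustrative only: this is
# not RADAR's or Urbs Media's actual code, and the data is invented.
EARNINGS_TEMPLATE = (
    "{company} reported {direction} of {change:.1f}% in quarterly revenue, "
    "posting {revenue} against analyst expectations of {expected}."
)

def render_story(template: str, data: dict) -> str:
    """Fill a human-written template with structured data."""
    data = dict(data)  # copy so we can add derived fields
    data["direction"] = "growth" if data["change"] >= 0 else "a decline"
    data["change"] = abs(data["change"])
    return template.format(**data)

# Hypothetical earnings row feeding the template.
row = {"company": "Acme Corp", "change": -3.2,
       "revenue": "$41.0m", "expected": "$43.5m"}
print(render_story(EARNINGS_TEMPLATE, row))
# -> Acme Corp reported a decline of 3.2% in quarterly revenue, ...
```

Scaling that up to 30,000 stories a month is then mostly a matter of binding the same templates to fresh rows of data for each locality.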

Due to the humdrum nature of the AI-produced stories, human writers and editors have no cause for concern about being replaced by robots, at least according to PA’s Editor-in-Chief Peter Clifton, who called RADAR a “fantastic step forward for PA” in a release announcing the DNI sponsorship. “Skilled human journalists will still be vital in the process, but RADAR allows us to harness artificial intelligence to scale up to a volume of local stories that would be impossible to provide manually,” he said.

The project will require a team of five people to “identify, template, and edit data-driven stories,” about topics like crime, health, and employment, so humans aren’t eliminated from the equation completely — but under more traditional circumstances a five-person squad could never wrangle the full load of 30,000 stories per month. There’s also something to be said about how young writers might cut their teeth on the lowest levels of the copy desk, learning how to produce clean, accurate copy and absorbing the massive amounts of information they transcribe, which could provide context for bigger stories in the future. AI reporting could eliminate that essential experience.  

The DNI funding will also be used to create databases for the AI to pull from, harnessing data from wider public sources in the UK like the National Health Service (NHS).

While this is just the latest example of AI journalism sweeping the digital media industry, it’s by no means a new trend. Wordsmith, an NLG AI program, has been producing automated stories for the U.S.’s Associated Press (AP) since 2014, and other traditional outlets like the New York Times and Los Angeles Times have used automation for low-level reporting.

You might have even used AI to create your own content. Google’s Smart Reply feature was found to have been responsible for up to 10 percent of email responses sent from mobile devices earlier this year. It’s not as sophisticated as what RADAR will provide the PA — but it’s just another example of AI becoming more applicable and creeping into our daily lives.   

Journalists like me aren’t exactly mollified by all of the reassurances that the AI will simply be a tool to make our reporting more efficient. That might be the case right now — but what about the future, when the software becomes more sophisticated? 

Automation is a major future concern for just about every industry — and, ya know, capitalism as a whole — so when the machines finally rise in the not-so-distant future, at least we lowly journalists might not be the only ones out of work.


Google just launched a GIF maker to make your data look better

Google wants to help make your research look better.

To help journalists share their research and tell stories in a more visual and appealing way, Google just launched Data GIF Maker, a data visualization creator.

“Data visualizations are an essential storytelling tool in journalism, and though they are often intricate, they don’t have to be complex,” Google wrote in its announcement. “In fact, with the growth of mobile devices as a primary method of consuming news, data visualizations can be simple images formatted for the device they appear on.”

The project came out of Google’s News Lab, an initiative to support journalists and storytelling. The lab also created the popular Google Trends project.

To make a data GIF with Google’s new tool, simply add two terms, their titles, and an additional description (for our test, we used data from a 2014 study about how people pronounce internet terms):

Image: Google

The tool will then generate a handy GIF like this:

Image: Google

Right now the tool is very basic, supporting comparisons between only two data points, so it’s not the best fit for complex data sets. But for simple visualizations, Data GIF Maker is extremely easy to use. With some color and animation, journalists can make the research they’re sharing far easier and more pleasant to consume than the same information listed as text.

The tool is free to use and you can try it here.
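If you’d rather script a similar animation yourself, here’s a rough Python equivalent of the two-term comparison using matplotlib. To be clear, this is not Google’s tool or its code, and the pronunciation percentages below are made up for illustration.

```python
# Animate a two-term comparison and save it as a GIF, mimicking the
# Data GIF Maker concept. Not Google's code; the values are invented.
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation, PillowWriter

terms = ['"gif" (hard g)', '"jif" (soft g)']  # the two terms compared
values = [74, 26]                             # hypothetical percentages

fig, ax = plt.subplots(figsize=(6, 2.5))
bars = ax.barh(terms, [0, 0], color=["#4285F4", "#EA4335"])
ax.set_xlim(0, 100)
ax.set_xlabel("Share of respondents (%)")
ax.set_title("How people pronounce GIF (illustrative data)")
fig.tight_layout()

def update(frame):
    # Grow each bar toward its final value over 30 frames.
    t = (frame + 1) / 30
    for bar, value in zip(bars, values):
        bar.set_width(value * t)
    return bars

anim = FuncAnimation(fig, update, frames=30)
anim.save("data_gif.gif", writer=PillowWriter(fps=15))
```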


Wikipedia founder Jimmy Wales is waging a war on fake news

More than 16 years after founding Wikipedia, the free online encyclopedia, Jimmy Wales is determined to reform another critical source of information: the news.

Conceptually similar to Wikipedia, Wikitribune will be a collaborative effort between professional journalists and community contributors. The end goal? Creating a free, trusted news source grounded in facts and evidence.

Here’s how you create echo chambers on Facebook

Take it from the president of the United States: Human beings are creatures of comfort. We’re not particularly inclined to seek out contradictory information and would rather believe things that reinforce our worldview. 

Sometimes we post those things at 6 a.m. to make a point to our foes, even if our claims are fact-free. 

Most of the time, we pay a price for that willful ignorance, but researchers are on the brink of learning just how destructive those impulses can be when combined with the scale and power of Facebook. 

“People fall into groups where they reinforce their views and ignore dissenting views.”

We already know that people create echo chambers on the social media platform. A new study on how people consume news on Facebook shows how widespread those filter bubbles have become — and suggests that remedies for the spread of so-called fake news, like fact-checking and blacklisting offenders, won’t come close to fixing the problem. 

So blame the media and Facebook all you like for our dysfunctional politics and the collapse of bipartisanship, but the most formidable enemy appears to be human psychology. 

The study, published in Proceedings of the National Academy of Sciences, looked at the news consumption patterns of 376 million users over a six-year period from January 2010 to December 2015. The researchers analyzed how those users engaged with 920 English-language local and global news sources, including outlets like the New York Times, the Guardian, the Huffington Post, the Daily Caller, and the Associated Press. (The sample, based on a list compiled by the European Media Monitor, included government agency and nonprofit sites, but had notable omissions, such as Breitbart, BuzzFeed, and yes, Mashable.)

Even with dozens of news sources at their fingertips, users typically engaged with just a handful of outlets by liking their pages and commenting on their posts. The researchers found that the more active people are on Facebook, the more they consume news in clusters, basically walling themselves off with a single community of news outlets. 
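To make the “clusters” finding concrete, here’s a toy Python sketch of one way to quantify how concentrated a user’s engagement is: the Shannon entropy of their likes and comments across outlets. This is an illustration only, not the study’s actual methodology, and the users and counts are invented.

```python
# Toy metric: Shannon entropy of a user's engagement across outlets.
# Lower entropy = engagement concentrated in fewer outlets, the
# "walled off" pattern described above. All data here is made up.
import math

def engagement_entropy(counts: dict) -> float:
    """Entropy (in bits) of likes/comments spread across outlets."""
    total = sum(counts.values())
    probs = [c / total for c in counts.values() if c > 0]
    return -sum(p * math.log2(p) for p in probs)

# Hypothetical users: outlet -> number of likes/comments.
casual_user = {"OutletA": 3, "OutletB": 2, "OutletC": 2, "OutletD": 3}
heavy_user = {"OutletA": 95, "OutletB": 4, "OutletC": 1}

for name, counts in [("casual", casual_user), ("heavy", heavy_user)]:
    print(f"{name}: {engagement_entropy(counts):.2f} bits")
# The heavy user's lower entropy mirrors the finding that more active
# users concentrate their engagement in fewer outlets.
```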

“People fall into groups where they reinforce their views and ignore dissenting views,” says Walter Quattrociocchi, a principal investigator of the research and head of the Laboratory of Computational Social Science at IMT Lucca in Italy. 

Quattrociocchi and his study co-authors did not try to gauge the politics of either the news outlets or the readers. Instead, they argue that polarization dominates news consumption on Facebook because highly active users engage with a limited number of outlets. This dynamic, the researchers say, probably plays a significant role in the way misinformation spreads, though they did not specifically track it in this study.  

Basically, Quattrociocchi says, users are searching for narratives, exhibiting what’s known in psychology as selective exposure: They tend to favor information that fortifies their pre-existing views and avoid outlets that might challenge those beliefs. 

This isn’t too surprising, says S. Shyam Sundar, founder of the Media Effects Research Laboratory at Penn State University. Sundar was not affiliated with the new study.

“Communication researchers have long shown that media consumers are habitual in their usage patterns, often relying on the same set of news sources,” he wrote in an email. “We all have a repertoire of news outlets.” 

It’s also possible that Facebook users include ideologically different sources in their news reading habits, even if they only engage with a small number of outlets. 

“You look for the ideas [you agree with] and you refuse any kind of contact with something else.”

The difference in news consumption today, Quattrociocchi says, is that people now have access to countless publishers lacking journalistic credibility, and the power to immediately circulate questionable claims amongst dozens or hundreds of family and friends. 

Plus, 20 years ago, news outlets didn’t have to compete against, say, videos of kittens or vacation selfies. The researchers note that this dynamic has created new incentives for publishers to present information in conversational or emotional terms.

Facebook, Quattrociocchi says, has “destroyed the architecture of the media,” and trying to stem the spread of so-called fake news is nearly impossible because of how motivated people are to feel they’re right about the state of the world.

“Most of the proposals are related to debunking and fact checking,” says Quattrociocchi. “But the problem behind fake news and misinformation is the polarization of users. You look for the ideas [you agree with] and you refuse any kind of contact with something else.”

Sundar isn’t convinced of this claim, noting that it’s not supported by the study’s data: “We do not know if displaying warnings about disputed facts (as Facebook has just started to do) or showing fact-checked certifications on Facebook news feeds will be effective in stemming misinformation.” 

Nevertheless, Facebook seems to understand the echo chamber problem, and has arguably contributed to it by employing algorithms that rarely show users links to news sources they may find disagreeable. Indeed, the more comfortable you feel on Facebook, the more likely you are to return. There’s little incentive for the company to make people squirm by positioning provocative news outlets high in their feeds.

“People will not exit the echo chamber. The problem is how we enter.” 

Even Mark Zuckerberg, Facebook’s CEO, gets that echo chambers on the platform wreak havoc on public discourse. In a recent post, his vision of an “informed community” on Facebook included tamping down on sensationalism and polarization. 

The company, for instance, had noticed that some people shared content with sensational headlines but had never actually read the story. The algorithm, it appears, will now take “signals” like that into account in order to both reduce “sensationalism” in people’s news feeds and identify the publishers responsible for that content. Facebook also recently rolled out tools to help users identify and dispute fake news items.  

Quattrociocchi, however, isn’t optimistic that fact-checking and debunking mechanisms will make a difference when it comes to believing and sharing conspiracy theories and falsehoods. 

“People will not exit the echo chamber,” he says. “The problem is how we enter.” 

Though third-party apps and browser extensions designed to burst filter bubbles have cropped up since the election, they require a lot of work from people who are mostly content inside their clusters. Quattrociocchi says that even being aware of your self-made Facebook echo chamber is an important first step, but he believes it’s up to companies and researchers to better understand people’s news consumption habits and reach them where they are.

Quattrociocchi is currently exploring the cognitive and psychological traits that drive people to consume certain news narratives. One possibility, he says, is that the impersonal nature of online news sharing lowers personal accountability, making it easier to post and believe fake news. He also wants to test whether social media platforms create a kind of narcissistic distortion, influencing how people present themselves online. 

Regardless of what he finds, the message is becoming clearer: The problem with fake news has a lot more to do with our own psychology than we’d like to admit. 

UPDATE: March 6, 2017, 2:21 p.m. PST This story was updated to include comments from S. Shyam Sundar.