In his keynote address at Google’s annual conference for developers and journalists, CEO Sundar Pichai touted the AI tools the company is building to make search simpler. One of the core technologies in this effort is Google’s newly announced Multitask Unified Model (MUM), which aims to answer complex questions by synthesizing information from across the web into one coherent response from Google.
It’s a tantalizing idea. In a May 18 blog post, Pandu Nayak, Google’s vice president for search, gave the example of a hiker who has just hiked Mount Adams in Washington state and wants to know what they’ll have to do differently to hike Mount Fuji in Japan next fall. Nayak imagines a future in which the hiker could simply ask “what should I do differently to prepare?” and get a nuanced answer, drawn from multiple online sources, and neatly packaged in a natural-sounding reply from Google’s language AI.
MUM is part of Google’s long-term shift away from ranked search results and toward the creation of AI algorithms that can answer user questions faster—often without the searcher ever clicking a link or leaving Google’s results page. (Think, for example, of the “knowledge panels” that now appear at the top of many search results pages and display an answer from a website so you don’t have to visit the site yourself.) This shift promises to reduce the amount of work it takes to find information through Google. But it’s not clear that this is a problem in need of a solution.
Why less work isn’t always better
There are plenty of benefits to having people manually sift through search results, as Emily Bender, a University of Washington computational linguistics professor, points out in a recent Twitter thread. Human labor introduces human judgment into the process. “By clicking through to the underlying documents, the human is in a position to evaluate the trustworthiness of the information there,” she wrote. “Is this a source that I trust? Can I trace back where it comes from? Is it from a context that is congruent with my query?”
This process also drives traffic to the websites, like Investopedia or Epicurious, that first posted the information. That traffic generates advertising revenues that fund journalists, bloggers, recipe writers, and the whole universe of content creators who make things people want to see online. In the future, perhaps, Google will be forced to pay these people directly for the right to aggregate their work, as it currently does with news publishers in some countries. But in the meantime, doing more to keep searchers from leaving the results page will funnel money away from the independent publishers who generate the trustworthy information that Google and the rest of the web browsing world rely on.
There’s also a host of ethical and environmental concerns about the large language models Google needs to power tools like MUM, as Google ethics researchers Timnit Gebru and Margaret Mitchell pointed out in a paper they co-authored with University of Washington researchers. Large language models soak up hundreds of thousands of kilowatt-hours of electricity, contributing to climate change. They also soak up plenty of misinformation and hate speech from their training data, raising the risk that they’ll give people biased or false answers. AI has no ability to actually understand the words it is saying—but it has gotten quite good at parroting human speech, which can fool people into thinking the information they’re getting from an AI language model is more reliable than it really is.
In an era rife with viral lies, the world needs people to exercise more of their own judgment and critical thinking when searching for information online. In fact, Google recently rolled out a search improvement that does just that: The “about this result” feature gives searchers a quick way to learn more about each of the websites that appear in its search results. Features like these—which help humans do the work of vetting sources for themselves—serve searchers better than any attempt to outsource that labor to AI.