On January 18, Medium published an article about how Google has continually developed three key components — the Knowledge Graph, Voice Search, and Google Now — in order to maintain its lead in search. The following is the main content of the article:

Why is the sky blue? It is a question children ask constantly, and one most parents cannot answer off the top of their heads. Not long ago, finding the right answer meant consulting an encyclopedia, or even a trip to the library. More recently, parents only needed to go to a computer, type the question into Google, weigh the relevant links, skim an explanation, absorb it, and give the child a satisfying answer.

But by 2015, even that accelerated process was falling short of expectations. For one thing, questions were increasingly spoken directly into mobile devices rather than typed into search boxes. For another, while picking the most relevant link from a ranked list still worked for most queries, people who wanted a clear answer had come to expect it instantly. When Google couldn't deliver, they were disappointed, even angry.

So: "Google, why is the sky blue?"

It took the Android phone less than a second to respond to the spoken question, in a tone that was intelligible but unmistakably automated: "On a clear, cloudless day, the sky appears blue because particles in the atmosphere scatter blue light from the sun more than red light."

Amit Singhal, who leads Google's search team, uses this exchange to illustrate how the world's most popular search engine has reinvented itself over the past few years. In interviews over those years, Singhal has likened the introduction of major changes to swapping engines in mid-flight, as Google alters its algorithmic "flight plan" to improve rankings, add new kinds of material (such as images, books, or travel results), or begin searching before users have even finished typing their questions.

Over the past few years, though, Google has changed not only the engines of its planes but much of the cockpit as well. The unstoppable momentum of mobile (searches on phones and tablets were expected to surpass those on desktops in 2015) has prompted something even more significant: changes of this magnitude have forced a rethinking of the mission itself. "We all had to think deeply about what search really means in a mobile world," Singhal said. "And when we thought about it, our heads were spinning."

Constant change

Over the past 17 years, Google Search has evolved steadily, often announced in celebratory blog posts and the occasional press release. (Google reverts to its usual reticence when it comes to quantifying these changes — for years, the company has described the number of signals it uses to rank results only as "over 200.") Because search remains the company's flagship product — and the ads that ride alongside it remain Google's main source of revenue — Google has kept working to improve how billions of people find information. In recent years it has accelerated that pace, through both short-term and long-term efforts, to stay ahead of its competitors.

Users can hardly have missed some of the changes. Search is faster, its content is fresher, and it is more social (though that came with the launch of Google Plus, when the company heavily promoted "social search," a slogan you rarely hear now).
The Google search page looks different, too. "In the early days, it was much simpler — just the homepage and the search results," says Tamar Yehoshua, one of the leaders of Google's search experience. "Now there are a lot of different features and products on the search results page."

Looking further ahead, Google has been pushing the boundaries of artificial intelligence to build a giant "brain" that understands users and the world well enough to deliver accurate results before people even ask. But some critics say Google Search is going downhill. They complain about too many junk results, or that an overemphasis on fresher information crowds out older, more relevant results. We have all heard that Google's vaunted "ten blue links" have been polluted by a slew of confusing, self-serving features such as shopping, news, and multimedia results. (Microsoft's Bing, Google's biggest competitor in the United States, boasts by contrast that it still relies heavily on the ten blue links.) A story on the news aggregation site BuzzFeed last year declared that "we are entering the worst period in modern history" and accused Google of declining usefulness.

Singhal pushes back sharply. "The opposite is true," he said. "I looked into the complaints and found that there is some nostalgia at work. Our search is much better now than it was a year or two ago."

Singhal's comments reflect the pride and confidence that the key figures in Google Search now project. A few years ago, even as Google believed its search quality was unmatched, there were real fears that the company's dominance might be eroded. Google was in the throes of a Facebook panic. "We don't own these connections," Singhal told me in 2011, referring to Facebook's network (Facebook had blocked Google from crawling its data). "I don't know how information flows in these networks." At the height of the anxiety, Singhal took on a kind of doomsayer role at Google, at one point buttonholing Vic Gundotra, the company's head of social, to point out loudly that these closed networks might threaten Google's very existence. Singhal told me in an interview that if anyone suggested the threat was no match for Google's unrivaled power in search, he would respond, "When I was a kid, Pan Am looked very powerful."

Those fears now look overblown; it is unlikely anyone today would compare Google to the long-bankrupt airline. Facebook's Graph Search remains a fledgling product, gaining momentum only slowly and causing Google little disruption. Bing, while a respectable competitor on search quality, still holds less than a fifth of the market. And while Google Plus did not turn out to be the social-networking hit the company had hoped for, it did succeed in getting a great many search users to sign in.

Dominance

The threat Google faces now is not a rival's closed network but the migration of activity from the web to apps (the move to mobile also challenges Google's search advertising revenue). Google believes this rise of information locked inside apps is something it can conquer — after all, mobile developers also want their information to be discoverable. Since the fall of 2013, Google has run a project called App Indexing, which aims to pull data from mobile apps into its overall index. For now, however, App Indexing does not cover iOS apps, which is a significant gap. "There is still a long way to go," said Lawrence Chang, product manager for App Indexing.
"But we have built the basic structural units." For now, though, the challenges of the web app world haven’t affected Google’s dominance in search. The numbers are still staggeringly large. Google receives more than 3 billion search terms every day. In the United States, two-thirds of searches use Google—and globally, the company still enjoys similar dominance (the recent decline in market share is not primarily due to search quality issues, but to Yahoo’s agreement with Firefox to replace Google as the browser’s default search engine). Even more impressive, Google handles more than 80% of mobile searches. In 2013, Google suffered a five-minute outage that reduced network traffic by 40%. No search competitor has the infrastructure, talent or experience that Google has. Not many have the ambition to do what Google does. So while Google has been in the news for regulatory issues, the fortunes and misfortunes of Google Glass and YouTube’s teenage superstars, search has been innovating at a steady, high pace. In some ways, these innovations are just a continuation of the changes Google has made to search since its earliest days. On a micro basis, Google makes small tweaks to its algorithm, touting them in weekly press releases on search quality. Then, every two or three years, there are major updates to its ranking system, creating winners and losers among businesses that are highly relevant to keywords. The most recent was in 2013, with the introduction of an algorithm code-named Hummingbird. Ben Gomes, who has been Singhal’s deputy in search for the past decade, notes that Google has made more changes to search rankings in the past three years than in the previous 13 years. By all accounts, the biggest challenge has been adjusting to the shift from PCs to portable devices. Like many Internet companies, Google Search takes a mobile-centric approach. “Mobile has had a huge impact on how we think about design,” says Jon Wiley, lead designer at Google Search. One of the first things he did after taking over the search design leadership role was to bring the mobile and PC teams together. Initially, the idea was to put a lot of effort into mobile—now, he says, it’s all about thinking about search as a multi-device experience. Knowledge Graph Which of the major changes is the most significant? The Google search team makes no secret of this. Singhal, head of the search business, made it clear: "Of course it's the Knowledge Graph, as long as you start building it, you can slowly understand the real world. The second is voice input - because I can't type here," he said, gesturing to the Samsung smartwatch on his wrist. "We also realized that in addition to prediction, we need some science so that people don't have to ask questions all the time, so we developed Google Now." The Knowledge Graph builds the world's knowledge into a vast database. Voice Search brings voice to search. Google Now tells you the answer before you ask it. All three are closely tied to Google's focus on mobile. These components - and the way they work - have helped transform Google Search over the past three years: from a delivery system of "10 blue links" to something almost supernatural - a system that behaves not like a computer, but an intelligent knowledge repository that can intelligently interpret and meet your information needs. It has done everything before you even start looking. 
Google's 2010 acquisition of a company called Metaweb did not attract much attention at the time, but it turned out to be the key to one of the most significant changes in the history of Google search. Metaweb was founded in 2005 by Danny Hillis, a well-known computer scientist and entrepreneur. Hillis had hatched plenty of inventive projects while running his company Applied Minds, but he judged Metaweb significant enough to spin off as a separate company. Its database, opened up in 2007, was one of the first major exploitations of the so-called Semantic Web — an approach to structuring multiple databases so that their information can be read together easily. "We were trying to create a global database of the world's information," Hillis said. Metaweb was widely seen as a potential Google competitor because it could scan the internet to answer questions. But after a few years in business and more than $50 million in funding, Hillis concluded that the idea could only flourish inside a larger company: Google.

At the time, Google already provided some direct answers to user questions: type "Obama's birthday" and it would display "August 4, 1961" at the top of the results. But as Google explained in the July 2010 blog post announcing the Metaweb acquisition, its search engine could not answer questions like "universities on the West Coast with tuition under $30,000" or "actresses over 40 who have won at least one Oscar." The post promised that Metaweb would help Google provide those answers.

"When Google bought Metaweb, they knew that the concept of 'things' was going to be a very important part of search," said Emily Moxley, a product manager who has worked on the Metaweb project since 2011. "We thought this was a great way to quickly surface facts and information about the things people care about." In May 2012, Google launched the Metaweb material under the name Knowledge Graph. The project has grown from 12 million entities to 500 million. When it sees fit, the product supplements the search results with a set of key facts about the searched topic, placed to the right of the usual rankings — a little like "I'm Feeling Lucky."

Moxley uses the interstate highway system around Richmond, Virginia, to explain how Google decides which queries should get Knowledge Graph treatment. Travelers heading down the East Coast toward Florida know the layout well: outside Richmond, Interstate 95 splits, giving drivers the choice of continuing on the main north–south artery through downtown or taking Interstate 295, which loops around the city and rejoins I-95 south of Richmond. When a user enters a query, she explained, Google expands it into alternative forms and synonyms and then runs an algorithmic test to see whether it is relevant to Knowledge Graph results. "The query might take the exit for 295 and ask, 'OK, what Knowledge Graph content could be useful for this term?' — we search all the documents and pull back relevant content. Then it merges back onto 95 and we say, 'OK, this content is useful enough; let's surface it more prominently.'"

In the more than two years since the Knowledge Graph was integrated into Google Search, the company has kept improving it (Google has not officially said what percentage of queries the Knowledge Graph answers, but it appears to be around 25 percent).
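Moxley's highway metaphor can be read as a small decision pipeline: expand the query, let a copy of it "take the 295 exit" to check the Knowledge Graph, and merge the result back into the page only if it clears a usefulness threshold. The sketch below is a toy rendering of that idea; the synonym table, scoring rule, and threshold are all invented for illustration.

```python
# Toy rendering of the "295 detour": expand a query, check the Knowledge Graph
# for a relevant entry, and surface a panel only if the match looks useful.
# Synonyms, scoring, and the threshold are invented for illustration.

SYNONYMS = {"vw": "volkswagen", "boss": "ceo"}

KNOWLEDGE_GRAPH = {
    "volkswagen": {"kind": "Company", "Headquarters": "Wolfsburg, Germany"},
    "spaceballs": {"kind": "Film", "Director": "Mel Brooks", "Released": "1987"},
}

USEFULNESS_THRESHOLD = 0.4


def expand(query: str) -> list[str]:
    """Rewrite query words into alternative forms and synonyms."""
    return [SYNONYMS.get(w, w) for w in query.lower().split()]


def take_295(words: list[str]):
    """The detour: look for Knowledge Graph entries relevant to the query."""
    for w in words:
        entry = KNOWLEDGE_GRAPH.get(w)
        if entry is not None:
            score = 1.0 / len(words)   # toy score: share of query words matched
            yield w, entry, score


def search(query: str) -> dict:
    words = expand(query)
    results = {"links": ["...ranked web results..."]}
    for name, entry, score in take_295(words):
        if score >= USEFULNESS_THRESHOLD:
            # Merge back onto 95: the panel is useful enough to show prominently.
            results["panel"] = {name: entry}
            break
    return results


print(search("vw headquarters"))   # a Volkswagen panel plus the usual links
```

The design point the metaphor captures is that the ordinary ranked results keep flowing regardless; the Knowledge Graph check is a side trip whose output is added only when it looks worth showing.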
At first, the Knowledge Graph did little learning of its own, but the product gradually absorbed the learning ability of Google Search itself, analyzing the habits of its users. Moxley gives the example of "Who played Barf in the movie Spaceballs?" The Knowledge Graph has seen so many queries of this shape that it knows to return a result connecting actors and movies — and to do it very quickly.

The Knowledge Graph has also made great strides in another important area: freshness. Because Google assumes there is one correct answer to such a question, its information must be up to date; otherwise the answer is simply wrong, which is worse for users than no answer at all. When the Knowledge Graph launched in 2012, a change to one of its entities — say, Volkswagen naming a new CEO — might take up to two weeks to be reflected in the system, Moxley said. Now the system can process the news and adjust within minutes. Yet she acknowledged that this particular "Volkswagen CEO" entry was both a success and a failure for the Knowledge Graph. The new CEO would not officially take office for several months, so the Knowledge Graph continued to display the current leader — while many users typing "Volkswagen CEO" into Google were presumably looking for information about the successor. Even though the Knowledge Graph was correct, its response may not have satisfied them.

Google still has plenty of work to do. Adding more fields and industries is the start; the project recently added knowledge about cars, video games, and Hugo Award winners. But Moxley says Google is also trying to figure out how to provide more complex results — not just quick facts but more subjective, fuzzy associations. "People are not just interested in facts," she says. "They're interested in subjective things, like whether this TV show was good. Those things can help take the Knowledge Graph to the next level." It is as if Google wants users to feel they are not running a mechanical search but consulting a sage who not only knows everything but holds opinions about culture.

There is still a long way to go, though, and as expectations rise for the information the Knowledge Graph provides, its mistakes continue to frustrate users. Moxley was recently bothered to realize that the Knowledge Graph knew about TV shows but lacked information about new shows and when they air. "I want alerts telling me there's a new show coming out this week, and I want to know what website is showing it so I can go and watch it," she said, promising that Google will eventually get through this "middle stage," even if the project has not yet catalogued everything. Speaking of raised expectations, perhaps the most glaring lapses are the two questions Google itself cited when it acquired Metaweb in the summer of 2010. Four years later, its search engine still cannot give one-stop answers to "universities on the West Coast with tuition under $30,000" or "actresses over 40 who have won at least one Oscar."

Voice Input

As Google realized that mobile technology was going to become ubiquitous, it made a subtle but significant change to its approach to search. Instead of treating search terms as instructions to a computer system, the company began treating everything typed as a conversation. "Once you have this type of device," Ben Gomes said, holding up a phone, "it's clear that voice input is going to be very important. It's also clear that speaking is more natural than typing."
This adjustment involves more than changing how the search engine processes terms; it means changing us. We can now treat the search box — on a desktop or on a mobile device — as something we can hold a conversation with. "Before Google, people didn't have the concept of query terms, and we spent years teaching people to use them," said Tamar Yehoshua. "But wouldn't it be easier if you could just communicate in a normal way, without having to think so hard? That would be a beautiful thing."

Making this change requires two things. First, Google's search engine has to listen more carefully and analyze spoken input. Second, Google has to make sure that when users speak into their phones — or type into the search box — its systems understand what they mean.

Google has in fact been working on speech recognition for a long time. "We certainly knew for many years that these foundational pieces — speech, natural language processing — were going to be important," Yehoshua says. "We knew those were investments, that those were unsolved problems in the tech world, and that it would take years to get there." In the late 2000s, Google ran a service called GOOG-411 that provided, for free, the directory assistance phone companies charged for, and it used those millions of calls to learn how to correctly interpret speech across languages and accents. That was useful, but in some places Google lacked the roughly 2,500 phrase samples it needed to analyze voice input. So the company began sending small teams to various regions, posting notices on Google properties beforehand that it wanted to collect speech samples. The effort in Indonesia was typical. "On the second day, 900 people showed up," says Linne Ha, a Google speech expert. When Google runs these studies, it collects data in conditions that match the region — recording subjects on the streets of Hong Kong and in the Paris metro, for example. The effort has paid off: Google Search supports 159 languages, and Voice Search now supports 58 of them. Google says the feature's word error rate has fallen to 8 percent.

Gomes points proudly to one milestone in this evolution: he now does the voice demonstrations himself. "My accent is all over the place," says the Indian-born engineer. "My vowels are American, but I don't pronounce my R's." Before the voice project matured, Gomes never personally demonstrated Google's speech recognition; the company relied on an in-house expert with a standard American accent who could work harmoniously with the machine. Gomes has since lost touch with him. "He's no longer needed for the job," he says. "I can do the demonstrations myself — sit me in front of a reporter and let me do the voice input, and I'm not afraid of that."

Google also had to decide how the phone should talk back. Should it be anthropomorphic, like Siri, or use an obviously robotic tone so users know they are talking to a system? Google chose the latter. Jon Wiley, the company's search design lead, said that to pull off the illusion of talking to a conscious being, they would need Pixar-level storytelling. "I think there is still a long way to go before computers have personality and humans can communicate with them naturally."
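The 8 percent figure above refers to word error rate, the standard way of scoring a speech recognizer: the number of word substitutions, insertions, and deletions needed to turn the transcript into the reference, divided by the number of words in the reference. A small, self-contained sketch of that textbook calculation (not Google's code) follows.

```python
# Word error rate (WER): word-level edit distance between what the recognizer
# heard and what was actually said, divided by the length of the reference.
# This is a generic textbook calculation, not Google's implementation.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # Dynamic-programming edit distance over words.
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i                      # deletions
    for j in range(len(hyp) + 1):
        dist[0][j] = j                      # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(
                dist[i - 1][j] + 1,         # deletion
                dist[i][j - 1] + 1,         # insertion
                dist[i - 1][j - 1] + cost,  # substitution or match
            )
    return dist[len(ref)][len(hyp)] / len(ref)


spoken = "why is the sky blue"
heard = "why is a sky blue"
print(f"WER: {word_error_rate(spoken, heard):.0%}")   # one error in five words
```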
Whatever the distance still to travel, the technology has advanced quickly enough for Google (and, of course, other companies) to reach the level of voice interaction researchers have dreamed of for decades. "I think there are three or four things that have made this possible," Gomes said. "Obviously, computers have gotten faster and more powerful. The hardware — the microphones — has gotten better. There have been advances in the software algorithms. But the biggest change has been our ability to understand language."

Fernando Pereira, a distinguished research scientist in the search division who has worked on natural language processing for the past 30 years, says Google long ago became very good at taking a search term and matching it against documents on the web and other repositories of information. "When you search, there's a good chance that the words you use will appear in some of the results," he says. But adding a database like the Knowledge Graph to the search engine brings new challenges along with new opportunities. "It becomes harder to anticipate whether the language you use will line up with the way the database is designed," he notes.

It is not easy. When Google receives a query like "Where do the Giants play?" it has to know a great deal: that the query is about sports, about the stadium where a team plays, and so on. It also has to make a choice — the baseball Giants or the football Giants? Does the user want to know where the team usually plays, or where its game will be next week? Google uses signals from the query and from the user's previous habits to decide. "All of this thinking, all of this reasoning, was something we couldn't do a few years ago," Pereira says.

Once those hurdles are cleared, Google's natural-language systems can be further strengthened by the Knowledge Graph. "We start to understand things in the world," Gomes said. That lets Google guess correctly what a user is asking even when the query is misspelled or the phrasing is off. If someone says "David Cameron" to a phone, Gomes pointed out, the system already knows that those two words usually go together and that they refer to a man — so a later "he" can be resolved. Even if the microphone doesn't quite catch a surname spoken later, the Knowledge Graph can work out that the user means the British prime minister. The more Google knows about the world, the better it understands its users.

Google Now

In 2004, I asked Larry Page and Sergey Brin about their long-term vision for search. Page said search would eventually be built into people's brains: "When you think about something and don't know much about it, you will automatically get information." The key, Brin said, is "a device that's communicating with you, or a computer that's paying attention to what's going on around you, coming up with useful information." In 2010, two engineers working on Android, Baris Gultekin and Andrew Kirmse, embarked on a side project very much in line with that vision, and the result was Google Now. According to Gultekin, who moved from Google Now to other projects at the company last year, the product has stuck closely to the original pitch. "The core statement is: today people's phones aren't smart, but they can be," he said. "What if we could merge the capabilities of these powerful, internet-connected devices with the capabilities of Google?"
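Returning for a moment to Pereira's "Where do the Giants play?" example: the choice between the baseball and football teams can be framed as scoring candidate entities with signals from the query and from the user's history. The sketch below is a toy version of that idea; the candidates, signal lists, venues, and weights are invented for illustration.

```python
# Toy disambiguation for "Where do the Giants play?": score candidate entities
# using words in the query plus the user's past behavior. All candidates,
# signal lists, and weights are invented for illustration.

CANDIDATES = {
    "San Francisco Giants": {
        "venue": "their ballpark in San Francisco",
        "cue_words": {"baseball", "mlb", "pitcher"},
    },
    "New York Giants": {
        "venue": "MetLife Stadium in New Jersey",
        "cue_words": {"football", "nfl", "quarterback"},
    },
}


def resolve(query: str, recent_searches: list[str]) -> str:
    """Pick the candidate whose cues best match the query and user history."""
    query_words = set(query.lower().split())
    history_words = set(" ".join(recent_searches).lower().split())
    best_name, best_score = None, float("-inf")
    for name, info in CANDIDATES.items():
        score = 2.0 * len(info["cue_words"] & query_words)     # query signal
        score += 1.0 * len(info["cue_words"] & history_words)  # habit signal
        if score > best_score:
            best_name, best_score = name, score
    return f"{best_name} play at {CANDIDATES[best_name]['venue']}"


print(resolve("where do the giants play",
              recent_searches=["nfl standings", "quarterback rankings"]))
```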
In other words — and this was the heart of the pitch — Google Now can answer questions you haven't yet asked. Doing so means combining information from multiple domains to figure out what matters. Gultekin says it was daunting at first to imagine a system that could do this, but he and his partner began breaking the problem down computationally. Even something as narrow as helping with a commute requires real knowledge: the location of your home and office, the best routes, current traffic, and so on. It helped enormously that Google Maps already knew how to navigate the road network — and that is exactly the point: Google could bring its full power to bear on this new kind of search tool. Soon they had a useful app for commuters. "But we didn't want it to be just a commuter app," Gultekin says. "We wanted it to be a proactive assistant that could do many things." So when Google Now launched in July 2012, it focused on a handful of areas: commuting, flights, sports scores, nearby places, travel, public transit, and weather. It now covers more than 70 areas and is growing quickly. "My grand vision is that Google Now provides most of the information you need, and everything else becomes a fallback," Gultekin said.

Google Now's effectiveness depends on deeply integrating knowledge of the world (everything Google Search and its Knowledge Graph can provide) with personal information. That is why this corner of search could be seen as a stand-in for Google as a whole: every time Google Now delivers a timely card, it draws on a host of Google services. A typical card combines information from personal email, calendar, and contacts with bus routes, traffic conditions, and weather. Often, people don't know what Google Now can do until it does it. Park your car, for instance, and Google Now will note that you parked and remember where — in case you forget. If your email tells Google Now that you are house-hunting, the service might send you listings in the area where you would like to live.

As Google Now matured, it grew from a side project into a full-time service. Its biggest boost may have come in 2011, when Apple launched Siri — an event that caused some alarm at Google headquarters and attracted more resources to the voice-oriented project. Google Now became an official part of the search division, though the team actually answered to both the search and Android organizations. "Search and Google Now complement each other," Gultekin said. "We want to give users information before they search, but in many cases we simply can't know that your pipe burst and you need a plumber." (In the future, of course, Google could know when your pipes burst or your house is on fire, through its smart-home business Nest. Gultekin says Nest integration "may come in the future, but not now.")

Unlike traditional search, Google Now works best only if you use Google products across the board. "Larry [Page] has said that search should understand your intent and provide the information you need," Yehoshua said. "This is a Google ecosystem: if you sign in on your phone and your PC, we can use it. If you want flight information, or to track your package, or anything from Gmail, we will bring it to you. If you don't use Gmail, you won't get that information — but you can still use our voice service and get answers."
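The ecosystem dependence Yehoshua describes is easy to picture as a card generator that pulls from several sources and degrades gracefully when one of them, such as Gmail, is unavailable. The following sketch is purely illustrative; the data sources, card format, and fallback behavior are assumptions, not Google's design.

```python
# Illustrative Google Now-style card assembly: combine personal data (email,
# calendar) with world data (traffic, weather) and degrade gracefully when a
# source such as Gmail is not available. All sources and fields are invented.

from typing import Optional


def flight_card(gmail_flights: Optional[list], weather: str) -> Optional[dict]:
    """Build a flight card only if Gmail data is available."""
    if not gmail_flights:
        return None            # no Gmail, no proactive flight card
    flight = gmail_flights[0]
    return {
        "card": "Upcoming flight",
        "flight": flight["number"],
        "departs": flight["departs"],
        "weather_at_destination": weather,
    }


def commute_card(home: str, office: str, traffic_minutes: int) -> dict:
    """A commute card needs only location and traffic data, not email."""
    return {
        "card": "Commute",
        "route": f"{home} -> {office}",
        "estimated_time": f"{traffic_minutes} min",
    }


# A signed-in user with Gmail access gets both cards; without Gmail, only the
# commute card appears and flight questions fall back to ordinary search.
gmail_data = [{"number": "UA 123", "departs": "08:40"}]
cards = [
    commute_card("Home", "Office", traffic_minutes=35),
    flight_card(gmail_data, weather="Rain, 12°C"),
]
for card in filter(None, cards):
    print(card)
```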
There is no way around it: if you want to use Google Now but Gmail is not your primary email, you will not get the full value of Google Now, or even of Google Search. "It would be wonderful if we could share this information with everyone. I don't think that's going to happen tomorrow. And of course there would be pressure from Apple over it."

Google quite deliberately does not treat Google Now as a standalone product. Instead, it is part of the search application — an application that is not named after search at all, but simply called "Google." The naming signals both how closely search has always been identified with Google and how important Google Now is to the company. Still, the Google Now widget is optional, and it comes with privacy warnings: the product can seem all-knowing, but that is also a reminder of how much the giant knows about us. As Google's grip on personal information draws more scrutiny — especially in Europe, where governments are pushing for regulation and fines and even threatening a breakup — the company's ambitions could be checked by privacy concerns. Even for those who trust Google, the Snowden revelations showed how easily governments can reach our information. If Google Now knows where you parked your car, so, presumably, do the local intelligence services.

Singhal argues that the first era of search was defined by people imagining they were interacting with some distant machine. The new era has removed that barrier. We expect our phones to understand us. We expect search to be as adept at answering questions that involve our personal information as it is at mining web pages, documents, and public databases. "I think of search as the interface to all computing," Singhal said. "When devices disappear or get smaller, how will you interact with them? Most of the time you'll want to take an action — something as simple as playing a song, or as involved as writing a note to remind yourself to buy milk when you pass the grocery store. Or you want to know whether your wife's flight is on time. Or how tall Barack Obama is."

People may take all this for granted — they may even complain that Google Search isn't what it used to be. But Singhal believes search has cleared a hurdle he wrestled with for decades. "I couldn't do this in 20 years as a researcher," he says of what his team has accomplished. He admits there are still many problems to solve. But he is full of pride when he describes the science behind the queries — so that when someone asks a question as simple as "Why is the sky blue?", Google can answer.