Google’s Real-Life Babelfish Will Translate The World

The tough thing about translation: You need someone who actually speaks both languages. Easy for Spanish to English, not so much for Swahili to Inuktitut. In the Plex by Steven Levy illustrates how Google’s machine translation will revolutionize human communication.

It was no coincidence that the man who eventually headed Google’s research division was the co-author of Artificial Intelligence: A Modern Approach, the standard textbook in the field. Peter Norvig had been in charge of the Computational Sciences Division at NASA’s Ames Research Center, not far from Google. At the end of 2000, it was clear to Norvig that turmoil in the agency had put his programs in jeopardy, so he figured it was a good time to move. He had seen Larry Page speak some months before and sensed that Google’s obsession with data might present an opportunity for him. He sent an email to Page and got a quick reply — Norvig’s AI book had been assigned reading for one of Page’s courses. After arriving at Google, Norvig hired about a half-dozen people fairly quickly and put them to work on projects. He felt it would be ludicrous to have a separate division at Google that specialised in things like machine learning — instead, artificial intelligence should be spread everywhere in the company.

One of the things high on Google’s to-do list was translation, rendering the billions of words appearing online into the native language of any user in the world. By 2001, Google.com was already available in twenty-six languages. Page and Brin believed that artificial barriers such as language should not stand in the way of people’s access to information. Their thoughts were along the lines of the pioneer of machine translation, Warren Weaver, who said, “When I look at an article in Russian, I say, ‘This is really written in English, but it has been coded in some strange symbols. I will now proceed to decode.’ ” Google, in their minds, would decode every language on the planet. There had been previous attempts at online translation, notably a service dubbed Babel Fish that first appeared in 1997. Google’s own project, begun in 2001, had at its core a translation system licensed from another company — basically the same system that Yahoo and other competitors used. But the system was often so inaccurate that it seemed as though the translated words had been selected by throwing darts at a dictionary. Sergey Brin highlighted the problems at a 2004 meeting when he provided Google’s translation of a South Korean email from an enthusiastic fan of the company’s search technology. It read, “The sliced raw fish shoes it wishes. Google green onion thing!”

By the time Brin expressed his frustration with the email, Google had already identified a hiring target who would lead the company’s translation efforts – in a manner that solidified the artificial intelligence focus that Norvig saw early on at Google. Franz Och had focused on machine translation while earning his doctorate in computer science from RWTH Aachen University in his native Germany and was continuing his work at the University of Southern California. After he gave a talk at Google in 2003, the company made him an offer. Och’s biggest worry was that Google was primarily a search company and its interest in machine translation was merely a flirtation. A conversation with Larry Page dissolved those worries. Google, Page told him, was committed to organizing all the information in the world, and translation was a necessary component. Och wasn’t sure how far you could push the system – could you really build one for twenty language pairs? (In other words, if your system had twenty languages, could it translate any of those to any other?) That would be unprecedented. Page assured him that Google intended to invest heavily. “I said okay,” says Och, who joined Google in April 2004. “Now we have 506 language pairs, so it turned out it was worthwhile.”

Earlier efforts at machine translation usually began with human experts who knew both languages that would be involved in the transformation. They would incorporate the rules and structure of each language so they could break down the original input and know how to recast it in the second tongue. “That’s very time-consuming and very hard, because natural language is so complex and diverse and there are so many nuances to it,” says Och. But in the late 1980s some IBM computer scientists devised a new approach, called statistical machine translation, which Och embraced. “The basic idea is to learn from data,” he explains. “Provide the computer with large amounts of monolingual text, and the computer should figure out himself what those structures are.” The idea is to feed the computer massive amounts of data and let him (to adopt Och’s anthropomorphic pronoun) do the thinking. Essentially Google’s system created a “language model” for each tongue Och’s team examined. The next step was to work with texts in different languages that had already been translated and let the machines figure out the implicit algorithms that dictate how one language converts to another. “There are specific algorithms that learn how words and sentences correspond, that detect nuances in text and produce translation. The key thing is that the more data you have, the better the quality of the system,” says Och.
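
Och’s description can be made concrete with a toy version of the technique those IBM researchers pioneered. The sketch below is a radically simplified take on their “Model 1” word-alignment algorithm, and it is illustrative only: the three-sentence corpus, the fixed iteration count, and every name in it are invented, and it bears no relation to Google’s production code.

```python
from collections import defaultdict

# Toy parallel corpus of (foreign, English) sentence pairs.
# Real systems learn from billions of words, not three sentences.
corpus = [
    ("la maison".split(), "the house".split()),
    ("la fleur".split(), "the flower".split()),
    ("une maison".split(), "a house".split()),
]

english_vocab = {e for _, es in corpus for e in es}

# Start with uniform translation probabilities t(e | f).
t = defaultdict(lambda: 1.0 / len(english_vocab))

# Expectation-maximization: guess which words align, then re-estimate.
for _ in range(20):
    count = defaultdict(float)  # expected times f was translated as e
    total = defaultdict(float)  # expected times f was translated at all
    for fs, es in corpus:
        for e in es:
            norm = sum(t[(e, f)] for f in fs)
            for f in fs:
                frac = t[(e, f)] / norm  # share of e explained by f
                count[(e, f)] += frac
                total[f] += frac
    for (e, f), c in count.items():
        t[(e, f)] = c / total[f]

# The correspondence emerges from co-occurrence statistics alone:
best = max(english_vocab, key=lambda e: t[(e, "maison")])
print("maison ->", best)  # prints: maison -> house
```

No one tells the program that “maison” means “house”; that pairing simply explains the parallel data better than any alternative, which is the whole statistical bet.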

The most important data were pairs of documents that were skillfully translated from one language to another. Before the internet, the main source material for these translations had been corpuses such as UN documents that had been translated into multiple languages. But the web had produced an unbelievable treasure trove – and Google’s indexes made it easy for its engineers to mine billions of documents, unearthing even the most obscure efforts at translating one document or blog post from one language to another. Even an amateurish translation could provide some degree of knowledge, but Google’s algorithms could figure out which translations were the best by using the same principles that Google used to identify important websites. “At Google,” says Och, with dry understatement, “we have large amounts of data and the corresponding computational resources we need to build very, very, very good systems.”

Och began with a small team that used the latter part of 2004 and early 2005 to build its systems and craft the algorithms. For the next few years, in fact, Google launched a minicrusade to sweep up the best minds in machine learning, essentially bolstering what was becoming an AI stronghold in the company. Och’s official role was as a scientist in Google’s research group, but it is indicative of Google’s view of research that no step was required to move beyond study into actual product implementation.

Because Och and his colleagues knew they would have access to an unprecedented amount of data, they worked from the ground up to create a new translation system. “One of the things we did was to build very, very, very large language models, much larger than anyone has ever built in the history of mankind.” Then they began to train the system. To measure progress, they used a statistical model that, given a series of words, would predict the word that came next. Each time they doubled the amount of training data, they got a 0.5 per cent boost in the metrics that measured success in the results. “So we just doubled it a bunch of times.” In order to get a reasonable translation, Och would say, you might feed something like a billion words to the model. But Google didn’t stop at a billion.
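
The next-word metric Och describes is easy to picture in miniature. Here is a minimal sketch, assuming nothing about Google’s internals: a bigram model that predicts each word from the one before it, trained on a few invented sentences.

```python
from collections import Counter, defaultdict

# Invented training text; the real models saw billions of words.
text = ("the cat sat on the mat . the dog sat on the rug . "
        "the cat ate the fish .").split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Given a word, predict the most likely word to come next."""
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' (follows 'the' most often)
print(predict_next("sat"))  # 'on'
```

Score such a model by how often its guess matches the actual next word in held-out text, and you get the kind of number that, as Och found, nudges upward every time the training data doubles.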

By mid-2005, Google’s team was ready to participate in the annual machine translation contest sponsored by the National Institute of Standards and Technology. At the beginning of the event, each competing team was given a series of texts and then had a couple of days for its computers to do the translation while government computers ran evaluations and scored the results. For some reason, NIST didn’t characterize the contest as one in which a participant is crowned champion, so Och was careful not to declare Google the winner. Instead, he says, “Our scores were better than the scores of everyone else.” One of the language pairs it was tested on involved Arabic. “We didn’t have an Arabic speaker on the team but did the very best machine translation.”

By not requiring native speakers, Google was free to provide translations to the most obscure language pairs. “You can always translate French to English or English to Spanish, but where else can you translate Hindi to Danish or Finnish or Norwegian?”
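
One common way statistical systems serve such rare pairs (and, by many accounts, the way Google has bridged most of them) is pivot translation: route the text through a well-resourced language, usually English, rather than training a direct Hindi-to-Danish model. A deliberately cartoonish sketch, with invented word lists:

```python
# Toy pivot translation: bridge a rare pair through a common language.
# The dictionaries here stand in for full statistical models.
hindi_to_english = {"पानी": "water", "किताब": "book"}
english_to_danish = {"water": "vand", "book": "bog"}

def pivot_translate(word, src_to_pivot, pivot_to_tgt):
    """Translate via a pivot language when no direct model exists."""
    pivot = src_to_pivot.get(word)
    return pivot_to_tgt.get(pivot) if pivot else None

print(pivot_translate("पानी", hindi_to_english, english_to_danish))  # vand
```

With a pivot, a system that handles N languages can cover all N × (N − 1) directed pairs while only ever training models into and out of the pivot language.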

A long-term problem in computer science had been speech recognition — the ability of computers to hear and understand natural language. Google applied Och’s techniques to teaching its vast clusters of computers how to make sense of the things humans said. It set up a telephone number, 1-800-GOOG-411, and offered a free version of what the phone companies used to call directory assistance. You would say the name and city of the business you wanted to call, and Google would give the result and ask if you wanted to be connected. But it was not a one-way exchange. In return for giving you the number, Google learned how people spoke, and since it could tell if its guess was successful, it had feedback that told it where it went wrong. Just as with its search engine, Google was letting its users teach it about the world. “What convinced me to join Google was its ability to process large-scale information, particularly the feedback we get from users,” says Alfred Spector, who joined in 2008 to head Google’s research division. “That kind of machine learning has just not happened like it’s happened at Google.”

Over the years Google has evolved what it calls “a practical large scale machine learning system” that it has dubbed “Seti”. The name comes from the Search for Extraterrestrial Intelligence, which scans the universe for evidence of life outside Earth; Google’s system also works on the scale of the universe as it searches for signals in its mirror world. Google’s indexes almost absurdly dwarf the biggest data sets formerly used in machine learning experiments. The most ambitious data set in the UCI KDD Archive of Large Data Sets for Data Mining Research and Experimentation is a set of 4 million instances used for fraud and intrusion detection. Google’s Seti learning system uses data sets with a mean training set size of 100 billion instances.

Google’s researchers would acknowledge that working with a learning system of this size put them into uncharted territory. The steady improvement of its learning system flirted with the consequences postulated by scientist and philosopher Raymond Kurzweil, who speculated about an impending “singularity” that would come when a massive computer system evolves its way to intelligence. Larry Page was an enthusiastic follower of Kurzweil and a key supporter of Kurzweil-inspired Singularity University, an educational enterprise that anticipates a day when humans will pass the consciousness baton to our inorganic progeny.

What does it mean to say that Google “knows” something? Does Google’s Seti system tell us that in the search for nonhuman intelligence we should not look to the skies but to the million-plus servers in Google’s data centers?

“That’s a very deep question,” says Spector. “Humans, really, are big bags of mostly water walking around with a lot of tubes and some neurons and all. But we’re knowledgeable. So now look at the Google cluster computing system. It’s a set of many heuristics, so it knows ‘vehicle’ is a synonym for ‘automobile’, and it knows that in French it’s ‘voiture’, and it knows it in German and every language. It knows these things. And it knows many more things that it’s learned from what people type.” He cited other things that Google knows: for example, Google had just introduced a new heuristic where it determined from your searches whether you might be contemplating suicide, in which case it would provide you with information on sources of aid. In this case, Google’s engine gleans predictive clues from its observations of human behaviour. They are formulated in Google’s virtual brain just as neurons are formed in our own wetware. Spector promised that Google would learn much, much more in coming years.

“Do these things rise to the level of knowledge?” he asks rhetorically. “My ten-year-olds believe it. They think Google knows a lot. If you asked anyone in their grade school class, I think the kids would say yes.”

What did Spector, a scientist, think?

“I’m afraid that it’s not a question that is amenable to a scientific answer,” he says. “I do think, however, loosely speaking, Google is knowledgeable. The question is, will we build a general-purpose intelligence which just sits there, looks around, then develops all those skills unto itself, no matter what they are, whether it’s medical diagnosis or . . .” Spector pauses. “That’s a long way off,” he says. “That will probably not be done within my career at Google.” (Spector was fifty-five at the time of the conversation in early 2010.)

“I think Larry would very much like to see that happen,” he adds.

In fact, Page had been thinking about such things for some time. Back in 2004, I asked Page and Brin what they saw as the future of Google search. “It will be included in people’s brains,” said Page. “When you think about something and don’t really know much about it, you will automatically get information.”

“That’s true,” said Brin. “Ultimately I view Google as a way to augment your brain with the knowledge of the world. Right now you go into your computer and type a phrase, but you can imagine that it could be easier in the future, that you can have just devices you talk into, or you can have computers that pay attention to what’s going on around them and suggest useful information.”

“Somebody introduces themselves to you, and your watch goes to their web page,” said Page. “Or if you met this person two years ago, this is what they said to you.” Later in the conversation Page said, “Eventually you’ll have the implant, where if you think about a fact, it will just tell you the answer.”

It was a fantastic vision, straight out of science fiction. But Page was making remarkable progress — except for the implant. When asked in early 2010 what would come next for search, he said that Google would know about your preferences and find you things that you don’t know about but want to know about. So even if you don’t know what you’re looking for, Google will tell you.

What Page didn’t mention was how far along Google was on that path. Ben Gomes, one of the original search rock stars, showed a visitor something he was working on called “Search-as-You-Type.” Other internal names for it were “psychic” and “Miss Cleo,” in tribute to a television fortune-teller. As the more prosaic name implied, this feature enables search to start delivering results even before you finish typing the query. He started typing “finger shoes” — the term that people often use to describe the kind of footwear Sergey Brin often sports, rubberized slippers with individual sleeves that fit toes the way gloves fit your fingers. Of course, Google search, with all the synonyms and knowledge fed to it by billions of searchers who clicked long and those who clicked short, knew what he was talking about. Gomes hadn’t finished typing the second word before the page filled with links — and ads! — confidently assuming that he wanted information, and maybe a buying opportunity, involving “Vibram Five Fingers, the barefoot alternative.” “It’s a weird connection between your brain and the results,” Gomes said. (In September 2010, Google introduced this product as “Google Instant.”)
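
Mechanically, the core of such a feature is small: keep a sorted index of queries and, on every keystroke, return the entries beginning with whatever has been typed so far. The sketch below is a toy stand-in; the query list is invented, and the real Google Instant ran the company’s full ranking machinery behind each keystroke.

```python
import bisect

# Invented query index, kept sorted so prefixes can be found quickly.
queries = sorted([
    "finger painting", "finger shoes", "fingerprint scanner",
    "five fingers shoes", "vibram five fingers",
])

def results_as_you_type(prefix, limit=3):
    """Return up to `limit` indexed queries that start with the prefix."""
    i = bisect.bisect_left(queries, prefix)
    out = []
    while i < len(queries) and queries[i].startswith(prefix):
        out.append(queries[i])
        i += 1
        if len(out) == limit:
            break
    return out

# Results refresh with each keystroke, before the query is finished.
for partial in ["f", "fi", "finger s"]:
    print(partial, "->", results_as_you_type(partial))
```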

“Search is going to get more and more magical,” says search engineer Johanna Wright. “We’re going to get so much better at it that we’ll do things that people can’t even imagine.” She mentioned one example of a demo being passed around. “Say you type in ‘hamburger.’ Right now, Google will show you hamburger recipes. But we’re going to show you menus and reviews of where you can get a hamburger near you, which is great for anyone living in a place where there are restaurants. I call this project Blueberry Pancakes because if I want to check those out, it’ll tell me about the pancake house in Los Altos, and I’ll go there. It’s just another example of where we’re going — Google’s just going to really understand you better and solve many, many, many more of your needs.”

That would put Google in the driver’s seat on many decisions, large and small, that people make in the course of a day and their lives. Remember, more than 70 percent of searches in the United States are Google searches, and in some countries the percentage is higher. That represents a lot of power for the company founded by two graduate students just over a decade ago. “In some sense we’re responsible for people finding what they need,” says Udi Manber. “Whenever they don’t find it, it’s our fault. It’s a huge responsibility. It’s like we’re doctors who are responsible for life.”

Maybe, it was suggested to Manber, however well intentioned Google’s brainiacs were, it was not necessarily a good thing for any single entity to have the answer, whether it was hardwired to your brain or not.

“It may surprise you,” says Udi Manber, “but I completely agree with that. And it scares the hell out of me.”

From IN THE PLEX by Steven Levy. Copyright © 2011 by Steven Levy. Published by Simon & Schuster, Inc. Reprinted by permission.

Original artwork by Gizmodo guest artist Chris “Powerpig” McVeigh. You can check him out on Flickr or Facebook. Or both!

Steven Levy is a senior writer at Wired magazine. He was formerly a senior editor and chief technology writer at Newsweek magazine. He has written on technology for a wide variety of publications, including Rolling Stone, The New Yorker, and The New York Times.

In The Plex: How Google Thinks, Works, and Shapes Our Lives is available from Amazon.com

