There are many accounts of the genesis of Watson. The most popular, which is not necessarily the most accurate—and this is the sort of problem that Watson himself often stumbled on—begins in 2004, at a steakhouse near Poughkeepsie. One evening, an I.B.M. executive named Charles Lickel was having dinner there when he noticed that the tables around him had suddenly emptied out. Instead of finishing their sirloins, his fellow-diners had rushed to the bar to watch “Jeopardy!” This was deep into Ken Jennings’s seventy-four-game winning streak, and the crowd around the TV was rapt. Not long afterward, Lickel attended a brainstorming session in which participants were asked to come up with I.B.M.’s next “grand challenge.” The firm, he suggested, should take on Jennings.
I.B.M. had already fulfilled a similar “grand challenge” seven years earlier, with Deep Blue. The machine had bested Garry Kasparov, then the reigning world chess champion, in a six-game match. To most people, beating Kasparov at chess would seem a far more impressive feat than coming up with “Famous First Names,” say, or “State Birds.” But chess is a game of strictly defined rules. The open-endedness of “Jeopardy!”—indeed, its very goofiness—made it, for a machine, much more daunting.
Lickel’s idea was batted around, rejected, and finally resurrected. In 2006, the task of building an automated “Jeopardy!” champion was assigned to a team working on question-answering technology, or QA. As Stephen Baker recounts in his book about the project, “Final Jeopardy,” progress was, at first, slow. Consider the following (actual) “Jeopardy!” clue: “In 1984, his grandson succeeded his daughter to become his country’s Prime Minister.” A person can quickly grasp that the clue points to the patriarch of a political family and, with luck, summon up “Who is Nehru?” For a computer, the sentence is a quagmire. Is what’s being sought a name? If so, is it the name of the grandson, the daughter, or the Prime Minister? Or is the question about geography or history?
Watson—basically a collection of processing cores—could be loaded with whole Wikipedias’ worth of information. But just to begin to search this enormous database Watson had to run through dozens of complicated algorithms, which his programmers referred to as his “parsing and semantic analysis suite.” This process yielded hundreds of “hypotheses” that could then be investigated.
After a year, many problems with Watson had been solved, but not the essential one. The computer took hours to generate answers that Jennings could find in an instant.
A year turned into two and then three. Watson’s hardware was upgraded. Benefitting from algorithms that allowed him to learn from his own mistakes, he became more proficient at parsing questions and judging the quality of potential answers. In 2009, I.B.M. began to test the machine against former, sub-Jennings “Jeopardy!” contestants. Watson defeated some, lost to others, and occasionally embarrassed his creators. In one round, in response to a question about nineteenth-century British literature, the computer proposed the eighties pop duo Pet Shop Boys when the answer was Oliver Twist. In another round, under the category “Just Say No,” Watson offered “What is fuck?” when the right response was “What is nein?”
I.B.M.’s aspirations for Watson went way beyond game shows. A computer that could cope with the messiness and the complexity of English could transform the tech world; one that could improve his own performance in the process could upend nearly everything else. Firms like Google, Microsoft, and Amazon were competing with I.B.M. to dominate the era of intelligent machines, and they continue to do so. For the companies involved, hundreds of billions of dollars are at stake, and the same could also be said for the rest of us. What business will want to hire a messy, complex carbon-based life form when a software tweak can get the job done just as well?
Ken Jennings, who might be described as the first person to be rendered redundant by Watson, couldn’t resist a dig at his rival when the two finally, as it were, faced off. In January, 2011, Jennings and another former champion, Brad Rutter, played a two-game match against the computer, which was filmed in a single day. Heading into the final “Final Jeopardy!,” the humans were so far behind that, for all intents and purposes, they were finished. All three contestants arrived at the correct response to the clue, which featured an obscure work of geography that inspired a nineteenth-century novelist. Beneath his answer—“Who is Bram Stoker?”—Jennings added a message: “I for one welcome our new computer overlords.”
How long will it be before you, too, lose your job to a computer? This question is taken up by a number of recent books, with titles that read like variations on a theme: “The Industries of the Future,” “The Future of the Professions,” “Inventing the Future.” Although the authors of these works are employed in disparate fields—law, finance, political theory—they arrive at more or less the same conclusion. How long? Not long.
“Could another person learn to do your job by studying a detailed record of everything you’ve done in the past?” Martin Ford, a software developer, asks early on in “Rise of the Robots: Technology and the Threat of a Jobless Future” (Basic Books). “Or could someone become proficient by repeating the tasks you’ve already completed, in the way that a student might take practice tests to prepare for an exam? If so, then there’s a good chance that an algorithm may someday be able to learn to do much, or all, of your job.”
Later, Ford notes, “A computer doesn’t need to replicate the entire spectrum of your intellectual capability in order to displace you from your job; it only needs to do the specific things you are paid to do.” He cites a 2013 study by researchers at Oxford, which concluded that nearly half of all occupations in the United States are “potentially automatable,” perhaps within “a decade or two.” (“Even the work of software engineers may soon largely be computerisable,” the study observed.)
The “threat of a jobless future” is, of course, an old one, almost as old as technology. The first, rudimentary knitting machine, known as a “stocking frame,” was invented in the late sixteenth century by a clergyman named William Lee. Seeking a patent for his invention, Lee demonstrated the machine for Elizabeth I. Concerned about throwing hand-knitters out of work, she refused to grant one. In the early nineteenth century, a more sophisticated version of the stocking frame became the focus of the Luddites’ rage; in towns like Liversedge and Middleton, in northern England, textile mills were looted. Parliament responded by declaring “frame breaking” a capital offense, and the machines kept coming. Each new technology displaced a new cast of workers: first knitters, then farmers, then machinists. The world as we know it today is a product of these successive waves of displacement, and of the social and artistic movements they inspired: Romanticism, socialism, progressivism, Communism.
Meanwhile, the global economy kept growing, in large part because of the new machines. As one occupation vanished, another came into being. Employment migrated from farms and mills to factories and offices to cubicles and call centers.
Economic history suggests that this basic pattern will continue, and that the jobs eliminated by Watson and his ilk will be balanced by those created in enterprises yet to be imagined—but not without a good deal of suffering. If nearly half the occupations in the U.S. are “potentially automatable,” and if this could play out within “a decade or two,” then we are looking at economic disruption on an unparalleled scale. Picture the entire Industrial Revolution compressed into the life span of a beagle.
And that’s assuming history repeats itself. What if it doesn’t? What if the jobs of the future are also potentially automatable?
“This time is always different where technology is concerned,” Ford observes. “That, after all, is the entire point of innovation.”
Jerry Kaplan is a computer scientist and entrepreneur who teaches at Stanford. In “Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence” (Yale), he notes that most workplaces are set up to suit the way people think. In a warehouse staffed by people, like items are stored near one another—mops next to brooms next to dustpans—so their location is easy for stock clerks to remember. Computers don’t need such mnemonics; they’re programmed to know where things are. So a warehouse organized for a robotic workforce can be arranged according to entirely different principles, with mops, say, stored next to glue guns, because the two happen to be ordered together often.
“When most people think about automation, they usually have in mind only the simple replacement of labor or improving workers’ speed or productivity, not the more extensive disruption caused by process reengineering,” Kaplan writes. Process reëngineering means that, no matter how much the warehouse business expands, it’s not going to hire more humans, because they’ll just get in the way. It’s worth noting that in 2012 Amazon acquired a robotics company, called Kiva, for three-quarters of a billion dollars. The company’s squat orange bots look like microwave ovens with a grudge. They zip around on the ground, retrieving whole racks’ worth of merchandise. Amazon now deploys at least thirty thousand of them in its fulfillment centers. Speaking of the next wave of automation, Amazon’s chairman, Jeff Bezos, said recently, “It’s probably hard to overstate how big of an impact it’s going to have on society over the next twenty years.”
Not long ago, a team of researchers at Berkeley set out to design a robot that could fold towels. The machine they came up with looked a lot like Rosie, the robot maid on “The Jetsons,” minus the starched white cap. It had two cameras mounted on its “head” and two more between its arms. Each arm could rotate up and down and also sideways, and was equipped with a pincer-like “gripper” that could similarly rotate. The robot was supposed to turn a mess of towels into a neat stack. It quickly learned how to grasp the towels but had a much harder time locating the corners. When the researchers tested the robot on a pile of assorted towels, the results were, from a practical standpoint, disastrous. It took the robot an average of twenty-four and a half minutes to fold each towel, or ten hours to produce a stack of twenty-five.
Even as robots grow cleverer, some tasks continue to bewilder them. “At present, machines are not very good at walking up stairs, picking up a paper clip from the floor, or reading the emotional cues of a frustrated customer” is how the M.I.T. researchers Erik Brynjolfsson and Andrew McAfee put it, in “The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies” (Norton). Because we see the world through human eyes and feel it with human hands, robotic frustrations are hard for us to understand. But doing so is worth the effort, Brynjolfsson and McAfee contend, because machines and their foibles explain a lot about our current economic situation.
Imagine a matrix with two axes, manual versus cognitive and routine versus nonroutine. Jobs can then be arranged into four boxes: manual routine, manual nonroutine, and so on. (Two of Brynjolfsson and McAfee’s colleagues at M.I.T., Daron Acemoglu and David Autor, performed a formal version of this analysis in 2010.) Jobs on an assembly line fall into the manual-routine box, jobs in home health care into the manual-nonroutine box. Keeping track of inventory is in the cognitive-routine box; dreaming up an ad campaign is cognitive nonroutine.
The highest-paid jobs are clustered in the last box; managing a hedge fund, litigating a bankruptcy, and producing a TV show are all cognitive and nonroutine. Manual, nonroutine jobs, meanwhile, tend to be among the lowest paid—emptying bedpans, bussing tables, cleaning hotel rooms (and folding towels). Routine jobs on the factory floor or in payroll or accounting departments tend to fall in between. And it’s these middle-class jobs that robots have the easiest time laying their grippers on.
During the recent Presidential campaign, much was said—most of it critical—about trade deals like the North American Free Trade Agreement and the Trans-Pacific Partnership. The argument, made by both Bernie Sanders and Donald Trump, was that these deals have shafted middle-class workers by encouraging companies to move jobs to countries like China and Mexico, where wages are lower. Trump has vowed to renegotiate NAFTA and to withdraw from the T.P.P., and has threatened to slap tariffs on goods manufactured by American companies overseas. “Under a Trump Presidency, the American worker will finally have a President who will protect them and fight for them,” he has declared.
According to Brynjolfsson and McAfee, such talk misses the point: trying to save jobs by tearing up trade deals is like applying leeches to a head wound. Industries in China are being automated just as fast as, if not faster than, those in the U.S. Foxconn, the world’s largest contract-electronics company, which has become famous for its city-size factories and grim working conditions, plans to automate a third of its positions out of existence by 2020. The South China Morning Post recently reported that, thanks to a significant investment in robots, the company has already succeeded in reducing the workforce at its plant in Kunshan, near Shanghai, from a hundred and ten thousand people to fifty thousand. “More companies are likely to follow suit,” a Kunshan official told the newspaper.
“If you look at the types of tasks that have been offshored in the past twenty years, you see that they tend to be relatively routine,” Brynjolfsson and McAfee write. “These are precisely the tasks that are easiest to automate.” Off-shoring jobs, they argue, is often just a “way station” on the road to eliminating them entirely.
In “Rise of the Robots,” Ford takes this argument one step further. He notes that a “significant ‘reshoring’ trend” is now under way. Reshoring reduces transportation costs and cuts down on the time required to bring new designs to market. But it doesn’t do much for employment, because the operations that are moving back to the U.S. are largely automated. This is the major reason that there is a reshoring trend; salaries are no longer an issue once you get rid of the salaried. Ford cites the example of a factory in Gaffney, South Carolina, that produces 2.5 million pounds of cotton yarn a week with fewer than a hundred and fifty workers. A story about the Gaffney factory in the Times ran under the headline “U.S. Textile Plants Return, With Floors Largely Empty of People.”
As recently as twenty years ago, Google didn’t exist, and as recently as thirty years ago it couldn’t have existed, since the Web didn’t exist. At the close of the third quarter of 2016, Google was valued at almost five hundred and fifty billion dollars and ranked as the world’s second-largest publicly traded company, by market capitalization. (The first was Apple.)
Google offers a vivid illustration of how new technologies create new opportunities. Two computer-science students at Stanford go looking for a research project, and the result, within two decades, is worth more than the G.D.P. of a country like Norway or Austria. But Google also illustrates how, in the age of automation, new wealth can be created without creating new jobs. Google employs about sixty thousand workers. General Motors, which has a tenth of the market capitalization, employs two hundred and fifteen thousand people. And this is G.M. post-Watson. In the late nineteen-seventies, the carmaker’s workforce numbered more than eight hundred thousand.
How much technology has contributed to the widening income gap in the U.S. is a matter of debate; some economists treat it as just one factor, others treat it as the determining factor. In either case, the trend line is ominous. Facebook is worth two hundred and seventy billion dollars and employs just thirteen thousand people. In 2014, Facebook acquired WhatsApp for twenty-two billion dollars. At that point, the messaging firm had a grand total of fifty-five employees. When a twenty-two-billion-dollar company can fit its entire workforce into a Greyhound bus, the concept of surplus labor would seem to have run its course.
Ford worries that we are headed toward an era of “techno-feudalism.” He imagines a plutocracy shut away “in gated communities or in elite cities, perhaps guarded by autonomous military robots and drones.” Under the old feudalism, the peasants were exploited; under the new arrangement, they’ll merely be superfluous. The best we can hope for, he suggests, is a collective form of semi-retirement. He recommends a guaranteed basic income for all, to be paid for with new taxes, levied, at least in part, on the new gazillionaires.
To one degree or another, just about everyone writing on the topic shares this view. Jerry Kaplan proposes that the federal government create a 401(k)-like account for every ten-year-old in the U.S. Those who ultimately do find jobs could contribute some of their earnings to the accounts; those who don’t could perform volunteer work in return for government contributions. (What the volunteers would live off is a little unclear; Kaplan implies that they might be able to get by on their dividends.) Brynjolfsson and McAfee prefer the idea of a negative income tax; this would provide the unemployed with a minimal living and the underemployed with additional cash.
But, if it’s unrealistic to suppose that smart machines can be stopped, it’s probably just as unrealistic to imagine that smart policies will follow. Which brings us back to Trump. The other day, during his “victory lap” through the Midwest, the President-elect vowed to “usher in a new Industrial Revolution,” apparently unaware that such a revolution is already under way, and that this is precisely the problem. The pain of dislocation he spoke to during the campaign is genuine; the solutions he offers are not. How this will all end, no one can say with confidence, except, perhaps, for Watson. ♦