AI’s Prediction Problem
Steve G. Hoffman
04 July, 2017
Artificial Intelligence is finding hype again. Big money has arrived from Google, Elon Musk, and the Chinese government. Global cities like Berlin, Singapore, and Toronto jockey to become development hubs for application-based machine intelligence. AlphaGo’s victories over world-class Go players make splashy headlines far beyond the pages of IEEE Transactions. Yet in the shadows of the feeding frenzy, a familiar specter haunts. Bill Gates and Stephen Hawking echo the worries of doomsayer futurists by fretting over the rise of superintelligent machines that might see humanity as an obsolete impediment to their algorithmic optimization.
There is a familiar formula to all this. AI has long struggled with a prediction problem, careening between promises of automating human drudgery and warnings of Promethean punishment for playing the gods. Humans have been imagining, and fearing, their thinking things for a very long time. Hephaestus built humans in his metal workshop with the help of golden assistants. The art and science of the early modern era are filled with brazen heads, automated musicians, and an infamous defecating duck. The term “robot” came into popular use in the midst of European industrialization thanks to Karel Čapek’s play, Rossum’s Universal Robots, which chronicled the organized rebellion of mass-produced factory slaves. Robot, not coincidentally, is derived from the Old Church Slavonic “rabota,” which means “servitude.” Overall, then, we find thinking machines in myth and artifact built to glorify gods, to explain the mystery of life, to amuse, to serve, and to punish. They were, and are, artifacts that test the limits of technical possibility but, more importantly, provide interstitial arenas wherein social and political elites work through morality, ethics, and the modalities of hierarchical domination.
Contemporary AI was launched with a gathering of mathematicians, computer engineers, and proto-cognitive scientists at the Dartmouth Summer Workshop of 1956. The workshop proposal named the field and established an expectation that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” The work that followed in the wake of this workshop institutionalized a tendency toward overconfident prediction. In 1966, Marvin Minsky, workshop alum and co-founder of the MIT AI Lab, received a summer grant to hire a first-year undergraduate student, Gerald Sussman, to solve robot vision. Sussman didn’t make the deadline. Vision turned out to be one of the most difficult challenges in AI over the next four decades. As the vision expert Berthold Horn has summarized, “You’ll notice that Sussman never worked in vision again.”
Expectations bring blessing and curse. Horn is among the now senior figures in AI who believe that predictions were, and are, a mistake for the field. He once pleaded with a colleague to stop telling reporters that robots would be cleaning their houses within five years. “You’re underestimating the time it will take,” Horn reasoned. His colleague shot back, “I don’t care. Notice that all the dates I’ve chosen were after my retirement!”
Researchers at the Future of Humanity Institute at Oxford have recently stitched together a database of over 250 AI predictions offered by experts and non-experts between 1950 and 2012. Their results inspire little confidence in the forecasting abilities of their colleagues.