

If history had gone Marvin Minsky’s way, he might have spent his last days in the care of medical robots — competent, sympathetic, and infinitely smart.
Minsky, the renowned computer scientist who passed away on Sunday, predicted in 1967 that within a generation, say by the year 2000, he and his fellow computer scientists would have taught machines to think, just like people.
That hasn’t quite worked out. Computers still can’t think, and nearly every researcher says true artificial intelligence (AI) is still decades off. But thanks to the efforts of Minsky, a co-founder of the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology, the forerunner of today’s Computer Science and Artificial Intelligence Laboratory, engineers have learned to build machines that pretend to think. And sometimes a make-believe mind is all the mind you need.
In 1951, as a graduate student at Princeton University, Minsky assembled a crude computer that simulated the behavior of rats trapped in a maze. Other early computers could only follow the exact commands of their human programmers. But the virtual rats in Minsky’s machine could find the exit themselves, becoming a little smarter with every attempt until they got it right.
Minsky had built a neural network computer, a kind of machine “where you don’t have to program in how to be smart,” said one of his MIT colleagues, Max Tegmark. “Rather, they learn like a child.”
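Minsky’s rat machine was wired together out of hardware, not written in code, but the trial-and-error idea behind it can be sketched in a few lines of a modern language. Here is a minimal illustration in Python; the maze layout, the reward rule, and every number below are invented for this example rather than taken from Minsky’s design.

```python
# A toy "learn the maze by trial and error" program, in the spirit of the
# rat machine described above. Everything here (the maze layout, the
# reward rule, the numbers) is invented for illustration.
import random

# 0 = open cell, 1 = wall. The virtual rat starts at (0, 0); the exit is (3, 3).
MAZE = [
    [0, 0, 1, 0],
    [1, 0, 1, 0],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]
START, EXIT = (0, 0), (3, 3)
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

# One adjustable "connection strength" per (cell, move) pair, all equal at first.
weights = {(r, c, m): 1.0 for r in range(4) for c in range(4) for m in range(4)}

def run_trial():
    """Wander from START to EXIT, favoring moves that have been reinforced."""
    pos, path = START, []
    while pos != EXIT:
        r, c = pos
        options = []  # legal moves out of this cell
        for m, (dr, dc) in enumerate(MOVES):
            nr, nc = r + dr, c + dc
            if 0 <= nr < 4 and 0 <= nc < 4 and MAZE[nr][nc] == 0:
                options.append((m, (nr, nc)))
        # Pick a move at random, weighted by how often it has paid off before.
        idx = random.choices(
            range(len(options)),
            weights=[weights[(r, c, m)] for m, _ in options],
        )[0]
        m, pos = options[idx]
        path.append((r, c, m))
    return path

def reinforce(path):
    """Strengthen every move on a successful run: the 'getting smarter' step."""
    for step in path:
        weights[step] += 0.5

random.seed(0)
for trial in range(1, 201):
    path = run_trial()
    reinforce(path)
    if trial in (1, 50, 200):
        print(f"trial {trial:3d}: found the exit in {len(path)} moves")
```

Run it and the trials tend to get shorter: nothing in the program spells out the route to the exit; the virtual rat simply keeps doing what worked on earlier attempts.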
Today, neural networks are everywhere. Google Photos, for instance, uses them to study billions of stored images, so it can recognize specific objects. When I log onto my personal photo collection and type in the word “violin,” up come images from a rehearsal of the Boston Symphony Orchestra.
Neural networks also enable the real-time language translation feature offered by Skype, Microsoft Corp.’s Internet phone service. When somebody in Nigeria tries to use your credit card number, it’s a neural network that figures out something’s wrong and blocks the sale. And neural networks are why your smartphone recognizes your voice and answers your questions.
But before you get too impressed, Harvard University computer scientist Stuart Shieber suggests a little experiment. Fire up Apple’s Siri or the Google Now feature of Android and ask it how many trunks an elephant has. Or which is farther away: Ogden, Utah, or the moon? Easy questions for me, you, or most fourth-graders. But your smartphone hasn’t got a clue. It will lamely offer you some Wikipedia articles that might answer the questions, but that’s the best it can do. In short, computer scientists can make computers learn but are still clueless about how to make them think.
This is mainly because we don’t know what thought is, or how humans do it.
“Minsky said AI is going to be inspired by what we know and understand about the human brain,” said David Gelernter, professor of computer science at Yale University. But Minsky thought we’d have the brain figured out by now.
For Stanley Kubrick’s 1968 film “2001: A Space Odyssey,” Minsky helped dream up HAL 9000, a mass-murdering supercomputer. All these years later, a machine such as IBM Corp.’s Watson can outwit the smartest humans in “Jeopardy!” but is not capable of thinking about killing someone.
Minsky and his followers “understood reason and rationality, but they did not understand emotion,” said Gelernter. We think differently when we’re bored or angry or sad. Computers can’t emulate these states of mind.
Gelernter stopped writing software 20 years ago to concentrate on brain science, the subject of his upcoming book, “The Tides of Mind.” The book grew out of his efforts to build a computer capable of daydreaming. “Only that kind of program is capable of creativity,” he said. “A mind without creativity, a mind without emotion, doesn’t deserve to be called a mind.”
Some of these soulless smart machines are getting scary. For instance, the Israelis make a drone aircraft that can be programmed to loiter over a battlefield and automatically bomb enemy radars. The only human involved is the poor guy on the receiving end.
Last July, thousands of scientists and engineers signed a petition urging a global ban on such artificially intelligent weapons. One signer, electric car entrepreneur Elon Musk, has also donated $10 million for research on AI systems that will play nicely with humans. None of this “Terminator 2” stuff; Musk wants smart machines to be more like C-3PO from “Star Wars”: amusing, deferential, and absolutely incapable of impersonating a deity.
But for now, the dangers posed by thinking machines are as theoretical as the machines themselves. Meanwhile, simpler AI systems inhabit the world’s computer networks, making millions of us safer, wealthier, perhaps even a little happier. Marvin Minsky had hoped to achieve a great deal more. But given the remarkable smarts of today’s non-thinking machines, it’s hard to be disappointed.
Hiawatha Bray can be reached at hiawatha.bray@globe.com. Follow him on Twitter @GlobeTechLab.