IN DEFENSE OF
ROBOTS
Thou com’st in such a questionable shape
That I will speak to thee …
WILLIAM SHAKESPEARE,
Hamlet, Act I, Scene 4
THE WORD “ROBOT,” first introduced by the Czech writer Karel Čapek, is derived from the Slavic root for “worker.” But it signifies a machine rather than a human worker. Robots, especially robots in space, have often received derogatory notices in the press. We read that a human being was necessary to make the terminal landing adjustments on Apollo 11, without which the first manned lunar landing would have ended in disaster; that a mobile robot on the Martian surface could never be as clever as astronauts in selecting samples to be returned to Earth-bound geologists; and that machines could never have repaired, as men did, the Skylab sunshade, so vital for the continuance of the Skylab mission.
But all these comparisons turn out, naturally enough, to have been written by humans. I wonder whether a small self-congratulatory element, a whiff of human chauvinism, has not crept into these judgments. Just as whites can sometimes detect racism and men can occasionally discern sexism, I wonder whether we cannot here glimpse some comparable affliction of the human spirit—a disease that as yet has no name. The word “anthropocentrism” does not mean quite the same thing. The word “humanism” has been pre-empted by other and more benign activities of our kind. From the analogy with sexism and racism I suppose the name for this malady is “speciesism”—the prejudice that there are no beings so fine, so capable, so reliable as human beings.
This is a prejudice because it is, at the very least, a prejudgment, a conclusion drawn before all the facts are in. Such comparisons of men and machines in space are comparisons of smart men and dumb machines. We have not asked what sorts of machines could have been built for the $30-or-so billion that the Apollo and Skylab missions cost.
Each human being is a superbly constructed, astonishingly compact, self-ambulatory computer—capable on occasion of independent decision making and real control of his or her environment. And, as the old joke goes, these computers can be constructed by unskilled labor. But there are serious limitations to employing human beings in certain environments. Without a great deal of protection, human beings would be inconvenienced on the ocean floor, the surface of Venus, the deep interior of Jupiter, or even on long space missions. Perhaps the only interesting result of Skylab that could not have been obtained by machines is that human beings in space for a period of months undergo a serious loss of bone calcium and phosphorus—which seems to imply that human beings may be incapacitated under 0 g for missions of six to nine months or longer. But the minimum interplanetary voyages have characteristic times of a year or two. Because we value human beings highly, we are reluctant to send them on very risky missions. If we do send human beings to exotic environments, we must also send along their food, their air, their water, amenities for entertainment and waste recycling, and companions. By comparison, machines require no elaborate life-support systems, no entertainment, no companionship, and we do not yet feel any strong ethical prohibitions against sending machines on one-way, or suicide, missions.
Certainly, for simple missions, machines have proved themselves many times over. Unmanned vehicles have performed the first photography of the whole Earth and of the far side of the Moon; the first landings on the Moon, Mars and Venus; and the first thorough orbital reconnaissance of another planet, in the Mariner 9 and Viking missions to Mars. Here on Earth it is increasingly common for high-technology manufacturing—for example, chemical and pharmaceutical plants—to be performed largely or entirely under computer control. In all these activities machines are able, to some extent, to sense errors, to correct mistakes, to alert human controllers some great distance away about perceived problems.
The powerful abilities of computing machines to do arithmetic—hundreds of millions of times faster than unaided human beings—are legendary. But what about really difficult matters? Can machines in any sense think through a new problem? Can they make decisions of the branched-contingency tree variety which we think of as characteristically human? (That is, I ask Question 1; if the answer is A, I ask Question 2; but if the answer is B, I ask Question 3; and so on.) Some decades ago the English mathematician A. M. Turing described what would be necessary for him to believe in machine intelligence. The condition was simply that he could be in teletype communication with a machine and be unable to tell that it was not a human being. Turing imagined a conversation between a man and a machine of the following quality:
INTERROGATOR: In the first line of your sonnet which reads “Shall I compare thee to a Summer’s day,” would not “a Spring day” do as well or better?
WITNESS: It wouldn’t scan.
INTERROGATOR: How about “a Winter’s day”? That would scan all right.
WITNESS: Yes, but nobody wants to be compared to a Winter’s day.
INTERROGATOR: Would you say Mr. Pickwick reminded you of Christmas?
WITNESS: In a way.
INTERROGATOR: Yet Christmas is a Winter’s day, and I do not think Mr. Pickwick would mind the comparison.
WITNESS: I don’t think you’re serious. By a Winter’s day one means a typical Winter’s day, rather than a special one like Christmas.
No device of this sophistication has yet been built, although I am not sure how many humans would pass Turing’s human test. But the amount of effort and money put into artificial intelligence has been quite limited, and there are only about a half-dozen major centers of such activity in the world. One of the more striking results obtained in a very limited universe of discourse—that of children’s blocks—has come from the work of Terry Winograd, then at the Massachusetts Institute of Technology. Here is a dialogue between man and machine, the machine having been programmed by Winograd:
Q: What does the box contain?
A: The blue pyramid and the blue box.
Q: How many blocks are not in the box?
A: Four of them.
Q: Is at least one of them narrower than the one which I told you to pick up?
A: Yes, the red cube.
Q: Is there a steeple?
A: Sorry, I don’t know the word “steeple.”
Q: A steeple is a stack which contains two green cubes and a pyramid, for example.
A: I understand.
Q: Are there any steeples now?
A: No.
Q: Build one.
The machine does. It reaches out its mechanical hand, moves the cubes and builds the structure that has just been described.
Q: When did you pick up the pyramid?
A: While I was stacking up the red cube, the large red block and the large green cube.
Q: Why?
A: To get rid of it.
Q: Why did you clear off that cube?
A: To put it on the large green cube.
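The bookkeeping behind such a dialogue can be suggested in a few lines. What follows is a deliberately tiny sketch of a blocks-world model that can answer containment questions; the object names and data structure are invented for illustration and are not Winograd’s actual program:

```python
# A toy blocks-world model in the spirit of Winograd's program.
# Every object records what it rests on; queries walk this table.
world = {
    "red cube":        {"color": "red",   "shape": "cube",    "on": "table"},
    "large red block": {"color": "red",   "shape": "block",   "on": "red cube"},
    "blue pyramid":    {"color": "blue",  "shape": "pyramid", "on": "box"},
    "green cube":      {"color": "green", "shape": "cube",    "on": "table"},
}

def objects_in(container):
    """Answer 'What does the box contain?'-style questions."""
    return [name for name, props in world.items() if props["on"] == container]

def count_not_in(container):
    """Answer 'How many blocks are not in the box?'-style questions."""
    return sum(1 for props in world.values() if props["on"] != container)
```

With this toy world, `objects_in("box")` lists only the blue pyramid, and `count_not_in("box")` counts the three objects resting elsewhere. The real program, of course, also parsed English, planned hand motions, and remembered why it made them.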
As another example, a machine psychiatrist has been developed by Joseph Weizenbaum, also at MIT. This is a much simpler program than Winograd’s, since it involves nondirective psychotherapy, which, generally speaking, requires extremely simple-minded computer programs. Here is a striking dialogue between a human being and a computer psychotherapist. (There is certainly a selection effect here. Many other such machine/human psychiatric dialogues are not nearly so striking. But I suppose most human/human psychiatric dialogues are not very striking either.) In this interaction, in order to spell out clearly the respective roles of therapist and patient, the computer therapist types upper case while the patient types lower case:
men are all alike
IN WHAT WAY
they’re always bugging us about something or other
CAN YOU THINK OF A SPECIFIC EXAMPLE
well, my boy friend made me come here
YOUR BOY FRIEND MADE YOU COME HERE
he noticed that i’m depressed much of the time
I AM SORRY TO HEAR YOU ARE DEPRESSED
he’s right. i am unhappy
DO YOU THINK COMING HERE WILL HELP YOU NOT TO BE UNHAPPY
i need some help, that much is clear
WHAT WOULD IT MEAN TO YOU IF YOU GOT SOME HELP
perhaps i could learn to get along with my mother
TELL ME MORE ABOUT YOUR FAMILY
my mother takes care of me
WHO ELSE IN YOUR FAMILY TAKES CARE OF YOU
my father
YOUR FATHER
you are like my father in some ways
WHAT RESEMBLANCE DO YOU SEE
you are not very aggressive, but i don’t think you want me to notice that
WHAT MAKES YOU THINK I AM NOT VERY AGGRESSIVE
you don’t argue with me
WHY DO YOU THINK I DON’T ARGUE WITH YOU
you are afraid of me
DOES IT PLEASE YOU TO BELIEVE I AM AFRAID OF YOU
it pleases my father to think i am afraid of him
WHAT ELSE COMES TO MIND WHEN YOU THINK OF YOUR FATHER
bullies
DOES THAT HAVE ANYTHING TO DO WITH THE FACT THAT YOUR BOY FRIEND MADE YOU COME HERE
This astonishing—one is very tempted to say “perceptive”—response from the computer is, of course, preprogrammed. But, then, so are the responses of human psychotherapists. In a time when more and more people in our society seem to be in need of psychiatric counseling, and when time-sharing of computers is widespread, I can even imagine the development of a network of computer psychotherapeutic terminals, something like arrays of large telephone booths, in which, for a few dollars a session, we are able to talk to an attentive, tested and largely nondirective psychotherapist. Ensuring the confidentiality of the psychiatric dialogue is one of several important steps still to be worked out.
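Just how preprogrammed such responses are is worth seeing at small scale. The sketch below is a deliberately crude imitation of the keyword-and-template technique: a few rules are tried in order, and the first matching one reflects the patient’s own words back as a question. The rules are invented for illustration and are far simpler than Weizenbaum’s actual program, which among other refinements also swaps pronouns such as “me” and “you”:

```python
import re

# Keyword-to-template rules, tried in order; the first match wins.
# A tiny, invented subset of the ELIZA-style technique.
RULES = [
    (r"\bi am (.*)",     "I AM SORRY TO HEAR YOU ARE {0}"),
    (r"\byou are (.*)",  "WHAT MAKES YOU THINK I AM {0}"),
    (r"\bmy (\w+) (.*)", "WHO ELSE IN YOUR FAMILY {1}"),
    (r"\balways\b",      "CAN YOU THINK OF A SPECIFIC EXAMPLE"),
]

def respond(utterance):
    """Reflect the patient's words back as a nondirective question."""
    text = utterance.lower().rstrip(".!?")
    for pattern, template in RULES:
        m = re.search(pattern, text)
        if m:
            return template.format(*(g.upper() for g in m.groups()))
    return "TELL ME MORE"
```

Fed “i am depressed,” the sketch answers “I AM SORRY TO HEAR YOU ARE DEPRESSED,” exactly the flavor of the dialogue above; everything “perceptive” about it lives in a handful of templates.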
ANOTHER SIGN of the intellectual accomplishments of machines is in games. Even exceptionally simple computers—those that can be wired by a bright ten-year-old—can be programmed to play perfect tic-tac-toe. Some computers can play world-class checkers. Chess is of course a much more complicated game than tic-tac-toe or checkers. Here programming a machine to win is more difficult, and novel strategies have been used, including several rather successful attempts to have a computer learn from its own experience in playing previous chess games. Computers can learn empirically, for example, the rule that it is better in the beginning game to control the center of the chessboard than the periphery. The ten best chess players in the world still have nothing to fear from any present computer. But the situation is changing. Recently a computer did well enough to enter the Minnesota State Chess Open, perhaps the first time a nonhuman has entered a major sporting event on the planet Earth (and I cannot help but wonder whether robot golfers and designated hitters may be attempted sometime in the next decade, to say nothing of dolphins in free-style competition). The computer did not win the Open, but chess-playing computers are improving extremely rapidly.
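Tic-tac-toe is within reach of the simplest machines because its entire game tree is small enough to search exhaustively: the machine simply tries every continuation and assumes both sides play their best. A minimal sketch of that brute-force search, under no assumptions beyond the rules of the game:

```python
# Exhaustive minimax for tic-tac-toe. The board is a 9-character string;
# the full game tree is small enough to search completely.
WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WINS:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Value of the position for X under perfect play: +1, 0, or -1."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if " " not in board:
        return 0                       # full board, no winner: a draw
    values = [minimax(board[:i] + player + board[i+1:],
                      "O" if player == "X" else "X")
              for i, cell in enumerate(board) if cell == " "]
    return max(values) if player == "X" else min(values)
```

Searching from the empty board, `minimax(" " * 9, "X")` returns 0: with both sides perfect, tic-tac-toe is always a draw, which is precisely why a trivial machine can play it perfectly.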
I have heard machines demeaned (often with a just audible sigh of relief) for the fact that chess is an area where human beings are still superior. This reminds me very much of the old joke in which a stranger remarks with wonder on the accomplishments of a checker-playing dog. The dog’s owner replies, “Oh, it’s not all that remarkable. He loses two games out of three.” A machine that plays chess in the middle range of human expertise is a very capable machine; even if there are thousands of better human chess players, there are millions who are worse. To play chess requires strategy, foresight, analytical powers, and the ability to cross-correlate large numbers of variables and to learn from experience. These are excellent qualities in those whose job it is to discover and explore, as well as those who watch the baby and walk the dog.
With this as a more or less representative set of examples of the state of development of machine intelligence, I think it is clear that a major effort over the next decade could produce much more sophisticated examples. This is also the opinion of most of the workers in machine intelligence.
In thinking about this next generation of machine intelligence, it is important to distinguish between self-controlled and remotely controlled robots. A self-controlled robot has its intelligence within it; a remotely controlled robot has its intelligence at some other place, and its successful operation depends upon close communication between its central computer and itself. There are, of course, intermediate cases where the machine may be partly self-activated and partly remotely controlled. It is this mix of remote and in situ control that seems to offer the highest efficiency for the near future.
For example, we can imagine a machine designed for the mining of the ocean floor. There are enormous quantities of manganese nodules littering the abyssal depths. They were once thought to have been produced by meteorite infall on Earth, but are now believed to be formed occasionally in vast manganese fountains produced by the internal tectonic activity of the Earth. Many other scarce and industrially valuable minerals are likewise to be found on the deep ocean bottom. We have the capability today to design devices that systematically swim over or crawl upon the ocean floor; that are able to perform spectrometric and other chemical examinations of the surface material; that can automatically radio back to ship or land all findings; and that can mark the locales of especially valuable deposits—for example, by low-frequency radio-homing devices. The radio beacon will then direct great mining machines to the appropriate locales. The present state of the art in deep-sea submersibles and in spacecraft environmental sensors is clearly compatible with the development of such devices. Similar remarks can be made for off-shore oil drilling, for coal and other subterranean mineral mining, and so on. The likely economic returns from such devices would pay not only for their development, but for the entire space program many times over.
When the machines are faced with particularly difficult situations, they can be programmed to recognize that the situations are beyond their abilities and to inquire of human operators—working in safe and pleasant environments—what to do next. The examples just given are of devices that are largely self-controlled. The reverse also is possible, and a great deal of very preliminary work along these lines has been performed in the remote handling of highly radioactive materials in laboratories of the U.S. Department of Energy. Here I imagine a human being who is connected by radio link with a mobile machine. The operator is in Manila, say; the machine in the Mindanao Deep. The operator is attached to an array of electronic relays, which transmits and amplifies his movements to the machine and which can, conversely, carry what the machine finds back to his senses. So when the operator turns his head to the left, the television cameras on the machine turn left, and the operator sees on a great hemispherical television screen around him the scene the machine’s searchlights and cameras have revealed. When the operator in Manila takes a few strides forward in his wired suit, the machine in the abyssal depths ambles a few feet forward. When the operator reaches out his hand, the mechanical arm of the machine likewise extends itself; and the precision of the man/machine interaction is such that precise manipulation of material at the ocean bottom by the machine’s fingers is possible. With such devices, human beings can enter environments otherwise closed to them forever.
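The master/slave loop described above reduces, in essence, to translating each measured operator motion into a command and applying it to the remote machine’s state. The toy relay below conveys the idea; all axis names and scale factors are invented for illustration:

```python
# A toy teleoperation relay: each operator motion is scaled into a
# command for the remote machine, whose state is updated in turn.
MOTION_SCALE = {"head_yaw_deg": 1.0, "stride_m": 1.0, "hand_reach_m": 1.0}

def to_command(operator_motion):
    """Translate the operator's measured motion into a machine command."""
    return {axis: value * MOTION_SCALE[axis]
            for axis, value in operator_motion.items()}

def machine_step(state, command):
    """Apply a command to the remote machine's state (pure, for clarity)."""
    new_state = dict(state)
    for axis, delta in command.items():
        new_state[axis] = new_state.get(axis, 0.0) + delta
    return new_state
```

A few operator strides thus become a few machine strides on the ocean floor; the hard engineering lies not in this mapping but in the sensing, the radio link, and the precision of the mechanical hands.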
In the exploration of Mars, unmanned vehicles have already soft-landed, and only a little further in the future they will roam about the surface of the Red Planet, as some now do on the Moon. We are not ready for a manned mission to Mars. Some of us are concerned about such missions not only because of the dangers of carrying terrestrial microbes to Mars, and Martian microbes, if they exist, back to Earth, but also because of their enormous expense. The Viking landers deposited on Mars in the summer of 1976 have a very interesting array of sensors and scientific instruments, which are the extension of human senses to an alien environment.
The obvious post-Viking device for Martian exploration, one which takes advantage of the Viking technology, is a Viking Rover in which the equivalent of an entire Viking spacecraft, but with considerably improved science, is put on wheels or tractor treads and permitted to rove slowly over the Martian landscape. But now we come to a new problem, one that is never encountered in machine operation on the Earth’s surface. Although Mars is the second closest planet, it is so far from the Earth that the light travel time becomes significant. At a typical relative position of Mars and the Earth, the planet is 20 light-minutes away. Thus, if the spacecraft were confronted with a steep incline, it might send a message of inquiry back to Earth. Forty minutes later the response would arrive saying something like “For heaven’s sake, stand dead still.” But by then, of course, an unsophisticated machine would have tumbled into the gully. Consequently, any Martian Rover requires slope and roughness sensors. Fortunately, these are readily available and are even seen in some children’s toys. When confronted with a precipitous slope or large boulder, the spacecraft would either stop until receiving instructions from the Earth in response to its query (and televised picture of the terrain), or back off and start in another and safer direction.
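The arithmetic of the delay is simple, and it fixes how far a rover can travel with no possibility of supervision. The figures below use the “typical” 20-light-minute separation cited above; the rover’s creep speed is an assumed, deliberately cautious number for illustration:

```python
# Round-trip signal delay to Mars and the unsupervised distance it implies.
LIGHT_MINUTE_KM = 299_792.458 * 60   # km light travels in one minute

one_way_min = 20                     # typical Earth-Mars light travel time
round_trip_min = 2 * one_way_min     # query plus response: 40 minutes

distance_km = one_way_min * LIGHT_MINUTE_KM

rover_speed_m_per_h = 100            # an assumed, cautious creep speed
unsupervised_m = rover_speed_m_per_h * round_trip_min / 60
```

Even creeping at 100 meters per hour, the rover covers some 67 meters before any answer can arrive from Earth, which is ample room to find a gully; hence the onboard slope and roughness sensors.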
Much more elaborate contingency decision networks can be built into the onboard computers of spacecraft of the 1980s. For more remote objectives, to be explored further in the future, we can imagine human controllers in orbit around the target planet, or on one of its moons. In the exploration of Jupiter, for example, I can imagine the operators on a small moon outside the fierce Jovian radiation belts, controlling with only a few seconds’ delay the responses of a spacecraft floating in the dense Jovian clouds.
Human beings on Earth can also be in such an interaction loop, if they are willing to spend some time on the enterprise. If every decision in Martian exploration must be fed through a human controller on Earth, the Rover can traverse only a few feet an hour. But the lifetimes of such Rovers are so long that a few feet an hour represents a perfectly respectable rate of progress. However, as we imagine expeditions into the farthest reaches of the solar system—and ultimately to the stars—it is clear that self-controlled machine intelligence will assume heavier burdens of responsibility.
In the development of such machines we find a kind of convergent evolution. Viking is, in a curious sense, like some great outsized, clumsily constructed insect. It is not yet ambulatory, and it is certainly incapable of self-reproduction. But it has an exoskeleton, it has a wide range of insectlike sensory organs, and it is about as intelligent as a dragonfly. But Viking has an advantage that insects do not: it can, on occasion, by inquiring of its controllers on Earth, assume the intelligence of a human being—the controllers are able to reprogram the Viking computer on the basis of decisions they make.
As the field of machine intelligence advances and as increasingly distant objects in the solar system become accessible to exploration, we will see the development of increasingly sophisticated onboard computers, slowly climbing the phylogenetic tree from insect intelligence to crocodile intelligence to squirrel intelligence and—in the not very remote future, I think—to dog intelligence. Any flight to the outer solar system must have a computer capable of determining whether it is working properly. There is no possibility of sending to the Earth for a repairman. The machine must be able to sense when it is sick and skillfully doctor its own illnesses. A computer is needed that is able either to fix or replace failed computer, sensor or structural components. Such a computer, which has been called STAR (self-testing and repairing computer), is on the threshold of development. It employs redundant components, as biology does—we have two lungs and two kidneys partly because each is protection against failure of the other. But a computer can be much more redundant than a human being, who has, for example, but one head and one heart.
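The redundancy principle behind such self-testing can be illustrated with majority voting: run every computation on three identical units and accept whatever answer a majority reports, so that a single failed unit is masked. This is a minimal sketch of the voting idea only; the STAR design itself was considerably more elaborate:

```python
from collections import Counter

def vote(results):
    """Majority vote over redundant units; masks a single faulty unit."""
    value, count = Counter(results).most_common(1)[0]
    if count > len(results) // 2:
        return value
    raise RuntimeError("no majority: more than one unit has failed")

def triplicated(f, inputs, faulty_unit=None):
    """Run f on three redundant units, optionally corrupting one of them."""
    results = []
    for unit in range(3):
        r = f(inputs)
        if unit == faulty_unit:
            r = r + 1          # simulate a failed unit's wrong answer
        results.append(r)
    return vote(results)
```

With one unit deliberately corrupted, `triplicated(sum, [1, 2, 3], faulty_unit=0)` still returns 6: the two healthy units outvote the sick one, just as one healthy kidney carries on for its failed partner.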
Because of the weight premium on deep space exploratory ventures, there will be strong pressures for continued miniaturization of intelligent machines. It is clear that remarkable miniaturization has already occurred: vacuum tubes have been replaced by transistors, wired circuits by printed circuit boards, and entire computer systems by silicon-chip microcircuitry. Today a circuit that used to occupy much of a 1930 radio set can be printed on the tip of a pin. If intelligent machines for terrestrial mining and space exploratory applications are pursued, the time cannot be far off when household and other domestic robots will become commercially feasible. Unlike the classical anthropoid robots of science fiction, there is no reason for such machines to look any more human than a vacuum cleaner does. They will be specialized for their functions. But there are many common tasks, ranging from bartending to floor washing, that involve a very limited array of intellectual capabilities, albeit substantial stamina and patience. All-purpose ambulatory household robots, which perform domestic functions as well as a proper nineteenth-century English butler, are probably many decades off. But more specialized machines, each adapted to a specific household function, are probably already on the horizon.
It is possible to imagine many other civic tasks and essential functions of everyday life carried out by intelligent machines. By the early 1970s, garbage collectors in Anchorage, Alaska, and other cities won wage settlements guaranteeing them salaries of about $20,000 per annum. It is possible that the economic pressures alone may make a persuasive case for the development of automated garbage-collecting machines. For the development of domestic and civic robots to be a general civic good, the effective re-employment of those human beings displaced by the robots must, of course, be arranged; but over a human generation that should not be too difficult—particularly if there are enlightened educational reforms. Human beings enjoy learning.
We appear to be on the verge of developing a wide variety of intelligent machines capable of performing tasks too dangerous, too expensive, too onerous or too boring for human beings. The development of such machines is, in my mind, one of the few legitimate “spinoffs” of the space program. The efficient exploitation of energy in agriculture—upon which our survival as a species depends—may even be contingent on the development of such machines. The main obstacle seems to be a very human problem, the quiet feeling that comes stealthily and unbidden, and argues that there is something threatening or “inhuman” about machines performing certain tasks as well as or better than human beings; or a sense of loathing for creatures made of silicon and germanium rather than proteins and nucleic acids. But in many respects our survival as a species depends on our transcending such primitive chauvinisms. In part, our adjustment to intelligent machines is a matter of acclimatization. There are already cardiac pacemakers that can sense the beat of the human heart; only when there is the slightest hint of fibrillation does the pacemaker stimulate the heart. This is a mild but very useful sort of machine intelligence. I cannot imagine the wearer of this device resenting its intelligence. I think in a relatively short period of time there will be a very similar sort of acceptance for much more intelligent and sophisticated machines. There is nothing inhuman about an intelligent machine; it is indeed an expression of those superb intellectual capabilities that only human beings, of all the creatures on our planet, now possess.