



  Intruder

  ( Isaac Asimov’s Robot City: Robots and Aliens - 3 )

  Robert Thurston


Isaac Asimov's Robot City: Robots And Aliens

  Book 3

  What Is A Human Being?

  Isaac Asimov

It sounds like a simple question. Biologically, a human being is a member of the species Homo sapiens. If we agree that one particular organism (say, a male) is a human being, then any female with which he can breed is also a human being. And any males with whom any of these females can breed are also human beings. This instantly marks off billions of organisms on Earth as human beings.

  It may be that there are organisms that are too old to breed, or too young, or too imperfect in one way or another, but who resemble human beings more than they resemble any other species. They, too, are human beings.

  We thus end up with something over 5 billion human beings on Earth right now, and perhaps 60 billion who have lived on Earth since Homo sapiens evolved.

  That’s simple, isn’t it? From the biological standpoint, we are all human beings, whether we speak English, Turkish, or Japanese; whether we have pale skin or dark; red hair or black; blue eyes or brown; flat noses or beaky ones; and so on.

  That, however, is a biological definition, a sophisticated one. Now suppose that you are a member of a primitive tribe, homogeneous in appearance, language and culture, and you suddenly encounter someone who looks superficially like you but has red hair, where you’ve seen only black; fair skin where you’ve seen only dark; and, worst of all, who cannot understand “people language” but makes odd sounds, which he seems to understand, but which clearly make no sense whatever.

  Are these strangers human beings in the sense that you yourself are? I’m afraid the consensus would be that they are not. Nor is it entirely a matter of lack of sophistication. The ancient Greeks, who were certainly among the most sophisticated human beings who ever lived, divided all human beings into two groups: Greeks and barbarians.

By barbarians, they didn’t mean people who were uncivilized or bestial. They recognized that some barbarians, like the Egyptians and Babylonians and Persians, were highly cultivated. It was just that non-Greeks didn’t speak Greek; they made sounds that made no more sense (to a Greek encountering other languages for the first time) than a silly sound like “bar-bar-bar.”

  You might feel that Greeks may have made that division as a matter of convenience, but that they didn’t go so far as to think that barbarians weren’t human.

  Oh, didn’t they? Aristotle, one of the most sophisticated of all the ancient Greeks, was quite certain that barbarians were slave-material by nature, while Greeks were free men by nature. Clearly, he felt that there was something sub-human about barbarians.

  But they were ancients, however sophisticated they might have been. They had limited experience, knew only a small portion of the world. Nowadays, we have learned so much we don’t come to those foolish conclusions. We know that all human-like creatures are a single species.

  Yes? Was it so long ago that most White Americans were quite certain that African Blacks were not human in the sense that they themselves were; that the Blacks were inferior and that to enslave them and let them live on the outskirts of a White society was doing them a great favor? I wouldn’t be surprised if some Americans believe that right now.

  It was not so long ago that Germans maintained loudly that Slavs and Jews were sub-human, so that they were right to do their best to rid “true” human beings of such vermin. And I wouldn’t be in the least surprised if there were lots of people right now who harbor similar notions.

  Almost everyone thinks of other groups as “inferior,” although often they do not care to say so out loud. They tend to divide humanity into groups of which only a small part (a part which invariably includes themselves) are “true” human beings.

The Bible, of course, teaches universality (at least, in places). Thus, consider one of my own favorite passages in the New Testament, the parable of the “good Samaritan” (Luke 10:25-37). Someone tells Jesus that one of the beliefs one must have if one is to go to Heaven is “to love…thy neighbour as thyself.” Jesus says he is correct and the man asks, “And who is my neighbour?” (In other words, does he love only his friends and people he likes, or is he supposed to love all sorts of bums and rotters?)

  And here comes the parable of the good Samaritan. To put it briefly, a man needs help, and both a Priest and a Levite (professional do-gooders, who are highly esteemed by pious people) ignore the whole matter, but a Samaritan offers a great deal of help.

Now we talk so much about the “good Samaritan” because of this parable that we think of Samaritans as all good and are not surprised at the help he offers. However, to the pious Jews of Jesus’ time, Samaritans were heretics, things of evil, objects of hatred, and here we have a despised Samaritan doing good when Priests and Levites do not.

  And then Jesus asks, “Which now of these three, thinkest thou, was neighbour unto him that fell among the thieves?” And the man is forced to say, “He that shewed mercy on him.”

This is as much as to say that all good people are neighbors, even when they are the kind of beneath-contempt individuals that Samaritans are. And it follows that, since all human beings have the capacity to be good, all people are neighbors and love should extend to all.

  St. Paul says in Galatians 3:28: “There is neither Jew nor Greek; there is neither bond nor free; there is neither male nor female: for ye are all one in Christ Jesus.”

  That is a flat statement of universality.

I know that there are many pious people who know these passages and who nevertheless maintain racist views of one sort or another. Such is the desire to be part of a superior group that nothing can wipe out the tendency to picture others as inferior; to divide human beings into a) human, b) semi-human, and c) sub-human, being careful always to put yourself into the first class.

  And if we have such trouble in getting human beings to define what a human being is, imagine the problem a robot would have. How does a robot define a human being?

  In the old days, when I was first beginning to write my robot stories, John W. Campbell (my editor and mentor) challenged me on several occasions to write a story that hinged on the difficulty of defining a human being. I always backed off. I did not have to try writing such a story to know that it would be a particularly difficult one to write and that I couldn’t do it. At least, not then.

In 1976, however, I finally tackled the job and wrote “The Bicentennial Man.” It dealt essentially with a robot that became more and more human, without ever being accepted as a human being. He became physically like a human being, mentally like a human being, and yet he never crossed the line. Finally, he did, by crossing the last barrier. He made himself mortal, and as he was dying, he was finally accepted as a human being.

  It made a good story (winning both the Hugo and the Nebula) but it didn’t offer a practical way of distinguishing between robot and human being, because a robot couldn’t wait for years to see if a possible human being died and thus proved itself to be a human being.

  Suppose you are a robot and you have to decide whether something that looks like a human being is really a human being, and you have to do it reasonably quickly.

If the only robots that exist are primitive, there is no problem. If an object looks like a human being but is made of metal, it is a robot. If it talks in a mechanical kind of voice, moves with awkward, jerky motions, and so on and so on, it is a robot.

But what if the robot looks, superficially, exactly like a human being (like my robot, Daneel Olivaw)? How can you tell that he’s a robot? Well, in my later robot novels, you can’t, really. Daneel Olivaw is a human being in all respects except that he’s a lot more intelligent than most human beings, a lot more ethical, a lot kinder and more decent, a lot more human. That makes for a good story, too, but it doesn’t help identify a robot in any practical sense. You can’t follow a robot around to see if it is better than a human being, for you then have to ask yourself: is he (she) a robot or just an unusually good human being?

  There’s this-

A robot is bound by the Three Laws of Robotics, and a human being is not. That means, for instance, that if you are a human being and you punch someone you think may be a robot and he punches you back, then he is not a robot. If you yourself are a robot, then if you punch him and he punches you back, he may nevertheless be a robot, since he may know that you are a robot, and the First Law does not prevent him from hitting you. (That was a key point in my early story, “Evidence.”) In that case, though, you must ask a human being to punch the suspected robot, and if he punches back he is no robot.

However, it doesn’t work the other way around. If you are a human being and you hit a suspected robot, and he doesn’t hit you back, that doesn’t mean he is a robot. He may be a human being, but a coward. He may be a human being, but an idealist who believes in turning the other cheek.

  In fact, if you are a human being and you punch a suspected robot and he punches you back, then he may still be a robot, nevertheless.

After all, the First Law says, “…A robot may not harm a human being or, through inaction, allow a human being to come to harm.” That, however, begs the question, for it assumes that a robot knows what a human being is in the first place.

Suppose a robot is manufactured to be no better than a human being. Human beings often suppose other people are inferior, and not fully human, if they simply don’t speak their language, or speak it with an odd accent. (That’s the whole point of George Bernard Shaw’s Pygmalion.) In that case, it should be simple to build a robot in whom the definition of a human being includes the speaking of a specific language with a specific accent. Any failure in that respect means that a person the robot must deal with is not a human being, and the robot can harm or even kill him without breaking the First Law.

  In fact, I have a robot in my book Robots and Empire for which a human being is defined as speaking with a Solarian accent, and my hero is in danger of death for that very reason.

  So you see it is not easy to differentiate between a robot and a human being.

We can make the matter even more difficult if we suppose a world of robots that have never seen human beings. (This would be like our early unsophisticated human beings who had never met anyone outside their own tribe.) They might still have the First Law and might still know that they must not harm a human being, but what is this human being they must not harm?

  They might well think that a human being is superior to a robot in some ways, since that would be one reason why he must not be harmed. You ought not to offer violence to someone worthier than yourself.

On the other hand, if someone were superior to you, wouldn’t it be sensible to suppose that you couldn’t harm him? If you could, wouldn’t that make him inferior to you? The fallacy there ought to be obvious. A robot is certainly superior to an unthinking rock, yet a falling rock might easily harm or even destroy a robot. Therefore the inferior can harm the superior, but in a well-run Universe it should not do so.

  In that case, a robot beginning only with the Laws of Robotics might well conclude that human beings were superior to robots.

  But then, suppose that in this world of robots, one robot is superior to all the rest. Is it possible, in that case, that this superior robot, who has never seen a human being, might conclude that he himself is a human being?

If he can persuade the other robots that this is so, then the Laws of Robotics will govern their behavior toward him, and he may well establish a despotism over them. But will it differ from a human despotism in any way? Will this robot-human still be governed and limited by the Three Laws in certain respects, or will it be totally free of them?

  In that case, if it has the appearance and mentality and behavior of a human being, and if it lacks the Three Laws, in what way is it not a human being? Has it not become a human being in actuality?

  And what happens if, then, real human beings appear on the scene? Do the Three Laws suddenly begin to function again in the robot-human, or does he persist in considering himself human? In my very first published robot story, “Reason,” come to think of it, I described a robot that considered himself to be superior to human beings and could not be argued out of it.

  So what with one thing or another, the problem of defining a human being is enormously complex, and while in my various stories I’ve dealt with different aspects of it, I am glad to leave the further consideration of that problem to Robert Thurston in this third book of the Robots and Aliens series.

  Chapter 1. Robot City Dreams

  Derec knew he was dreaming. The street he now ambled down wasn’t real. There had never been a street anywhere in Robot City like this distorted thoroughfare. Still, too much was familiar about it, and that really scared him.

The Compass Tower, now too far in the distance, had changed, too. There seemed to be lumps all over its surfaces, but that was impossible. In a city where buildings could appear and disappear overnight, the Compass Tower was the only permanent, unchangeable structure.

  It was possible this strange street was newly created, but he doubted that. It was a dream-street, plain and simple, and this had to be a dream. Anyway, where were the robots? Nobody could travel this far along a Robot City street without encountering at least a utility robot scurrying along, on its way to some regular task; or a courier robot, its claws clutching tools; or a witness robot, checking the movements of the humans. During a stroll like this, Derec should have encountered a robot every few steps.

  No, it was absolutely certain this was a dream. What he was doing was sleeping in his ship somewhere in space between the blackbody planet and Robot City. He had just come off duty after dealing with the Silversides for hours, a task that would tire a saint.

At one time, just after his father had injected chemfets into his bloodstream, he had regularly dreamed of Robot City, but it turned out that his harrowing nightmares had all been induced by a monitor that his father had implanted in his brain. The monitor had been trying to establish contact so he could be aware of the nature of the chemfets, which were tiny circuit boards that grew in much the same manner as the city itself had. Replicating in his bloodstream and programmed by his father, they were a tiny robot city in his body, one that gave him psycho-electronic control over the city’s core computer and therefore all its robots. Once he had learned this and the chemfets’ replication process had stabilized, he had had no more nightmares of a distorted Robot City.

  Until now.

  Since he was so aware he was dreaming, perhaps this was what Ariel had explained to him as a “lucid dream.” In the lucid dream state, she said, the dreamer could control the events of the dream. He wanted to control this dream, but at the moment he couldn’t think of anything particular to do.

  He looked around him. The immediate streetscape seemed composed of bits and pieces from several stages of the city’s development, a weird composite of what Derec had observed during his several stays there.

  But where were the robots?

If this was a lucid dream, maybe the reason he hadn’t seen any yet was that he hadn’t guided any into the scene. Maybe they were waiting inside the buildings to be summoned. Maybe he should do so, before he panicked. But which one could he bring onstage? How about Lucius, the robot who had created the city’s one authentic artistic masterpiece, the breathtaking tetragonal, pyramidal building-sculpture entitled “Circuit Breaker”? He’d be a good choice since, as the victim of a bizarre roboticide, he no longer existed. It certainly would be pleasant to see old Lucius again, his body so unrobotically stooped, if only to chat with him about art. There hadn’t been much art in his life lately, especially if you didn’t count the rather breathtaking spectacle of a thousand blackbodies spread across the sky. That was pretty, but it wasn’t art.

  He wondered why his thoughts were rambling so. Had the Silversides disturbed his mind’s equilibrium that much? Forget them. Forget them now. Get a normal robot into the dream. One of the most unforgettable robots he had known. Avernus, say. Let’s see his stern visage again, his jet-black metallic skin, his interchangeable hands. He concentrated on Avernus, but the robot didn’t appear. How about Euler and his glowing photocell eyes? Nope, no deal. Let’s try for Wohler, then, before he went nonfunctional trying to save Ariel on the outer wall of the Compass Tower. Golden and impressive, Wohler would be a wonderful choice. But no Wohler responded to his summons. He would have to talk to Ariel about this. As a lucid dream, it was shaping up as one hell of a failure.

Ariel, in her compartment aboard the ship, was also dreaming. Hers was not, however, a lucid dream. Deeper than that, it was a clear-cut nightmare.

Jacob Winterson, the humaniform robot who had been her servant, existed again. Jacob had been destroyed by Neuronius, one of the flying aliens called blackbodies. Neuronius had blown up and mangled most of Jacob (and himself in the bargain). The few charred pieces that remained were now buried in some unmarked area of the agricultural community she had initiated as a political compromise with the blackbodies. The compromise had worked. They had been about to destroy their planet’s new robot city entirely because it was a threat to their weather systems; however, an agricultural community was acceptable to all sides.

She missed Jacob. Very much. In that comfortable, detached way a human could love a robot, she had loved him. Not that it could ever have been real love. She was too much in love with Derec to be unfaithful to him except in dreams. On the other hand, she could not deny that she had sometimes been romantically attracted toward the handsome and imperturbable humaniform robot.