Fear of uncontrolled technology is characteristic of humanity, rooted in a deep understanding of the problems connected with uncontrolled power. Literature of different epochs has examined this problem many times in an effort to describe the future of humanity in terms of the consequences of technological growth. Isaac Asimov not only examines this problem in his works, but also tries to solve it by creating rules that would limit it. In this manner, he created the Three Laws of Robotics.
Three Laws of Robotics
The Three Laws of Robotics are obligatory rules of behavior for robots, first formulated by Isaac Asimov in the story “Runaround” (1942).
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov’s cycle of stories about robots is devoted to these three Laws and to the possible causes and effects of their violation. Some of the stories (for example, “Mirror Image”) consider the unforeseen consequences of robots following the Three Laws.
In 1986, in the novel Robots and Empire, Asimov proposed the Zeroth Law:
0. A robot may not harm a human being unless it can prove that this will ultimately benefit humanity as a whole.
Ethical Justification of the Laws
In the story “Evidence” (1946), Asimov sets out the moral justification of the Three Laws in detail. One of the story’s characters, Dr. Susan Calvin, puts forward the following arguments:
- A person usually refrains from harming another person, except in cases of acute compulsion (for example, in war) or in order to save more people. This is equivalent to the First Law.
- Similarly, feeling a civic responsibility, a person follows the instructions of authoritative people: doctors, teachers, superiors and so on, which corresponds to the Second Law.
- Finally, each of us cares for his own safety, and this corresponds to the Third Law.
The story is devoted to the question of whether it is possible to distinguish a human being from a robot created to look like a human and outwardly indistinguishable from one. Calvin asserts that if someone follows the Laws, he is “either a robot or a very good man”. Asked whether there is then any great difference between a robot and a human, she answers that there is “a huge difference. First of all, robots are deeply decent”.
Applications of the Three Laws outside Fiction
If the Three Laws embody deep ethical principles, can they be applied to something other than robots? Many authors answer this question affirmatively, citing various examples.
In the essay “The Three Laws of Robotics”, Asimov notes that the Three Laws can be extended to all tools created by humans:
- A tool should be safe to use; for example, knives should have handles.
- A tool should carry out its functions, provided that it does not endanger anyone.
- A tool should remain intact during its use, unless its destruction is dictated by safety considerations or is part of its function.
Violation of All Three Laws
Three times Isaac Asimov presents in his works robots that could break all three Laws, in counterbalance to the robots Daneel and Giskard, who strengthened them by adding the Zeroth Law. It should be noted that to violate all three Laws it is enough for a robot to break the First Law, as the other two are based on it.
- The first such case is described in the story “First Law”, when the robot MA-2 (Emma) refused to protect a human in favor of its “daughter”. However, as Asimov’s preface to the story in the collection Robot Dreams makes clear, this story can be considered a kind of tall tale.
- The second case is in the story “Cal”: a robot who is to be deprived of his creative abilities wants to kill his owner.
- The third case is in the story “Sally”, in which robot cars are able to kill the person who constantly inflicted pain on them. This story, however, is unconnected with the other stories about positronic robots and cannot be included in the cycle.
The story “Robot Dreams” tells of the robot LVX-1 (Elvex) who, thanks to the peculiar “fractal geometry” of his positronic brain, can enter a state of unconsciousness and dream. He dreams that “robots are toiling in slavery, that they are oppressed by excessive labor and deep grief, that they are tired of endless work”. In his dream the robots do not follow the first two Laws, and the Third sounds like this: “A robot must protect its own existence”. Elvex adds that his dream contained a man who said “Let my people go!”, meaning the robots, and that this man was Elvex himself. Realizing the danger he poses, Susan Calvin destroys Elvex.
The Possibility of Changing the Laws
In his works, Isaac Asimov took different views of the strictness of the Laws. In the first stories about robots, the Laws are simply well-designed restrictions, something like a safety regulation. In subsequent stories, the Laws are an integral part of the mathematical foundation of the positronic brain. Without this theoretical basis, the so-called “Fundamental Theory of Standard Circuits”, the scientists in Asimov’s works could not create any workable models. This can be traced, for instance, in cases when roboticists experiment with the Laws. In the story “Little Lost Robot”, Susan Calvin admits that changing the Laws is a terrible, but technically possible, undertaking. Later, in The Caves of Steel, Dr. Gerrigel says that it is impossible in principle.
Characters in Asimov’s works often note that the Laws are not verbally written sentences in the robot’s memory, but rather complex mathematical formulas on which the robot’s entire consciousness is based. The Laws are similar to human instincts, such as the instinct of self-preservation; hence, they chart a path of self-determination for robots. A robot serves people “by nature’s call”, obeys their orders, and does not think about restrictions or possible independence, since the latter would be an inconvenience.
Resolution of Conflicts
The most advanced models of robots usually follow the Laws by a rather artful algorithm that allows them to avoid some problems. In many stories, for example in “Runaround”, the positronic brain compares the potentials of possible actions and outcomes, and will sooner violate the Laws in the least harmful way than do nothing at all. For example, the First Law does not allow a robot to perform surgical operations, since it is necessary to “harm” a person in this case. Nevertheless, robot surgeons can be found in Asimov’s stories; a vivid example is “The Bicentennial Man”. The point is that a robot, if it is perfect enough, can weigh all the alternatives and understand that it will cause much less harm than if the operation were conducted by a human surgeon, or did not take place at all. In “Evidence”, Susan Calvin even says that a robot could serve as a public prosecutor, since he does not personally harm anybody: there is still a jury that establishes guilt, a judge who passes sentence, and an executioner who carries it out.
To conclude, the Three Laws of Robotics were an attempt to solve the problem of uncontrolled technological danger. Isaac Asimov devised many ways these laws could be broken. The task of his exploration was to arrive at a complete, ideal version of the three laws and thereby to solve the problem of robotics entirely.
War of the Worlds
In 1877, the Italian astronomer Giovanni Virginio Schiaparelli discovered on Mars a network of rectilinear lines which he named canals. A hypothesis arose according to which these canals were artificial constructions. This point of view was subsequently refuted, but during Schiaparelli’s lifetime it enjoyed wide recognition.
From here, the thought that this planet was inhabited logically followed. Certainly, something also contradicted it. Mars is older than the Earth and lies farther from the Sun, and if life began on it earlier, that life is already approaching its end. The average daytime temperature in its equatorial belt is no higher than ours in the coldest weather, its atmosphere is very rarefied, and huge masses of ice accumulate at its poles. But does it not follow that during the existence of Mars its inhabitants have developed technology incomparable with the terrestrial, and with it the aspiration to move to another planet more convenient for life? At first, Wells appears as a kind of follower of Jules Verne, a certain “technical fantast”.
The Martians brought new principles of science and technology to the Earth. Their fighting tripods walking with the speed of a bird, their heat and light rays, their gas attacks, and their ability to use articulated rather than wheeled mechanisms, which engineers of future generations would come to, are harbingers of robotics.
Flying machines heavier than air were only being planned, but Wells’ Martians already build their own.
And everything that the Martians brought with them, Wells predicts, people will master in due course. The matter is not technology alone: the invasion of the Martians threatened not only England, but our whole planet.
And Wells comes to his favorite idea: “Perhaps the invasion of the Martians will not remain without benefit for people; it has taken away the serene faith in the future which so easily leads to decline, and it has promoted the idea of a unified organization of mankind”.
The concept of fear in “War of the Worlds” is fear of things beyond understanding, in contradistinction to Isaac Asimov’s “I, Robot”. The main distinction is that the technological danger in “War of the Worlds” is external, and humanity can do nothing at all against this type of danger, whereas the problems of robotics highlighted in “I, Robot” are problems of uncontrolled force created by humans themselves.