Explanation of the 3 Laws of Robotics
- May 02, 2016
The 3 Laws of Robotics are a set of rules conceived by the celebrated science fiction author Isaac Asimov, and are also known as Asimov's Laws. They first appeared in his short story "Runaround". The laws have some major flaws, but we have to remember that they were written from the point of view of fiction, showing readers the ways in which a robot might interact with humans. These 3 laws laid the foundation of robotics as a field, and hence they still hold much importance. You can consider them similar to Dalton's atomic theory, which also has major flaws but still holds value in chemistry. Today we'll go through the 3 laws in detail and also highlight the flaws they contain. Before beginning, we should add that Asimov later added a 0th law to the other 3; we'll include that law in our discussion as well.
3 Laws of Robotics and the 0th Law
Asimov's 3 laws state that:
- “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
- “A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.”
- “A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.”
Asimov modified these 3 laws slightly in various stories, as convenient, to further develop the interactions between robots and humans. He also added a 0th law (sometimes called the 4th law), which states that:
“A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”
The 3 laws can be summarised in 3 short sentences, which should be enough to bring out the meaning of the underlying principles. You may not agree with the interpretation we present here, because the laws make certain assumptions that many professionals consider faulty. We'll discuss the faults later; for now, here is what the laws are meant to achieve, in the same order as they are depicted above.
- Do not harm human beings.
- Obey orders.
- Protect themselves.
In the first law, Asimov clearly stated that robots should not harm humans or allow humans to come to harm, presumably from an external source. The second law states that robots should obey all human beings, but only so long as they are not ordered to harm any human. The third law is clearly based on the robot's own safety. The laws look very neat, which is good, but let's look at them from different angles.
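One way to picture the ordering described above is as a precedence chain: the laws are consulted from highest priority to lowest, and the first law with an opinion decides. The sketch below is purely illustrative (all the function and field names are hypothetical, not any real robotics API):

```python
# A toy model of the 3 laws as a precedence chain. Each law is checked in
# order, so lower laws only apply when the higher laws are silent.

def decide(action: dict) -> bool:
    """Return True if the robot may perform the action."""
    # First Law: absolute veto on harming a human (or allowing harm).
    if action.get("harms_human"):
        return False
    # Second Law: a human order is followed, even at a cost to the robot.
    if action.get("ordered_by_human"):
        return True
    # Third Law: absent an order, self-preservation wins.
    if action.get("destroys_self"):
        return False
    return True

# An ordered self-sacrifice is carried out (Second outranks Third)...
print(decide({"ordered_by_human": True, "destroys_self": True}))  # True
# ...but no order can make the robot harm a human (First outranks Second).
print(decide({"ordered_by_human": True, "harms_human": True}))    # False
```

Note how the precedence falls out of nothing more than the order of the `if` statements: the hard part, as we discuss below, is deciding what counts as "harm" in the first place.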
Let’s take a look at the flaws with a simple example for easy understanding of the context.
Say, for example, you have a hidden camera placed somewhere in your workplace, and you give it an order to turn itself on and monitor the incidents occurring at a particular time. We won't get into how you gave the order, because there are various ways to do that through human-machine interaction: voice, programs and so on. The main thing is that you gave it an order, which it should perform. It's not an order that is going to harm a human being, even if your intentions are different. Say you want to spy on the president, which is harmful from the point of view of the government. But the machine will not understand that. The order is simple enough for it: "Monitor". This simple word is not harmful. The machine will work and will obey your order. How do you explain this? The machine is obeying an order that is harmful to human beings. Therefore this is a flaw for sure. Now look at it from a different perspective.
You have installed a security camera with the good intention of monitoring the happenings at your workplace. If someone orders the camera to switch itself off, then it should switch itself off according to the laws. Will it take into account the intention of the person who gave the order? The order is simply "Switch off". This is harmless enough. The machine will obey the order even though the person clearly has a bad intention in switching the machine off. Hence the laws can be deemed flawed. The reason is simple: machines do not know what "harm" actually means, and the degree of "harm" is unclear to them.
Another point is that the laws are written in plain English, which looks fine at first glance but makes them almost impossible to program into a machine.
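The two camera examples above can be made concrete with a small sketch. Suppose (hypothetically) that we tried to encode the First Law as a keyword filter over the text of an order. The word "harm" never appears in either harmful order, so the filter passes them both; the harm lives in the giver's intent, not in the words of the command:

```python
# A minimal sketch (hypothetical names, not a real robotics API) of why
# encoding the First Law as a keyword filter fails: harmful orders rarely
# mention harm, so the filter cannot reject them.

HARMFUL_KEYWORDS = {"harm", "hurt", "injure", "kill"}

def violates_first_law(order: str) -> bool:
    """Naive check: flag an order only if it literally mentions harm."""
    words = order.lower().split()
    return any(keyword in words for keyword in HARMFUL_KEYWORDS)

def obey(order: str) -> str:
    """Second Law: obey any order that does not conflict with the First."""
    if violates_first_law(order):
        return f"refused: {order!r}"
    return f"executing: {order!r}"

# Both orders from the camera examples pass the filter, even though one of
# them enables spying and the other disables security for an intruder.
print(obey("monitor the office"))
print(obey("switch off"))
# Only an explicitly worded harmful order is caught.
print(obey("injure the intruder"))
```

The point of the sketch is not that keyword matching is a serious proposal, but that any real implementation would need the machine to judge consequences and intent, which is exactly what the plain-English laws quietly assume.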
Similarly, there are countless examples showing that Asimov's laws are flawed; we hope you get the point. The science fiction stories Asimov wrote are undoubtedly very interesting, but the laws seem suitable only in those stories, where the plot conveniently supports them. Reality is much different: it's almost impossible to implement these laws in real-life robotics.
Futurite aims to bring high-quality STEM (Science, Technology, Engineering and Mathematics) education to school students in India. Our mission is to use cutting-edge technologies such as Robotics and 3D Printing to impart a thorough grounding in STEM subjects and concepts to school students while making their learning experience fun, engaging and hands-on. Futurite has been operating successfully in Dubai (as Premier Genie) for a number of years, working with over 25 of the top international schools and teaching 1000+ students. Futurite was founded by alumni of IIT, IIM, JU and HEC Paris.