The Three Laws of Robotics: What Are They?

Isaac Asimov's three laws of robotics are rules meant to prevent robots from harming humans, and were originally created by Asimov to drive the plots of his fictional stories. Here are popular criticisms of the laws and their real-world impact.

Written by Brooke Becher
Updated by Matthew Urwin | Nov 18, 2024

The Three Laws of Robotics

  1. A robot may not injure a human or, through inaction, allow a human to come to harm.
  2. A robot must obey orders given to it by a human, except when such orders would conflict with the First Law.
  3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

The three laws of robotics are a set of rules defined by science fiction writer Isaac Asimov that are designed to prevent robots from harming humans. They serve as plot devices in Asimov's stories, showing how robots that follow the rules can still behave unpredictably and produce unexpected outcomes.
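To make the hierarchy concrete, here is a minimal sketch in Python that treats the laws as a lexicographic preference: a First Law violation outranks a Second Law violation, which outranks a Third. Every name in it is a hypothetical placeholder rather than anything Asimov or real robot software specifies, and the genuinely hard part, deciding whether an action actually harms a human, is hidden inside boolean fields the sketch simply assumes are known.

```python
# A toy sketch of Asimov's three laws as a lexicographic preference:
# First Law violations outrank Second Law violations, which outrank Third.
# The boolean fields are hypothetical placeholders; computing them for a
# real situation (what counts as "harm"?) is the unsolved part.
from dataclasses import dataclass


@dataclass
class Action:
    description: str
    harms_human: bool = False        # doing this would injure a human
    allows_human_harm: bool = False  # not a rescue: a human comes to harm
    violates_order: bool = False     # doing this disobeys a human order
    endangers_robot: bool = False    # doing this puts the robot at risk


def law_violations(a: Action) -> tuple:
    """Penalty tuple ordered by law priority: (First, Second, Third)."""
    return (
        int(a.harms_human or a.allows_human_harm),
        int(a.violates_order),
        int(a.endangers_robot),
    )


def choose(candidates: list) -> Action:
    """Pick the candidate that violates the highest-priority law the least."""
    return min(candidates, key=law_violations)


# Ordered to stay put while a bystander is in danger: the First Law wins,
# even though the rescue disobeys the order and endangers the robot.
stay = Action("stay put", allows_human_harm=True)
rescue = Action("pull the bystander clear", violates_order=True, endangers_robot=True)
print(choose([stay, rescue]).description)  # -> pull the bystander clear
```

All of the difficulty lives in those booleans, which is exactly where the criticisms below take aim.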

While entirely fictional, the three laws of robotics have shaped ethical discussions around artificial intelligence, contributed to roboticists’ real-world frameworks and even inspired safety policies at companies like Google.

 

Who Created the Three Laws of Robotics?

Science-fiction writer Isaac Asimov introduced the three laws of robotics in his 1942 short story “Runaround.” As a counter to the popular Frankensteinian trope, in which a creation destroys its maker, Asimov portrayed friendly, sympathetic robots and human-like androids hardwired with his code of ethics. At the same time, the laws drive plots in which robots make mistakes despite obeying them, highlighting the uncertainties involved in applying such rules in the real world.

Following its publication, the three laws of robotics became tenets of Asimov’s robot-based oeuvre and permanently altered the sci-fi genre as a whole.

Asimov, who is widely credited with coining the term “robotics,” was considered one of the “Big Three” science fiction writers alongside Robert Heinlein, the father of “hard science” fiction, and Arthur C. Clarke, author of 2001: A Space Odyssey.

 

What About the Zeroth Law?

The Zeroth Law states that “a robot may not harm humanity, or, by inaction, allow humanity to come to harm.” The dictum was introduced in Asimov’s 1985 novel Robots and Empire, and is meant to take precedence over the three laws.

The zeroth law requires decision-making robots to prioritize humanity as a whole, rather than any single individual, in the interest of the greater good. It not only overrides the other three laws but also allows a robot to override human commands if it can calculate long-term harm.


 

Criticisms of the Three Laws of Robotics

The lack of comprehensive standardization for robotics and AI, especially as these technologies advance, has led some leaders in the space to make up rules as they go. And so Asimov's fictional laws have, almost by default, functioned as guiding principles.

“As the technologies from science fiction start to come true, people begin to look to science fiction for aid in both understanding and maybe even regulating them,” Peter Singer, a strategist for public policy think tank New America Foundation and author of the essay “Isaac Asimov's Laws of Robotics Are Wrong,” told Built In. Inevitably, this fiction-to-reality crossover has invited some criticism.

Based on Fiction

The three laws were never meant for real-world application; they serve as plot devices that drive Asimov's stories. He wrote them with intentional ambiguity in order to plant loopholes that challenged his characters, as well as the audience, with ethical dilemmas. They're deeply flawed, and in every single one of Asimov's stories, they fail.

Ambiguous in Nature

The three laws are written in English, a natural language, which is inherently ambiguous and subject to multiple interpretations. This aspect makes it impractical to code the laws into precise, machine-readable instructions. Without fully decoding the nuances of a natural language, there’s no way to program a robot that exhibits consistent, safe behavior.

“A concept like ‘harm’ is really hard to program into a machine,” Gary Marcus, a cognitive psychologist, said at a World Science Festival panel. “It’s one thing to program in geometry or compound interest where we have precise, necessary and sufficient conditions — nobody has any idea how to, in a generalized way, get a machine to recognize something like ‘harm’ or ‘justice.’”
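Marcus's contrast is easy to make concrete. The sketch below is purely illustrative and not a real safety method: compound interest has an exact formula, while any machine-checkable stand-in for “harm,” here a hypothetical keyword list, immediately produces both false positives and false negatives.

```python
# Compound interest has precise, necessary and sufficient conditions:
def compound_interest(principal: float, rate: float, periods: int) -> float:
    return principal * (1 + rate) ** periods

print(round(compound_interest(1000, 0.05, 10), 2))  # 1628.89

# "Harm" does not. Any concrete check is a leaky approximation; this keyword
# list is a hypothetical stand-in, not how any real system defines harm.
HARM_KEYWORDS = {"cut", "burn", "drop", "strike"}

def looks_harmful(task_description: str) -> bool:
    words = set(task_description.lower().split())
    return bool(words & HARM_KEYWORDS)

print(looks_harmful("cut the birthday cake"))           # True  (false positive)
print(looks_harmful("leave the gas stove unattended"))  # False (false negative)
```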

Riddled with Moral Dilemmas

The laws assume a level of comprehension and judgment that current AI and robotics do not have and may never possess. To translate moral philosophy into a robot, you would have to encode ethical principles and decision-making processes with defined boundary conditions into algorithms built to handle complex real-world scenarios, all while continuously learning and adapting to new contexts and norms.

Misinterpreting the word “human” while coding a robot could, for example, lead to catastrophic results. If a robot were to somehow incorporate real-world ethnic cleansing campaigns into its programming, Singer said, then it may only recognize people of a certain group as human. “They would be following the laws,” he added, “but still be carrying out genocide.”

Inaccurate in Light of Today’s Robots

Military robots have become essential to modern warfare. Blatantly defying Asimov's first law, these machines use AI-driven software, algorithms and sensors to deliver targeted attacks from onboard weapons. According to the watchdog group Airwars, U.S. drone strikes have killed at least 22,000 civilians, and possibly as many as 48,000, since the 9/11 terrorist attacks.

Even when they're not designed for combat, robots kill by accident. The first recorded incident took place in 1979, when a Ford Motor Company employee climbed into a storage rack to retrieve parts and was struck and killed instantly by a one-ton robot arm.

Consumers are not immune to fatal mishaps involving robots, either. Over a 15-month span between 2022 and 2023, the National Highway Traffic Safety Administration (NHTSA) counted 467 crashes and 14 deaths tied to Tesla's “insufficient” Autopilot driver-assistance technology.


 

Are There Real-World Laws of Robotics?

Today, there are no universal laws of robotics that are enforceable in the real world. Robot-related policy — which shares notable crossover with automation and AI — is a hodgepodge of recommendations, guidelines and ethical frameworks in progress that are localized to specific states, countries or regions.

However, that doesn’t mean there isn’t a call for more regulation.

“Regulation is necessary,” Emma Ruttkamp-Bloem, an AI ethics researcher for the Centre for Artificial Intelligence Research who also serves on the United Nations AI Advisory Body, told Built In. This is because AI-driven tech carries the potential to widen human inequality and transgress privacy rights, Ruttkamp-Bloem said, adding that it can also be used to erode human autonomy and manipulate end-users into decisions based on disinformation, further stoking social and political instability.

Stateside, some federal regulations regarding automation and AI can be applied to robotic applications. For example, the Occupational Safety and Health Act, enforced by the Occupational Safety and Health Administration (OSHA), helps protect employees working alongside industrial robots and in-field cobots, and the Federal Aviation Administration (FAA) oversees the use of drones, a class of robots.

More sweeping legislation is being passed abroad. China takes the lead in the global race for AI regulation, imposing laws that govern algorithmic recommendations, chatbots and AI-generated deepfakes. And, with the AI Act coming into full effect in 2026, the European Union is set to stake its claim in how robot and AI regulations are to be interpreted globally with “the world's first comprehensive AI law.”

Below are some standout policies paving the way for universal robot-related regulations.

EPSRC Principles of Robotics (2011)

The Engineering and Physical Sciences Research Council and the Arts and Humanities Research Council of the United Kingdom published a set of five rules along with seven “high-level messages” to promote responsible practices in robotics research. Directed at designers, builders and robot users, these principles were designed to fast-track the incorporation of robots as legal products into UK society. They determine that humans — not robots — are to be held liable as the responsible party, maintaining that robots are tools used to assist humans. A robot should be designed to protect itself, and must not be programmed to kill, harm or deceive.

“These principles are the closest we get to actual advice on taking action [now],” Andra Keay, a roboticist and managing director at Silicon Valley Robotics, wrote in an ARM blog post. “Actions will almost certainly differ from place to place … but we can still get started right away.”

The EU Artificial Intelligence Act (2024)

Taking full effect in 2026, the European Union's Artificial Intelligence Act is the world's first major law for regulating AI. It creates a framework that regulates AI systems according to the potential risk they pose, divided into four categories: minimal, limited, high and unacceptable risk. Manipulative or exploitative applications, like governmental social scoring, are outright banned, whereas those with lighter risks may face no intervention or may only be required to label AI-generated content on their platform. The act regulates AI system providers and exempts products used for military, research and non-professional purposes.

Google’s Robot Constitution (2024)

Google's team at DeepMind, an AI-focused research lab, created a set of guidelines known as its Robot Constitution to ensure the safety and ethical operation of AI-powered robots. Inspired by Isaac Asimov's three laws of robotics, it includes safety-focused prompts that instruct robots to avoid tasks involving humans, animals, sharp objects and electrical appliances. The constitution is part of the AutoRT system, which uses computer vision and large language models to help robots assess their environments and make safe, appropriate decisions. It also incorporates traditional safety measures, such as automatic force limits and a physical kill switch, to make human-robot interactions safer.
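The general pattern described here, screening a proposed task against a written rule list before anything moves and enforcing a hard force limit at runtime, can be sketched in a few lines. Everything in the sketch below (the topic list, the function names, the 20-newton threshold, the keyword matching that stands in for an LLM critic) is a hypothetical illustration, not DeepMind's implementation.

```python
# A hypothetical sketch of a "constitution"-style screen for robot tasks,
# loosely modeled on the pattern described above. The rule list, names and
# the 20 N force limit are illustrative assumptions, not DeepMind's code.
PROHIBITED_TOPICS = {
    "human": ["person", "human", "child"],
    "animal": ["dog", "cat", "animal"],
    "sharp object": ["knife", "scissors", "blade"],
    "electrical appliance": ["toaster", "microwave", "outlet"],
}

MAX_FORCE_NEWTONS = 20.0  # illustrative threshold, not a real specification


def screen_task(task: str) -> tuple:
    """Reject tasks that mention a prohibited topic before any motion starts.
    A real system would use a language-model critic, not keyword matching."""
    lowered = task.lower()
    for topic, keywords in PROHIBITED_TOPICS.items():
        if any(word in lowered for word in keywords):
            return False, f"rejected: task involves a {topic}"
    return True, "approved"


def force_ok(measured_force_newtons: float) -> bool:
    """Runtime check: stop motion if contact force exceeds the limit."""
    return measured_force_newtons <= MAX_FORCE_NEWTONS


print(screen_task("wipe the table with a sponge"))      # (True, 'approved')
print(screen_task("pick up the knife on the counter"))  # (False, 'rejected: ...')
print(force_ok(35.2))  # False, which would trigger an emergency stop
```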

UNESCO’s Report on Robotics (2017)

In partnership with the World Commission on Ethics of Scientific Knowledge and Technology, the United Nations Educational, Scientific and Cultural Organization published its Report on Robotics, which includes a series of recommendations for human-robot relations. In it, the organizations suggest a code of ethics at every level — from conception to fabrication and usage — as well as national and international conventions that are regularly updated at pace with advancing tech. These principles should prioritize values such as human dignity, autonomy and privacy, and include a regulatory framework of accountability, particularly for autonomous “cognitive” robotics, whether liability is tagged to the manufacturer or end user.

U.S.’s Blueprint for an AI Bill of Rights (2022)

Created by the White House's Office of Science and Technology Policy and published in 2022, the Blueprint for an AI Bill of Rights (often shortened to the AI Bill of Rights) was developed in collaboration with academics, human rights groups, the public and the leaders of major tech companies. It outlines five core principles for the responsible design and use of AI, aimed at making systems safer, less discriminatory and more transparent and at addressing potential civil rights harms in areas such as hiring, education, healthcare and financial services: AI systems should be safe and effective, protect against algorithmic discrimination, safeguard data privacy, provide notice when automation is in use, and let users opt out in favor of a human-run alternative. Currently, it serves as a national framework to inform policy and practice, but it is not legally binding.

Frequently Asked Questions

What are the three laws of robotics?

The three laws of robotics were introduced by science fiction writer Isaac Asimov. They are:

  1. A robot may not injure a human or, through inaction, allow a human to come to harm.
  2. A robot must obey orders given to it by humans except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Is there a fourth law of robotics?

Yes; the fourth law of robotics is referred to as the zeroth law. It was made to supersede the original three laws. It states, “a robot may not harm humanity, or, by inaction, allow humanity to come to harm,” as a way to avoid unintentional and mass endangerment.

Who created the three laws of robotics?

Sci-fi writer and biochemist Isaac Asimov originally introduced the three laws of robotics in his 1942 short story “Runaround.”

Do robots and AI have to follow the three laws of robotics?

Asimov's three laws of robotics are fictional plot devices, so neither robots nor AI systems must follow these laws in real life. While the laws can be used to evaluate AI systems, they don't always apply to artificial intelligence in the same way that they do to robots.
