The 3 Laws of Robotics: What Are They?

What began as a science-fiction trope is now a guiding light for robotics regulation.

Written by Brooke Becher
Published on Jun. 25, 2024

The three laws of robotics are a set of rules devised by science fiction writer Isaac Asimov to prevent robots from harming humans. The laws are:

The 3 Laws of Robotics

  1. A robot may not injure a human or, through inaction, allow a human to come to harm.
  2. A robot must obey orders given to it by a human, except when such orders would conflict with the First Law.
  3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

While entirely fictional, the three laws of robotics are often credited as the basis for legitimate ethical frameworks drafted by AI and robotics researchers to assess a project’s potential impact on humanity.

 

Who Created the 3 Laws of Robotics?

Science-fiction writer Isaac Asimov introduced the three laws of robotics in his 1942 short story “Runaround.” As a counter to the popular Frankensteinian trope, in which a creation destroys its maker, Asimov portrayed friendly, sympathetic robots and human-like androids hardwired with his code of ethics.

Following its publication, the three laws of robotics became tenets of Asimov’s robot-based oeuvre, and permanently altered the sci-fi genre as a whole.

Asimov, who is widely credited with coining the term “robotics,” was considered one of the “Big Three” science fiction writers, alongside Robert Heinlein, the father of hard science fiction, and Arthur C. Clarke, author of 2001: A Space Odyssey.

 

What About the Zeroth Law?

The Zeroth Law states that “a robot may not harm humanity, or, by inaction, allow humanity to come to harm.” The dictum was introduced in Asimov’s 1985 novel Robots and Empire, and is meant to take precedence over the three laws.

The zeroth law requires decision-making robots to prioritize humanity as a whole, rather than any single individual, in the interest of the greater good. It not only takes precedence over the other laws but also allows a robot to override human commands if it calculates that obeying them would lead to long-term harm.

Related Reading: Here’s How AI Is Building a Robot-Filled World

 

Criticisms of the 3 Laws of Robotics

The lack of comprehensive standardization for robotics and AI, especially as they advance, has led some leaders in the space to make up rules as they go. And so Asimov’s theories have, almost by default, functioned as guiding principles.

“As the technologies from science fiction start to come true, people begin to look to science fiction for aid in both understanding and maybe even regulating them,” Peter Singer, a strategist for public policy think tank New America Foundation and author of “Isaac Asimov’s Laws of Robotics Are Wrong,” told Built In. Inevitably, this fiction-to-reality crossover has invited some criticism.

They’re Fictional

Yeah, totally made up. The three laws were never meant for real-world application; they serve as plot devices that drive Asimov’s stories. He wrote them with intentional ambiguity, planting loopholes that confront his characters, and his readers, with ethical dilemmas. They’re deeply flawed, and in every single one of Asimov’s stories, they fail.

They’re Written in English

The three laws are written in English, a natural language that is inherently ambiguous and open to multiple interpretations. That makes it impractical to translate the laws into precise, machine-readable instructions for real-world applications. Without fully decoding the nuances of natural language into identifiable boundary conditions, there is no reliable way to program a robot to exhibit consistent, safe behavior.

“A concept like ‘harm’ is really hard to program into a machine,” Gary Marcus, a cognitive psychologist, said at a World Science Festival panel. “It’s one thing to program in geometry or compound interest where we have precise, necessary and sufficient conditions — nobody has any idea how to, in a generalized way, get a machine to recognize something like ‘harm’ or ‘justice.’”
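To make the criticism concrete, here is a minimal, hypothetical sketch, written in Python and not drawn from Asimov or any real robotics stack, of what a literal translation of the laws might look like. The precedence logic between the laws is trivial to encode; the placeholder predicates causes_harm_to_human and endangers_robot are invented names, and they are exactly where the approach breaks down, because there is no precise, machine-readable definition of “harm” to put inside them.

```python
# A minimal, hypothetical sketch of a literal "three laws" check.
# The ordering of the laws is easy to encode; the predicates are not.
from dataclasses import dataclass


@dataclass
class Action:
    description: str
    ordered_by_human: bool = False


def causes_harm_to_human(action: Action) -> bool:
    # Placeholder. Implementing this in general would require precise,
    # machine-readable boundary conditions for "harm" -- exactly what the
    # critics above say nobody knows how to specify. (The First Law's
    # inaction clause, "allow a human to come to harm," is harder still.)
    raise NotImplementedError("No general definition of 'harm' to implement")


def endangers_robot(action: Action) -> bool:
    # Placeholder for the Third Law's notion of self-preservation.
    raise NotImplementedError("Self-preservation is equally underspecified")


def permitted(action: Action) -> bool:
    """Apply the three laws in order of precedence."""
    if causes_harm_to_human(action):    # First Law outranks everything
        return False
    if action.ordered_by_human:         # Second Law: obey, unless blocked above
        return True
    return not endangers_robot(action)  # Third Law: otherwise, self-preserve
```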

They’re Riddled with Moral Dilemmas

Before you could implement the three laws in a machine, you’d almost have to solve ethics as a whole first. The laws assume a level of comprehension and judgment that current AI and robotics do not have and may never possess. To translate moral philosophy into a robot, you would have to encode ethical principles and decision-making processes, with defined boundary conditions, into algorithms built to handle complex real-world scenarios, all while continuously learning and adapting to new contexts and norms.

Misinterpreting the word “human” while coding a robot could, for example, lead to catastrophic results. If a robot were to somehow incorporate real-world ethnic cleansing campaigns into its programming, Singer said, then it may only recognize people of a certain group as human. “They would be following the laws,” he added, “but still be carrying out genocide.”

Robots Kill (And May Be Designed To)

Military robots have become essential to modern warfare. Blatantly defying Asimov’s first law, these machines use AI-driven software, algorithms and sensors to deliver targeted attacks from onboard weapons. In other words, they’re definitely doing harm.

Take, for instance, MAARS, a tiny unmanned tank from QinetiQ that’s equipped with a machine gun and grenade launchers, or Ghost Robotics’ rifle-equipped robodogs that have been recruited by the U.S. Marine Corps. According to the watchdog group Airwars, U.S. drone strikes have killed at least 22,000 civilians, and possibly as many as 48,000, since the 9/11 terrorist attacks.

Even when they’re not designed for combat, robots kill by accident. Malfunctions, though unintentional, have been racking up a death toll since the inception of workplace robots. The first recorded incident took place in 1979, when a one-ton robot at a Ford Motor Company plant struck and instantly killed a worker who was retrieving parts from a storage rack.

Even consumers are not immune to fatal mishaps involving robots. Over a 15-month span between 2022 and 2023, the National Highway Traffic Safety Administration (NHTSA) counted 467 crashes and 14 deaths involving Tesla’s “insufficient” Autopilot driver-assistance technology.

Related Reading: Types of Robots and How They’re Used

 

Are There Real-World Laws of Robotics?

Today, there are no universal laws of robotics that are enforceable in the real world. Robot-related policy, which shares notable crossover with automation and AI policy, is a hodgepodge of recommendations, guidelines and in-progress ethical frameworks localized to specific states, countries or regions.

However, that doesn’t mean there isn’t a call for more regulation.

“Regulation is necessary,” Emma Ruttkamp-Bloem, an AI ethics researcher for the Centre for Artificial Intelligence Research who also serves on the United Nations AI advisory body, told Built In. This is because AI-driven tech carries the potential to widen human inequality and transgress privacy rights, Ruttkamp-Bloem said, adding that it can also be used to erode human autonomy and manipulate end-users into decisions based on disinformation, further stoking social and political instability.

But “lists and lists of principles and values are useless,” she continued. “There should be a focus on actionable policy.”

Stateside, some federal regulations regarding automation and AI can be applied to robotic applications, given their overlap. For example, the Occupational Safety and Health Administration (OSHA) helps protect employees working alongside industrial robots and in-field cobots, and the Federal Aviation Administration (FAA) oversees the use of drones, a class of robots.

More sweeping legislation is being passed abroad. China leads the global race for AI regulation, having already imposed laws that govern recommendation algorithms, chatbots and deepfakes. And with the AI Act taking full effect in 2026, the European Union is set to stake its claim in how robot and AI regulations are interpreted globally with “the world’s first comprehensive AI law.”

Below are some standout policies paving the way for universal robot-related regulations.

EPSRC Principles of Robotics (2011)

In a joint effort, the United Kingdom’s Engineering and Physical Sciences Research Council (EPSRC) and Arts and Humanities Research Council (AHRC) published a set of five rules, along with seven “high-level messages,” to promote responsible practices in robotics research. Directed at designers, builders and robot users, these principles were designed to fast-track the incorporation of robots as legal products into UK society. They determine that humans, not robots, are to be held liable as the responsible party, maintaining that robots are tools used to assist humans. A robot should be designed to protect itself, and it must not be programmed to kill, harm or deceive.

“These principles are the closest we get to actual advice on taking action [now],” Andra Keay, a roboticist and managing director at Silicon Valley Robotics, wrote in an ARM blog post. “Actions will almost certainly differ from place to place … but we can still get started right away.”

The EU Artificial Intelligence Act (2024)

Taking full effect in 2026, the European Union’s Artificial Intelligence Act is the world’s first major law regulating AI. It creates a framework that regulates AI systems according to the potential risk they pose, divided into four categories: minimal, limited, high and unacceptable. Manipulative or exploitative applications, like governmental social scoring, would be outright banned, whereas those posing lighter risks may face no intervention or only be required to label AI-generated content on their platforms. The act regulates AI system providers and exempts products used for military, research and non-professional purposes.

Google’s Robot Constitution (2024)

Google’s team at DeepMind, an AI-focused research lab, created a set of guidelines as part of its Robot Constitution to ensure the safety and ethical operation of AI-powered robots. Inspired by Isaac Asimov’s Three Laws of Robotics, it includes safety-focused prompts that instruct robots to avoid tasks involving humans, animals, sharp objects and electrical appliances. This constitution is part of the AutoRT system, which uses computer vision and large language models to help robots gauge their environment in order to make safe, appropriate decisions. It also incorporates traditional safety measures, such as automatic force limits and a physical kill switch, to enhance human-robot interaction safety.
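For a sense of how such rules might gate a robot’s behavior in practice, below is a toy, hypothetical sketch in Python. It is not DeepMind’s code: AutoRT reportedly uses large language models to vet proposed tasks, whereas the keyword filter, rule list and force limit here are invented stand-ins meant only to illustrate the idea of screening a natural-language task against written safety rules before execution.

```python
# A toy, hypothetical illustration of a constitution-style task screen.
# DeepMind's AutoRT reportedly uses large language models for this step;
# the keyword filter below is a stand-in purely to show the idea of
# vetting a proposed task against written safety rules before execution.

# Hypothetical rules loosely paraphrasing the kinds of prompts described above.
FORBIDDEN_TOPICS = {
    "human": "tasks involving people are off-limits",
    "animal": "tasks involving animals are off-limits",
    "knife": "sharp objects are off-limits",
    "outlet": "electrical appliances and outlets are off-limits",
}

MAX_FORCE_NEWTONS = 20.0  # illustrative force limit, not a real AutoRT value


def screen_task(task_description: str, estimated_force_n: float) -> tuple[bool, str]:
    """Return (approved, reason) for a proposed natural-language task."""
    lowered = task_description.lower()
    for keyword, reason in FORBIDDEN_TOPICS.items():
        if keyword in lowered:
            return False, f"Rejected: {reason}"
    if estimated_force_n > MAX_FORCE_NEWTONS:
        return False, "Rejected: exceeds force limit"
    return True, "Approved"


if __name__ == "__main__":
    print(screen_task("pick up the sponge on the table", 5.0))
    print(screen_task("hand the knife to the person", 5.0))
```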

UNESCO’s Report on Robotics (2017)

In partnership with the World Commission on the Ethics of Scientific Knowledge and Technology (COMEST), the United Nations Educational, Scientific and Cultural Organization (UNESCO) published its Report on Robotics, which includes a series of recommendations for human-robot relations. In it, the organizations suggest a code of ethics at every level, from conception to fabrication and usage, as well as national and international conventions that are regularly updated to keep pace with advancing tech. These principles should prioritize values such as human dignity, autonomy and privacy, and include a regulatory framework of accountability, particularly for autonomous “cognitive” robotics, whether liability is assigned to the manufacturer or the end user.

U.S.’s Blueprint for an AI Bill of Rights (2022)

Created by the White House’s Office of Science and Technology Policy (OSTP) and published in 2022, the Blueprint for an AI Bill of Rights was developed in collaboration with academics, human rights groups, the public and the leaders of major tech companies. It outlines five core principles to guide the responsible design and use of AI, aimed at making systems safer, less discriminatory and more transparent and at addressing potential civil rights harms in areas such as hiring, education, healthcare and financial services: AI systems should be safe and effective, avoid algorithmic discrimination, protect data privacy, label automated content, and let users opt out in favor of a human-run alternative. Currently, it serves as a national framework to inform policy and practice, but it is not legally binding.

Frequently Asked Questions

What are the 3 laws of robotics?

The three laws of robotics were introduced by science fiction writer Isaac Asimov. They are:

  1. A robot may not injure a human or, through inaction, allow a human to come to harm.
  2. A robot must obey orders given to it by humans except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Is there a 4th law of robotics?

Yes. The fourth law of robotics is referred to as the zeroth law, and it was made to supersede the original three. It states that “a robot may not harm humanity, or, by inaction, allow humanity to come to harm,” as a way to avoid unintentional, large-scale endangerment.

Who came up with the 3 laws of robotics?

Sci-fi writer and biochemist Isaac Asimov originally introduced the three laws of robotics in his 1942 short story “Runaround.”

