The Three Laws of Robotics, created by science fiction writer Isaac Asimov, are rules designed to govern the behavior of robots and prevent them from harming humans. In Asimov’s stories they serve as plot devices, showing how robots that follow the rules faithfully can still act unpredictably and produce unexpected outcomes.
While entirely fictional, the Three Laws of Robotics have been adopted into real-life ethical discussions about artificial intelligence, inspiring real-world robotics frameworks and even AI safety policies at companies like Google.
The Three Laws of Robotics
- A robot may not injure a human or, through inaction, allow a human to come to harm.
- A robot must obey orders given to it by a human, except when such orders would conflict with the First Law.
- A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
What Are the Three Laws of Robotics?
Isaac Asimov’s Three Laws of Robotics, introduced in the 1942 short story “Runaround,” are as follows:
- First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- Second Law: A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
- Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
What Is the Zeroth Law of Robotics?
The Zeroth Law of Robotics states that “a robot may not harm humanity, or, by inaction, allow humanity to come to harm.” It was introduced in Asimov’s 1985 novel Robots and Empire, and is meant to take precedence over the Three Laws of Robotics.
The Zeroth Law requires decision-making robots to prioritize humanity as a whole over any single individual, in the interest of the greater good. It not only overrides the other laws, but also allows a robot to override human commands if it can calculate long-term harm.
Who Created the Three Laws of Robotics?
Science fiction writer Isaac Asimov created the Three Laws of Robotics, and introduced the concept in his 1942 short story “Runaround.” Following its publication, the Three Laws of Robotics became tenets of Asimov’s robot-based oeuvre and permanently altered the sci-fi genre as a whole.
Asimov, who is also widely credited with coining the term “robotics,” is considered one of the “Big Three” science fiction writers alongside Robert Heinlein (the father of “hard science fiction”) and Arthur C. Clarke (author of 2001: A Space Odyssey).
Why Did Isaac Asimov Create the Three Laws of Robotics?
Isaac Asimov largely created the Three Laws of Robotics as a counter to the popular “Frankensteinian” trope, or “Frankenstein complex,” which holds that an artificial creation of life will inevitably seek to destroy its maker. Instead, Asimov applied the Three Laws to portray friendly, sympathetic robots and human-like androids hardwired with this particular code of ethics, emphasizing their utility and companionship to humans in contrast to creations following the Frankenstein complex.
At the same time, the Three Laws drive plots in Asimov’s stories where robots make mistakes despite following these laws, highlighting the unpredictability of these machines and their behaviors when used for real-world applications.
Do Real Robots Follow the Three Laws of Robotics?
While Asimov’s Three Laws of Robotics remain a staple of sci-fi — and have been used as inspiration for real-life AI frameworks — they are not widely encoded into real-world robots. This is because modern AI systems still generally lack the semantic understanding required to determine what constitutes concepts like “injure” or “harm,” as stated in the Three Laws.
Instead of rigid rules or philosophies, modern AI safety often relies on guardrails and off-switch mechanisms, which help prevent a system from being misused or from causing accidents in critical settings.
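The guardrail-plus-off-switch pattern can be sketched in a few lines. This is a toy illustration only, with all class and function names invented for the example; real systems rely on trained classifiers and hardware interlocks rather than keyword lists.

```python
# Toy sketch of a guardrail plus kill-switch pattern (all names hypothetical).
# Real deployments use trained safety classifiers and hardware interlocks,
# not a keyword denylist.

BLOCKED_KEYWORDS = {"weapon", "disable sensor", "override safety"}

class RobotController:
    def __init__(self):
        self.emergency_stopped = False

    def kill_switch(self):
        """Off-switch: once engaged, no further commands are executed."""
        self.emergency_stopped = True

    def execute(self, command: str) -> str:
        if self.emergency_stopped:
            return "refused: emergency stop engaged"
        # Guardrail: screen the command against a safety policy before acting.
        if any(k in command.lower() for k in BLOCKED_KEYWORDS):
            return "refused: command violates safety policy"
        return f"executing: {command}"

controller = RobotController()
print(controller.execute("pick up the box"))   # executing: pick up the box
print(controller.execute("fetch the weapon"))  # refused: command violates safety policy
controller.kill_switch()
print(controller.execute("pick up the box"))   # refused: emergency stop engaged
```

Note that the off-switch here is just a flag checked in software; in practice it is backed by a physical cutoff, since a purely software switch can fail along with the rest of the system.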
Criticisms of the Three Laws of Robotics
The lack of comprehensive standardization for robotics and AI, especially as they advance, has led some leaders in the space to make up rules as they go. As a result, Asimov’s laws have, almost by default, functioned as guiding principles.
“As the technologies from science fiction start to come true, people begin to look to science fiction for aid in both understanding and maybe even regulating them,” Peter Singer, a strategist at the public policy think tank New America Foundation and author of the essay “Isaac Asimov’s Laws of Robotics Are Wrong,” told Built In. Inevitably, this fiction-to-reality crossover has invited criticism.
Based on Fiction
The Three Laws were never meant for real-world application; they serve as plot devices that drive Asimov’s stories. He wrote them with intentional ambiguity in order to plant loopholes that challenged his characters, as well as the audience, with ethical dilemmas. They’re deeply flawed, and in every single one of Asimov’s stories, they fail.
Ambiguous in Nature
The Three Laws are written in English, a natural language, which is inherently ambiguous and subject to multiple interpretations. This aspect makes it impractical to code the laws into precise, machine-readable instructions. Without fully decoding the nuances of a natural language, there’s no way to program a robot that exhibits consistent, safe behavior.
“A concept like ‘harm’ is really hard to program into a machine,” Gary Marcus, a cognitive psychologist, said at a World Science Festival panel. “It’s one thing to program in geometry or compound interest where we have precise, necessary and sufficient conditions — nobody has any idea how to, in a generalized way, get a machine to recognize something like ‘harm’ or ‘justice.’”
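Marcus’ contrast can be made concrete. Compound interest has precise, necessary and sufficient conditions; “harm” does not, so any naive encoding of it collapses on context. The keyword check below is an illustrative assumption, not a real safety technique, and the examples show exactly how it fails.

```python
# Contrast from the quote above: compound interest is fully specifiable,
# while "harm" has no general, machine-checkable definition. The keyword
# test below is a deliberately naive stand-in to show why.

def compound_interest(principal: float, rate: float, years: int) -> float:
    """Precise: every input maps to exactly one correct answer."""
    return principal * (1 + rate) ** years

def causes_harm(action: str) -> bool:
    """A naive encoding of 'harm' as a keyword match. It ignores context,
    so it flags safe actions and misses harmful ones."""
    return "hit" in action.lower()

print(round(compound_interest(1000, 0.05, 10), 2))  # 1628.89
print(causes_harm("hit the brakes"))    # True, yet braking *prevents* harm
print(causes_harm("withhold insulin"))  # False, yet this *causes* harm
```

The math function is correct for all inputs; the harm predicate is wrong in both directions, which is Marcus’ point about concepts like “harm” and “justice” resisting generalized formalization.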
Riddled with Moral Dilemmas
The laws assume a level of comprehension and judgment that current AI and robotics do not have and may never possess. To translate moral philosophy into a robot, you would have to encode ethical principles and decision-making processes with defined boundary conditions into algorithms built to handle complex real-world scenarios, all while continuously learning and adapting to new contexts and norms.
Misinterpreting the word “human” while coding a robot could, for example, lead to catastrophic results. If a robot were to somehow incorporate real-world ethnic cleansing campaigns into its programming, Singer said, then it may only recognize people of a certain group as human. “They would be following the laws,” he added, “but still be carrying out genocide.”
Inaccurate in Light of Today’s Robots
Military robots have become essential to modern warfare. Blatantly defying Asimov’s First Law, these machines use AI-driven software, algorithms and sensors to deliver targeted attacks with onboard weapons. Sharp criticism has arisen over the use of AI-powered weaponry, though the development of such systems is likely to continue.
Even when they’re not designed for combat, robots can kill by accident. The first recorded incident took place in 1979, when a Ford Motor Company employee was killed instantly by a one-ton machine that restarted while he was inside it.
Consumers are not immune to fatal mishaps involving robots, either. In 2025 alone, the U.S. National Highway Traffic Safety Administration (NHTSA) counted more than 1,000 crashes involving vehicles equipped with automated driving systems (ADS).
Are There Real-World Laws of Robotics?
Today, there are no universal laws of robotics that are enforceable in the real world. Robot-related policy — which shares notable crossover with automation and AI — is a hodgepodge of recommendations, guidelines and ethical frameworks in progress that are localized to specific states, countries or regions.
However, that doesn’t mean there isn’t a call for more regulation.
“Regulation is necessary,” Emma Ruttkamp-Bloem, an AI ethics researcher at the Centre for Artificial Intelligence Research who also serves on the United Nations AI Advisory Body, told Built In. AI-driven technology has the potential to widen human inequality and transgress privacy rights, Ruttkamp-Bloem said. It can also be used to erode human autonomy and manipulate end users into decisions based on disinformation, further stoking social and political instability.
In the United States, some federal regulations regarding automation and AI can be applied to robotic applications. For example, standards from the Occupational Safety and Health Administration (OSHA) help protect employees working alongside industrial robots and in-field cobots, and the Federal Aviation Administration (FAA) oversees the use of drones, a class of robots.
Sweeping AI legislation is being passed abroad, too. China, for example, is seeking to lead the AI industry while imposing specific regulations. Additionally, the European Union’s AI Act, described as the “first comprehensive regulation on AI by a major regulator anywhere,” bans systems that create an unacceptable risk and subjects high-risk AI-powered applications to specific legal requirements.
Below are some standout policies paving the way for universal robot-related regulations.
EPSRC Principles of Robotics (2011)
The Engineering and Physical Sciences Research Council and the Arts and Humanities Research Council of the United Kingdom published a set of five rules along with seven “high-level messages” to promote responsible practices in robotics research. Directed at designers, builders and robot users, these principles were designed to fast-track the incorporation of robots as legal products into UK society. They determine that humans — not robots — are to be held liable as the responsible party, maintaining that robots are tools used to assist humans. A robot should be designed to protect itself, and must not be programmed to kill, harm or deceive.
“These principles are the closest we get to actual advice on taking action [now],” Andra Keay, a roboticist and managing director at Silicon Valley Robotics, wrote in an ARM blog post. “Actions will almost certainly differ from place to place … but we can still get started right away.”
The EU Artificial Intelligence Act (2024)
The European Union’s Artificial Intelligence Act is the world’s first major law for regulating AI. It creates a framework that regulates AI systems according to the potential risk they pose, divided into four categories: minimal, limited, high and unacceptable risk. Manipulative or exploitative systems, like governmental social scoring, are banned outright, whereas those posing lighter risks may face no intervention or only be required to label AI-generated content on their platform.
The act regulates AI system providers, and exempts products used for military, research and non-professional purposes.
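The act’s tiered structure amounts to a lookup from risk category to obligation. The sketch below uses the four tier names from the act; the obligation summaries and example are paraphrases for illustration, not legal text.

```python
# Sketch of the EU AI Act's four-tier risk framework. Tier names come from
# the act; obligation summaries are illustrative paraphrases, not legal text.

RISK_TIERS = {
    "unacceptable": "banned outright (e.g., governmental social scoring)",
    "high": "subject to specific legal requirements before deployment",
    "limited": "transparency duties, such as labeling AI-generated content",
    "minimal": "no additional obligations",
}

def obligation_for(tier: str) -> str:
    """Map a risk tier to its (paraphrased) regulatory consequence."""
    return RISK_TIERS.get(tier.lower(), "unknown tier")

print(obligation_for("unacceptable"))
print(obligation_for("limited"))
```

The key design point of the act mirrors this structure: obligations scale with assessed risk rather than applying uniformly to every AI system.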
Google’s Robot Constitution (2024)
Google’s team at DeepMind, an AI-focused research lab, created a set of guidelines as part of its Robot Constitution to ensure the safety and ethical operation of AI-powered robots. Inspired by Isaac Asimov’s Three Laws of Robotics, it includes safety-focused prompts that instruct robots to avoid tasks involving humans, animals, sharp objects and electrical appliances. This constitution is part of the AutoRT system, which uses computer vision and large language models to help robots gauge their environments to make safe, appropriate decisions. It also incorporates traditional safety measures, such as automatic force limits and a physical kill switch, to enhance human-robot interactions.
UNESCO’s Report on Robotics (2017)
In partnership with the World Commission on Ethics of Scientific Knowledge and Technology, the United Nations Educational, Scientific and Cultural Organization published its Report on Robotics, which includes a series of recommendations for human-robot relations. In it, the organizations suggest a code of ethics at every level — from conception to fabrication and usage — as well as national and international conventions that are regularly updated at pace with advancing tech. These principles should prioritize values such as human dignity, autonomy and privacy, and include a regulatory framework of accountability, particularly for autonomous “cognitive” robotics, whether liability is tagged to the manufacturer or end user.
U.S.’s Blueprint for an AI Bill of Rights (2022)
Created by the White House’s Office of Science and Technology Policy under the Biden administration, the AI Bill of Rights, officially the Blueprint for an AI Bill of Rights, outlined five core principles to guide the responsible design and use of AI. Published in 2022, it was developed in collaboration with academics, human rights groups, the public and the leaders of major tech companies.
The document outlined five principles aimed at making AI systems safer, less discriminatory and more transparent, addressing potential civil rights harms in areas such as hiring, education, healthcare and financial services. The principles called for safe and effective AI systems that avoid algorithmic discrimination, protect data privacy and label automated content. The blueprint also stated that users should be able to opt out of AI-driven systems when a human-run alternative is preferred.
While the AI Bill of Rights served as a framework to inform AI policy and practice, it was not legally binding, and the Trump administration has since rolled back much of the Biden-era AI policy.
Frequently Asked Questions
What are the three laws of robotics?
The Three Laws of Robotics were introduced by science fiction writer Isaac Asimov. They are:
- A robot may not injure a human or, through inaction, allow a human to come to harm.
- A robot must obey orders given to it by humans except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Is there a fourth law of robotics?
Yes. The fourth law of robotics is referred to as the “Zeroth Law” and was made to supersede the original three laws. It states that “a robot may not harm humanity, or, by inaction, allow humanity to come to harm,” as a way to avoid unintentional mass endangerment.
Who created the three laws of robotics?
Isaac Asimov, a science fiction writer and biochemist, originally introduced the Three Laws of Robotics in his 1942 short story “Runaround.”
Do the laws of robotics apply to AI?
Asimov’s Three Laws of Robotics are fictional plot devices, so neither robots nor AI systems must follow these laws in real life. While the laws can be used to evaluate AI systems, they don’t always apply to artificial intelligence in the same way that they do to robots.
