UPDATED BY
Hal Koss | Mar 19, 2024

The AI Bill of Rights is a set of guidelines for the responsible design and use of artificial intelligence, created by the White House Office of Science and Technology Policy (OSTP) amid an ongoing global push to establish more regulations to govern AI.

Officially called the Blueprint for an AI Bill of Rights, the document, published in October of 2022, is the result of a collaboration between the OSTP, academics, human rights groups, the general public and even large companies like Microsoft and Google. It suggests ways to make AI more transparent, less discriminatory and safer to use. It also addresses the current and potential civil rights harms of AI, particularly in areas like hiring, education, health care, access to financial services and commercial surveillance.

What Is the AI Bill of Rights?

The White House’s Blueprint for an AI Bill of Rights outlines a set of principles to help responsibly guide the design and use of artificial intelligence.

Up to this point, Washington has largely failed to keep up with the rapidly advancing AI sector. But the AI Bill of Rights, which likely reflects the Biden administration’s approach to AI regulation, could be a sign of more government action to come.

Related Reading: What Is Responsible AI?

 

The 5 Principles of the AI Bill of Rights

The AI Bill of Rights includes a set of five broad principles to help guide the development, use and deployment of AI systems — all of which are designed with the civil rights of the American public in mind.

Each principle comes with examples and practical steps that companies, governments and other organizations can follow to incorporate these protections into their own policies and practices, as well as guidelines for the overall design of the technology.

 

1. Safe and Effective Systems

This principle says that everyone deserves protection from automated systems that are unsafe or ineffective. People should also be protected from “inappropriate or irrelevant” data use in the design, development and deployment of these systems, and from the compounded harm that comes from reusing that data.

To ensure this, the OSTP suggests that diverse groups of independent parties and domain experts be involved in the development of AI systems. These systems should undergo “pre-deployment testing, risk assessment and mitigation,” as well as ongoing monitoring, to make sure that they comply with existing domain-specific standards and that they aren’t being used beyond the scope of their intended application. Removing a system from use — or never deploying it to begin with — is also presented as an option. And all of the information gathered during these independent evaluations of a system’s safety and effectiveness should be made public when possible.

 

2. Algorithmic Discrimination Protections

Algorithmic discrimination occurs when an automated system treats certain people unfairly or unfavorably, often as a result of biased training data. AI models make predictions and judgments based on a foundation of data, and if that foundation contains prejudiced, distorted or incomplete information, the outputs can reflect and even magnify those flaws, leading to discrimination against certain people or groups in areas like employment, housing and access to public services. Depending on the circumstances, algorithmic discrimination may even be against the law.

To prevent this, the principle suggests that the people who develop and deploy AI systems take “proactive and continuous measures” to make sure those systems are designed fairly. These measures include equity assessments, using data that represents different groups of people and perspectives, making the system accessible to people with disabilities, testing for and addressing any biases that emerge, and maintaining clear organizational oversight. The principle also suggests that all independent evaluations and reports assessing the impact of an algorithm be written in a clear and understandable way, so it is easy to verify that these protections are being upheld.
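The Blueprint stops short of prescribing a specific testing method, but one simple illustration of the kind of check an equity assessment might include is comparing how often an automated system produces favorable outcomes for different groups. The Python sketch below is a minimal, hypothetical example; the group labels, audit data and 0.8 rule-of-thumb threshold are illustrative assumptions, not requirements taken from the document.

```python
# Illustrative sketch only: a minimal disparate-impact check of the kind an
# equity assessment might include. The groups, data and threshold below are
# hypothetical, not drawn from the Blueprint itself.
from collections import defaultdict

def selection_rates(decisions):
    """Share of favorable outcomes per group.

    `decisions` is a list of (group, selected) tuples, where `selected`
    is True when the automated system produced a favorable outcome.
    """
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        if selected:
            favorable[group] += 1
    return {group: favorable[group] / totals[group] for group in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.

    Values well below 1.0 (a common rule of thumb is 0.8) suggest the
    system may be disadvantaging one group and warrants a closer look.
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical audit log of (group, favorable_outcome) pairs.
audit_log = [("group_a", True), ("group_a", True), ("group_a", False),
             ("group_b", True), ("group_b", False), ("group_b", False)]

rates = selection_rates(audit_log)
print(rates)                          # per-group selection rates
print(disparate_impact_ratio(rates))  # flag for review if far below 0.8
```

In practice, a check like this would be run on an ongoing basis and paired with the independent evaluation and plain-language reporting the principle describes, rather than treated as a one-time test.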

 

3. Data Privacy

This principle states that everyone should have control over the personal data they generate online, and how it is collected and used by companies. It also says designers and developers should ask for permission in a clear and understandable way regarding the collection, use, access, transfer and deletion of people’s personal data — and respect their wishes “to the greatest extent possible.” When it is not possible, organizations need to have “alternative privacy by design” safeguards that prevent abusive data practices.

While all data should be protected, the OSTP says there should be “enhanced” protections for data in more sensitive areas like health, work, finance, criminal justice and education.

By extension, the principle states that continuous surveillance should not be used in education, work, housing or any other context where it is likely to limit people’s rights or opportunities. And if surveillance must be used, those systems should be subject to “heightened oversight.”

Keep Reading: Data Privacy Laws Every Company Should Know

 

4. Notice and Explanation

This principle states that people should be informed when an automated system is being used in a way that could affect them. They should also be given an explanation of how the system works, what role automation plays, why the system arrived at the decision it did, and who is responsible for the decisions it makes. The developers and operators of the system should provide all of this information in plain, easy-to-understand language, in a clear, timely and accessible manner. And if there are any significant changes to how the system is used, users should be notified.
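The principle does not mandate any particular technique for producing these explanations. As one hedged illustration only, an operator of a simple linear scoring system could report how much each input contributed to a specific decision; the scoring model, feature names, weights and threshold in the Python sketch below are entirely hypothetical.

```python
# Illustrative sketch only: one way an operator might generate a
# plain-language, per-decision explanation. The model, features and
# weights are hypothetical, not part of the Blueprint.

FEATURE_WEIGHTS = {              # hypothetical model coefficients
    "years_of_experience": 0.6,
    "credit_score_normalized": 0.3,
    "late_payments": -0.8,
}
DECISION_THRESHOLD = 1.0

def explain_decision(applicant):
    """Return the decision plus each input's contribution to it."""
    contributions = {
        name: weight * applicant[name]
        for name, weight in FEATURE_WEIGHTS.items()
    }
    score = sum(contributions.values())
    approved = score >= DECISION_THRESHOLD
    # Lead the explanation with the most influential factors.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = [f"Decision: {'approved' if approved else 'denied'} (score {score:.2f})"]
    for name, value in ranked:
        direction = "raised" if value > 0 else "lowered"
        lines.append(f"- {name.replace('_', ' ')} {direction} the score by {abs(value):.2f}")
    return "\n".join(lines)

print(explain_decision({
    "years_of_experience": 2,
    "credit_score_normalized": 0.9,
    "late_payments": 1,
}))
```

A real notice would also need to cover who is accountable for the decision and how the system is being used, which this toy example leaves out.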

 

5. Human Alternatives, Consideration and Fallback

If someone decides they’d rather opt out of an automated system in favor of a human alternative, the OSTP says they should be able to do that “where appropriate.” The decision of what is appropriate should be based on what is reasonable in a given context, and should prioritize ensuring accessibility and protection from especially harmful impacts. In some cases, a human or other alternative may even be required by law.

The principle also states that users should have timely access to a person if the AI system fails, produces an error, or generates an outcome the user wants to appeal or contest. This process should be “accessible, equitable, effective, maintained, accompanied by appropriate operator training and should not impose an unreasonable burden” on the user or the public.

Related Reading: AI Ethics: A Guide to Ethical AI

 

Is the AI Bill of Rights Enforceable?

No, the White House’s Blueprint for an AI Bill of Rights is not an enforceable, legally binding document. Rather, it is a “series of suggestions,” as Patrick Lin, a technology law researcher and author of Machine See, Machine Do, put it.

“I think it has the right idea, but it doesn’t carry the weight of the law,” Manasi Vartak, the founder and CEO of operational AI company Verta, told Built In. “I look at it as a first step to any sort of law. Now that we know what principles we want to protect, we can write a law that’s around that.”

 

Existing AI Legislation

Federal AI Laws

Currently, there is no federal legislation that explicitly limits the use of artificial intelligence, or protects citizens from its harms. However, some federal guidelines and protections do exist.

On October 30, 2023, President Joe Biden issued the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. This executive order requires developers of AI systems to share their safety test results with the U.S. government if those results show the AI could pose a risk to national security. It also requires several federal agencies to develop guidelines and standards for AI safety and security.

Even before the executive order was issued, several federal agencies had implemented guidelines and initiatives for their own responsible use of automated systems, including the Department of Defense, the U.S. Agency for International Development and the Equal Employment Opportunity Commission.

And at least a dozen agencies have issued some kind of guidance for the use of AI in the industries under their jurisdiction, including the Federal Trade Commission, the U.S. Copyright Office and the Food and Drug Administration. Various agencies have also published a joint statement on enforcing existing anti-discrimination laws against AI systems.

But like the AI Bill of Rights, most of these documents from government agencies offer only suggestions; they are not enforceable by law. And the ones that are backed by legislation simply reinforce laws that already exist around discrimination and fair use. They do not offer solutions or restrictions tailored to the unique challenges and dangers of artificial intelligence.

 

State and Local AI Laws

Things look different on the state level, where laws are slowly but steadily being created to tackle specific issues related to AI systems. Colorado enacted a law that regulates how insurers can use big data and AI-powered predictive models to protect consumers from unfair discrimination. California made it illegal for a person or entity to use a chatbot to sell someone something or influence their vote without disclosing that it’s a chatbot. And Illinois became the first state to enact restrictions on the use of AI in hiring.

Meanwhile, New York City passed a law in 2021 that requires companies to notify job applicants when a hiring algorithm is used and to have independent auditors check the technology annually for bias. The city, along with several others, has also sought to ban or restrict the use of facial recognition technology by police, with varying degrees of success.

“It’s a fragmented landscape,” Vartak said of AI regulation in the United States. “Pointed legislation I think will be states. But sweeping legislation does need to be at the federal government level.”

Still, 25 more states, along with Puerto Rico and Washington, D.C., introduced AI bills in 2023, with states like Connecticut and Texas establishing bodies to monitor local AI development. The pressure on the federal government to act will likely mount as states take AI regulation into their own hands.

 

AI Laws Outside of the U.S.

Countries outside of the U.S. are making more sweeping legislative moves. China, for example, is way ahead of the U.S. on the AI regulation front, imposing several laws addressing certain aspects of the development, deployment and use of AI systems. And the European Union is enacting “some of the most protective AI regulations that we’re seeing today,” according to Lin.

This has mainly come in the form of the E.U. AI Act, which was approved by the European Parliament in March 2024. Much as the E.U. became a global leader in data protection laws, the European Commission aims to set new standards for AI oversight in a bid to create what it refers to as “trustworthy AI.” The act organizes automated systems based on their risk to people’s health, safety and personal rights, and lays out regulations accordingly. So a system determined to be high-risk, such as facial recognition, will face far more oversight and regulation than one deemed minimal-risk, like a chatbot.

The AI Act also includes outright bans on automated systems that the E.U. has deemed to be unacceptable. “Things like biometric surveillance, emotion recognition and predictive policing — they’ve actually included amendments in their AI Act saying that these are specific applications that we don’t want to use at all,” Lin said.

Like the AI Bill of Rights in the U.S., the AI Act places a big emphasis on data governance — understanding what data was used to train an AI system, and continuously checking that model for any bias. But unlike the AI Bill of Rights, the AI Act is a set of rules with real consequences, not suggestions.

Other governments have ramped up their AI legislation as well, with countries like France, Italy and Japan taking action in response to issues surrounding OpenAI’s ChatGPT. While these new political efforts have resulted in everything from advisory initiatives to bans on AI tools, the momentum is likely to produce more clearly defined AI laws in various countries.

More on AI and the Law: AI-Generated Content and Copyright Law: What We Know

 

Are AI Laws Coming?

Since the release of the AI Bill of Rights, politicians, human rights organizations and tech innovators alike have called for more explicit AI regulations in the U.S. — perhaps indicating a more robust legal framework isn’t too far off.

Following California’s lead, a handful of states have enacted comprehensive consumer data privacy laws that give consumers the right to access and delete their personal information and to opt out of its sale. And several more states have proposed legislation on everything from the use of AI in sports betting to its use in therapy and other mental health services.

At the federal level, Congress has proposed a privacy bill called the American Data Privacy and Protection Act, or ADPPA, to regulate how organizations keep and use consumer data, which could directly affect how companies develop and use artificial intelligence. The bill received bipartisan support but has since stalled in Congress, and it remains to be seen whether it will advance. Another proposed bill requiring audits of AI failed to garner support and was rejected.


Meanwhile, in another branch of the U.S. government, President Biden issued an executive order in October 2023 — roughly a year after publishing the AI Bill of Rights — to establish more concrete guidelines around AI safety and security. This renewed effort to rein in the AI sector and ensure AI products are designed with consumers’ rights in mind comes as the U.S. tries to catch up with AI regulation elsewhere.

While there is political will to regulate AI, the path forward will be a difficult one. AI models can be so complex that even the experts don’t understand why they make the decisions they do. How these models arrive at their conclusions, what data they use and how trustworthy their results are is not easy to figure out, which makes them even harder to regulate.

And even if the U.S. does manage to regulate artificial intelligence on a federal level, the work will never truly be over.

“AI is evolving so rapidly. With any laws that we write today, how are they going to apply in five years?” Vartak said. “We’re going to have to be nimble in how we regulate the new changes that might be coming.”

 

Frequently Asked Questions

What is a summary of the AI Bill of Rights?

The AI Bill of Rights is a set of principles designed to protect people’s privacy and civil rights by making the development of AI tools more transparent and ensuring they are monitored for inaccurate and biased data, among other precautions.

Why is the AI Bill of Rights important?

The AI Bill of Rights is important because it encourages companies to account for biased data, user consent, data security and other factors while building AI tools, so consumers may have more positive and safer experiences when interacting with AI technology.
