Updated by Ellen Glover | Nov 13, 2023

On October 30, 2023, President Joe Biden issued an executive order that establishes new standards for AI safety and innovation. Spanning more than 100 pages, the order addresses issues like algorithmic discrimination, deepfakes, AI-related job loss and national security threats. It also lays out provisions for encouraging the development of AI in the United States, including efforts to attract foreign talent to U.S. companies and laboratories.

This order comes at a time of immense pressure on the government to regulate the booming AI industry. In recent months, AI companies have testified before Congress, briefing lawmakers on the technology’s potential benefits and pitfalls. Meanwhile, researchers and experts have urged the federal government to pump the brakes on AI’s progress until we have a better understanding of its capabilities.

What Is the Executive Order on Safe, Secure and Trustworthy AI?

The Executive Order on Safe, Secure and Trustworthy AI is an executive order signed by President Joe Biden on October 30, 2023. It outlines a plan to regulate and monitor the risks and harms associated with artificial intelligence, and to promote innovation and competition in the industry.

With this executive order, President Biden appears to be taking a middle path, allowing AI companies to continue their work largely undisturbed while imposing some modest rules. It also directs several federal agencies to begin setting standards for how AI can and should be used responsibly across a spectrum of use cases, from warfare to healthcare.

“This is a roadmap,” Beth Simone Noveck, director of the Burnes Center for Social Change at Northeastern University, told Built In. “It’s a plan to make a plan around AI.”

 

What Is in the Executive Order on AI?

President Biden’s executive order on the safe, secure and trustworthy development and use of artificial intelligence builds on eight core principles:

1. Safety and Security

The order requires companies developing the most powerful AI systems to notify the federal government and share the results of their AI safety testing before releasing their models to the public. It also calls for effective labeling to help people identify AI-generated content.

2. Promoting Responsible Innovation, Competition and Collaboration 

In addition to calling for investment in AI education and research, the order acknowledges the need to address intellectual property questions that have become a source of conflict between content creators and the AI companies training models on available data. It also calls out the need to protect startups from established companies that might limit their access to data, infrastructure or computing power.

3. Supporting American Workers 

The order highlights the necessity of protecting workers from AI-powered surveillance and from implementations of AI that are unsafe, make human jobs worse or disrupt the labor force in ways that harm workers.

4. Protecting Equity and Civil Rights

Referencing the AI Bill of Rights published in 2022, the order reinforces the need to ensure AI systems do not cause or exacerbate bias and discrimination. To that end, it calls for regulation and ongoing oversight of AI systems, and for “engagement with affected communities” about those efforts.

5. Maintaining Consumer Protections

The order clarifies that companies cannot use AI to circumvent protections against “fraud, unintended bias, discrimination, infringements on privacy or other harms,” particularly in sectors like finance, housing, healthcare, education, transportation and the legal profession.

6. Protecting Privacy and Civil Liberties

Acknowledging the potential of AI systems to undermine existing privacy protections through data aggregation and inferences drawn across multiple data sets, the order calls for federal agencies to comply with existing laws in how they collect and use data, and to bolster their efforts to protect the data they collect.

7. Using Responsible AI to Help the Government Work Better

The order urges the government to recruit and train a diverse workforce of AI professionals who can responsibly use the technology to make the government more effective at serving the American public. It also requires every federal agency to appoint its own chief AI officer, who will be responsible for coordinating the agency’s use of AI, promoting AI innovation within the agency and managing the risks that arise from its use of AI.

8. Creating an International Framework for Responsible AI

Finally, the order calls for the U.S. government to work closely with other governments to promote AI safety and global standards for managing the risks of AI and protecting human rights.


What Is the Goal of the AI Executive Order?

1. Set Policy Direction for Federal Agencies

Primarily, the executive order sets policy direction for specific federal agencies and departments to regulate artificial intelligence in their respective fields. For example, the Department of Labor is responsible for developing best practices to help employers mitigate AI’s potential harms to their workers, and the Department of Commerce is tasked with identifying existing standards, tools and methods for detecting and labeling AI-generated content.

“[Agencies] know their own responsibilities better,” AI entrepreneur Victorio Pellicano said. “It’s inconceivable that the White House could direct it all the way down the chain.”

 

2. Encourage Responsible AI Innovation

More broadly, the order signals the Biden-Harris administration’s strategy for responsible AI innovation. It builds on the voluntary safety commitments made earlier this year by 15 AI companies, as well as the AI Bill of Rights, a set of non-enforceable guidelines for the responsible design and use of AI that the White House published last year. It also draws heavily on the comprehensive AI risk management framework that the National Institute of Standards and Technology (NIST) issued in January 2023.

“The posture of this is largely about risk mitigation,” Noveck said. “The document is designed to allay those fears and respond to the threats and the risks.”

The order also prioritizes AI innovation. It doesn’t place any explicit restrictions on AI companies themselves in terms of the kinds of models they can develop, how big they can be or what data can and cannot be used to train them. It doesn’t try to curb their use of copyrighted material as training data, although that has become a hot-button issue. And it doesn’t require them to register for any licenses or make them publicly disclose any proprietary information.

Until now, the U.S. government has taken a mostly hands-off approach to AI, especially compared to the EU and other governments. Attorney Colin S. Levy doesn’t expect that to change much, even in light of this executive order.

“In terms of balancing development, innovation and regulation, I think we’re definitely erring more on the development and innovation side,” Levy told Built In. “We want to allow businesses to continue to thrive, to develop things and experiment, but while also protecting consumers.”


Is the Executive Order Enforceable?

While executive orders are not the same as federal legislation passed by Congress, they are still enforceable by specific agencies and departments as directed by the executive branch.

But the specifics of what enforcement of this executive order will actually look like remain to be seen, since executive orders alone are limited, Tim Fist, a fellow with the Technology and National Security Program at CNAS, told Built In. “The government is pretty restricted in what it can do just with an executive order. You’re not going to be able to get any new powers for agencies. You need Congress for that.”

So if a big tech company were to disclose to the federal government that an AI model it’s developing poses national security risks, it is unclear how the Department of Homeland Security would go about stopping that model from being released to the public. The order lays out very broad principles and thematic ideas without specifying how they will be implemented or enforced, Levy said.

“Now that this order has been put out there, there certainly is going to be a lot of follow-up work at the government level to try to put into place some of these various systems and processes — teeth, if you will — that are missing from the executive order in terms of enforceability,” he explained.


What Comes After the Executive Order?

The order imposes a series of deadlines on federal agencies to issue their reports and guidelines for AI in the coming months.

The Biden-Harris administration is also establishing an AI Safety Institute within NIST, which will carry out the agency’s existing AI risk management framework by creating new guidelines, tools, benchmarks and best practices for evaluating and mitigating risk. And NIST has been tasked with developing new technical guidance that regulators can use while considering more stringent rules and enforcement measures going forward.

Whether this executive order will actually prompt congressional action is up for debate. So far, Congress has shied away from creating new laws surrounding AI, and attempts to regulate AI in the private sector have struggled to gain traction. But Levy expects Congress will eventually have to act to give this executive order more muscle. “How far those regulations go in terms of not just being drafted but actually being passed by Congress, I think is a very open question right now.”

Of course, there’s always the possibility of the order being edited — or jettisoned entirely — in the future. The U.S. is coming up on a presidential election, and a potential new president could undo the order, especially if they don’t think it is relevant or necessary anymore.

As artificial intelligence continues to permeate every aspect of our lives, the notion of maintaining a specialized set of rules may not make sense in the long run, Noveck said. “It may be superseded by something else over time. And that’s normal.” 
