Will This Election Year Be a Turning Point for AI Regulation?

As Americans prepare to head for the ballot box, AI regulation is at a crossroads.

Written by Ellen Glover
Published on Feb. 29, 2024

To date, the United States government’s approach to AI regulation has been piecemeal and incomplete — a patchwork of policies, best practices and industry-specific rules. Congress has made several attempts to establish more robust and sweeping legislation, but it has largely failed, leaving no laws in place that specifically limit the use of artificial intelligence or regulate its risks.

Now, in an election year, the path forward for AI regulation in the U.S. remains uncertain. While both political parties agree that AI must be controlled and monitored for our safety, the specifics of what exactly that entails are still up for debate, and a lack of Congressional action could hinder progress that’s already been made.

 

Calls for AI Regulation Are Increasing

Thanks to the popularity of artificial intelligence tools like ChatGPT, both the promise and perils of generative AI are front and center, sparking widespread concerns around discrimination, copyright infringement and misinformation (particularly as it relates to elections). 

Consequently, demand for AI regulation in the United States has never been higher. In 2023, more than 450 organizations — many of them top players like OpenAI and Anthropic — lobbied the federal government on artificial intelligence issues. This marks a 185 percent increase from 2022, according to federal lobbying disclosures analyzed by the nonprofit organization OpenSecrets. And a recent poll conducted by the AI Policy Institute found that a majority of voters prefer candidates who support regulating AI.

Amid this pressure, lawmakers have introduced a “tsunami” of new legislation, said Dominique Shelton Leipzig, an AI governance expert and author. Dozens of bills were proposed in 2023 and new ones are being created “on the daily,” Leipzig told Built In, addressing everything from the use of AI in nuclear launch decisions to the watermarking of AI-generated content. 

And these bills are coming from both Republicans and Democrats, which makes artificial intelligence a rare unifying issue for an otherwise deeply divided Congress.

“There’s a united front here,” Leipzig said. “I’ve seen collective bipartisan support for guardrails around AI.”

More on Tech and the Law: Here’s a Breakdown of U.S. Data Privacy Laws

 

Political Parties Disagree on What to Prioritize

Still, bipartisan enthusiasm for AI regulation may not be enough to push any bills across the finish line before election day. With mere months to go until Americans head to the ballot box, Congress does not appear ready to pass meaningful legislation on AI, according to a recent CNN report, and progress will likely slow down as legislators gear up for reelection.

Artificial intelligence isn’t necessarily a major culture-war issue, but it does have a slight partisan divide, which may be hindering Congress’s progress, Ravit Dotan, an AI ethics expert who closely monitors AI regulation, told Built In.

According to her research, bills sponsored by Democrats tend to focus on things like general AI ethics and fairness, while bills sponsored by Republicans are more focused on data privacy rights and national security.

The AI Policy Institute poll also detected a party divide among voters. Republicans, as the purported party of the free market, were generally less favorable toward regulating AI than their Democratic counterparts. But Republicans also took a more skeptical view of the technology than Democrats, with a majority saying it would be bad for “working class people.”

Issues like hiring discrimination, data privacy and job security are all hot-button concerns when it comes to artificial intelligence. These political nuances will inevitably affect Congress’s ability to debate AI-related matters in a measured way, or to pass the bipartisan bills being floated in the Senate right now.

Recommended Reading: What the Next President’s Economic Policies Will Mean for Tech

 

As Congress Stalls, Others Pick Up the Slack

While Congress remains stalled, federal agencies and departments have been left to pick up the slack, relying on existing laws to implement guidelines and initiatives for the use of AI in the industries under their jurisdictions. And President Biden has taken AI regulation just about as far as it can go within his powers, having issued an executive order in October 2023.

Companies are also self-regulating, imposing their own rules for the responsible and ethical development and deployment of AI in the absence of legislation. But this, of course, is not a complete answer to the problem.

“Comprehensive regulation cannot be replaced by private industry using voluntary codes of conduct or industry-led best practices,” Evi Fuelle, head of policy at AI governance company Credo AI, told Built In. “While valuable, these are no replacement for mandatory safeguards and guardrails set by a government entity responsible for the well-being of its citizens.”

Related Reading: What Is Ethical AI?

 

Elections Could Change the Future of AI Regulation

So far, all of the nationwide action on AI regulation has come exclusively from the executive branch. If Congress cannot pass legislation this year, the guidelines the Biden administration has provided to regulators could be overturned on day one of a new administration. Without congressional support, those AI policies are vulnerable to the whims of another president. Even the executive order could be edited or completely discarded by a future administration.

Biden appears to have a moderate stance on the technology, allowing AI companies to continue their work largely undisturbed while imposing some modest rules. His executive order seeks to mitigate risks like bias and privacy violations, building off the non-enforceable AI Bill of Rights the White House published in 2022. But it mostly focuses on fostering innovation, with few explicit restrictions on AI developers.

If Biden is reelected, “I assume we will continue the same trajectory,” Dotan said.

That trajectory would likely change under a new president, though.

It is not clear where Republican presidential candidate Nikki Haley stands on AI regulation. As president, Donald Trump released his own AI executive order, which many criticized as not robust enough, though the technology has changed a lot since then. That order mainly focused on the need for the U.S. to improve its AI research and development, mentioning the importance of maintaining civil liberties only in passing — all of which seems to be in keeping with his larger goal of promoting U.S. business growth.

“If Trump becomes the Republican candidate, and if he gets elected, I imagine that some AI regulation is going to continue,” Dotan said. But what that could specifically look like remains to be seen, she added. “My impression is that he doesn’t have a super firm opinion yet, which is all the more reason to make it a huge issue during the election.”
