Will This Election Year Be a Turning Point for AI Regulation?

As Americans prepare to head for the ballot box, AI regulation is at a crossroads.

Written by Ellen Glover
Updated by Ellen Glover | Jul 17, 2024

To date, the United States government’s approach to AI regulation has been swift but incomplete: a patchwork of policies, best practices and industry-specific rules. Congress has made several attempts to establish more robust and sweeping legislation, but those attempts have largely failed, leaving no laws in place that specifically limit the use of artificial intelligence or regulate its risks.

Now, in an election year, the path forward for AI regulation in the U.S. remains uncertain. While both political parties agree that AI must be controlled and monitored for our safety, the specifics of what exactly that entails are still up for debate, and a lack of Congressional action could hinder progress that’s already been made.

 

Calls for AI Regulation Are Increasing

Thanks to the popularity of artificial intelligence tools like ChatGPT, both the promise and perils of generative AI are front and center, sparking widespread concerns around discrimination, copyright infringement and misinformation (particularly as it relates to elections). 

Consequently, demand for AI regulation in the United States has never been higher. In 2023, more than 450 organizations, many of them top players like OpenAI and Anthropic, lobbied the federal government on artificial intelligence issues. That marks a 185 percent increase from 2022, according to federal lobbying disclosures analyzed by the nonprofit OpenSecrets. And a recent poll conducted by the AI Policy Institute found that a majority of voters prefer candidates who support regulating AI.

Amid this pressure, lawmakers have introduced a “tsunami” of new legislation, said Dominique Shelton Leipzig, an AI governance expert and author. Dozens of bills were proposed in 2023 and new ones are being created “on the daily,” Leipzig told Built In, addressing everything from the use of AI in nuclear launch decisions to the watermarking of AI-generated content. 

And these bills are coming from both Republicans and Democrats, which makes artificial intelligence a rare unifying issue for an otherwise deeply divided Congress.

“There’s a united front here,” Leipzig said. “I’ve seen collective bipartisan support for guardrails around AI.”

More on Tech and the Law: Here’s a Breakdown of U.S. Data Privacy Laws

 

Political Parties Disagree on What to Prioritize

Still, bipartisan enthusiasm for AI regulation may not be enough to push any bills across the finish line before election day. With mere months to go until Americans head to the ballot box, Congress does not appear ready to pass meaningful legislation on AI, according to a recent CNN report, and progress will likely slow down as legislators gear up for reelection.

Artificial intelligence isn’t necessarily a major culture-war issue, but it does have a slight partisan divide, which may be hindering Congress’s progress, Ravit Dotan, an AI ethics expert who closely monitors AI regulation, told Built In.

According to her research, bills sponsored by Democrats tend to focus on things like general AI ethics and fairness, while bills sponsored by Republicans are more focused on data privacy rights and national security.

The AI Policy Institute poll also detected a party divide among voters. Republicans, the purported party of the free market, were generally less favorable toward regulating AI than their Democratic counterparts. But they also took a more skeptical view of the technology, with a majority saying it would be bad for “working class people.”

Issues like hiring discrimination, data privacy and job security are all hot-button concerns when it comes to artificial intelligence. These political nuances will inevitably affect Congress’s ability to have measured debates about AI-related matters, or to pass the bipartisan bills being floated in the Senate right now.

Recommended Reading: What the Next President’s Economic Policies Will Mean for Tech

 

As Congress Stalls, Others Pick Up the Slack

While Congress remains stalled, federal agencies and departments have been left to pick up the slack, relying on existing laws to implement guidelines and initiatives for the use of AI in the industries under their jurisdictions. And President Biden has taken AI regulation just about as far as it can go within his powers, having signed an executive order on AI in October 2023.

Companies are also self-regulating, imposing their own rules for the responsible and ethical development and deployment of AI in the absence of legislation. But this, of course, is not a complete answer to the problem.

“Comprehensive regulation cannot be replaced by private industry using voluntary codes of conduct or industry-led best practices,” Evi Fuelle, head of policy at AI governance company Credo AI, told Built In. “While valuable, these are no replacement for mandatory safeguards and guardrails set by a government entity responsible for the well-being of its citizens.”

Related Reading: What Is Ethical AI?

 

Elections Could Change the Future of AI Regulation

If Congress cannot pass legislation this year, the guidelines the Biden administration has provided to regulators could be overturned on day one of the next presidency. So far, all of the nationwide actions on AI regulation have come exclusively from the executive branch, and without congressional backing, the administration’s AI policies are vulnerable to the whims of another president. Even the executive order could be edited or completely discarded by a future administration.

Biden appears to have a moderate stance on the technology, allowing AI companies to continue their work largely undisturbed while imposing some modest rules. His executive order seeks to mitigate risks related to bias and privacy, building on the non-enforceable AI Bill of Rights the White House published in 2022. But it mostly focuses on fostering innovation, with few explicit restrictions on AI developers.

If Biden is reelected, “I assume we will continue the same trajectory,” Dotan said.

That trajectory will likely change if Donald Trump is elected, though.

Some of the former president’s allies are drafting their own executive order, which would take a different approach than the one Biden laid out. According to The Washington Post, the new order would launch a series of “Manhattan Projects” to develop AI-powered military technology and establish several “industry-led” agencies to evaluate AI models and secure systems from foreign adversaries. It would roll back “unnecessary and burdensome regulations” and instead “make America first in AI,” the Post reported.

Additionally, the Republican National Committee said in a recent policy roadmap that it plans to repeal Biden’s “dangerous” AI executive order, claiming it “hinders AI Innovation and imposes Radical Leftwing ideas” on the technology. “In its place, Republicans support AI Development rooted in Free Speech and Human Flourishing,” the statement continued.

While Trump has received endorsements from key tech-industry figures, including Elon Musk and venture capitalists Marc Andreessen and Ben Horowitz, he has not laid out any official AI regulation plans of his own, nor has he publicly commented on the new executive order or the RNC’s plans. When asked for comment by the Post, his campaign linked to a 2023 blog post stating that “unless a message is coming directly from President Trump or an authorized member of his campaign team, no aspect of future presidential staffing or policy announcements should be deemed official.”
