What Would a Harris Presidency Mean for AI Regulation?

The Democratic nominee’s vision for AI policy tries to balance safety and progress.

Written by Ellen Glover
Published on Sep. 23, 2024

As the Democratic nominee for president, Kamala Harris is in a position to continue shaping AI regulation at a critical time for the industry. The unprecedented growth in the use of artificial intelligence is fueling concerns about privacy, bias and misinformation, making AI regulation a key issue for U.S. policymakers across the political spectrum — especially the next commander in chief.


What Has Biden’s AI Policy Been?

Up to this point, President Joe Biden has taken the lead on regulating AI at the federal level, creating a policy framework that aims to encourage responsible AI innovation while protecting consumers from discrimination, privacy infringements and other harms.

If Harris wins the presidency this November, she is expected to continue this approach, recognizing the positive potential of the technology while also being willing to regulate it.


What Is Harris’ Track Record on AI Policy?

In her time as vice president, Harris has been the Biden administration’s unofficial “AI czar.” She obtained voluntary commitments from 15 top industry leaders to ensure safe, secure and transparent AI development. She was the most senior official behind the release of the Blueprint for an AI Bill of Rights, which outlines principles for the ethical design and use of AI. And she played a key role in crafting and implementing a White House executive order governing how the federal government uses and develops AI, addressing issues ranging from deepfakes to AI-driven job loss.

Tech regulation was a priority for Harris long before she became vice president. When she was district attorney of San Francisco and attorney general of California, Harris pushed for laws against cyberbullying and took tech platforms to task over the nonconsensual spread of nude images. She also brokered an agreement with tech giants like Amazon and Apple to strengthen privacy protections for users.

“Harris is engaged with the technology community. I think that’s really beneficial to understanding what is being developed,” Evi Fuelle, director of policy at AI governance company Credo AI, told Built In. “Being able to address privacy risks, being able to address cybersecurity risks, it’s all part of having comprehensive AI governance.”

That said, Harris has forged significant ties with Silicon Valley leaders throughout her career. During her campaigns for AG and senator, as well as her first run for the White House in 2020, she received money from executives at major tech companies like Google, Microsoft and Facebook (now Meta). This time around, leaders from OpenAI, Google and Netflix reportedly hosted a fundraiser for her, and dozens of tech executives have publicly endorsed her. Not to mention Tony West, Uber’s chief legal officer — and, incidentally, her brother-in-law — who took temporary leave to help with her campaign. 

All of this support has led some to question where Harris’ allegiance truly lies when it comes to tech regulation.

“She is close to industry insiders,” Bhaskar Chakravorti, dean of global business at Tufts University’s Fletcher School of Law and Diplomacy, told Built In. “To my mind all that means that she will be very cautious about making too many noises about reining in the industry.”

Related Reading: Why We Need AI Governance Now


How Is Harris Going to Regulate AI?

How exactly a potential Harris administration would address AI policy remains unclear, as the only mention of artificial intelligence on her official website focuses on maintaining economic competitiveness with China.

Still, Harris has stated her commitment to working with Congress to establish formal AI policies, despite the legislature’s ongoing difficulties in reaching any meaningful agreements. And she has emphasized the importance of protecting the public from AI-related risks while still encouraging innovation. She says she rejects the “false choice” between public safety and progress, and is focused on addressing AI’s more immediate harms rather than its long-term, existential threats.

“We hear a lot about what many refer to as the ‘race’ or competition with the United States and other countries in terms of R&D investment in AI, ensuring that AI innovation happens at whatever cost,” Fuelle said. “There is an effort on presidential candidate Harris’ part to reject that dichotomy, to say that there is a place for guardrails and there is a need for protecting the public and public interest in the context of AI, and not just innovating at all costs.”

Fuelle added: “Being measured is necessary when you’re approaching AI policy in general. And I think that a potential Harris presidency acknowledges that and respects that.”

To accomplish this “measured” approach, Harris would not have to start from scratch. The framework for U.S. AI governance has already been established by the Biden administration with the AI Bill of Rights blueprint and the executive order. Harris would mainly need to enforce these guidelines, a process she has already begun as vice president, and refine them so that they are clearer.

“There are a lot of ways it can go, even while sticking to the high-level of what the executive order was stating. There are still a lot of details to figure out,” Alon Yamin, CEO and co-founder of AI detection tool Copyleaks, told Built In. “I think a lot of what she does will be in the same spirit [as the executive order], but it will be interesting to see the details and how she approaches it.”


What Are Trump’s AI Policies?

The trajectory of AI regulation in the U.S. would likely change if Donald Trump is elected. The former president’s campaign platform pledges to repeal Biden’s “dangerous” AI executive order, claiming it “hinders AI Innovation” and “imposes Radical Leftwing ideas” on the technology. In its place, the platform calls for AI development “rooted in Free Speech and Human Flourishing.”

Some of Trump’s allies are reportedly drafting their own executive order, which looks different from the one Biden and Harris laid out. According to the Washington Post, the new order would launch a series of “Manhattan Projects” to develop AI-powered military technology, as well as establish several new “industry-led” agencies to evaluate AI models and secure those made by foreign adversaries. It would also roll back “unnecessary and burdensome regulations” and “make America first in AI,” the Post reported.

Related Reading: How Deepfakes Threaten the Integrity of the 2024 Election


Is AI Regulation a Partisan Issue?

Although AI regulation has emerged as a point of contention between Harris and Trump, the need for it has been a fairly unifying issue for politicians and their constituents up to this point. A recent poll conducted by the AI Policy Institute found that the majority of voters prefer candidates who support regulating AI, regardless of their party affiliation. And Republicans and Democrats alike have proposed hundreds of state and federal bills to establish safeguards around AI and monitor the technology.

“It’s unfortunate that what was a bipartisan issue is now being politicized,” Dominique Shelton Leipzig, an AI governance expert and author, told Built In. “This is something that is absolutely not political. It is a straightforward, bipartisan issue.”

No matter who wins, AI regulation in the United States stands at a watershed moment with this election. Artificial intelligence is becoming an integral part of everyday life — embedded in everything from healthcare to transportation. Soon, it will be seamlessly woven into nearly every aspect of society. The next president will play a crucial role in shaping how the country navigates this complex terrain for years to come.

“AI is going to be underpinning all of our lives,” Leipzig said. “We can’t afford to get this wrong.”
