Why We Need AI Governance Now

Only by establishing robust governance mechanisms can we harness AI’s advantages while mitigating its potential liabilities.

Written by Mike Hyzy
Published on May 14, 2024

In the not-too-distant future, artificial intelligence could transform industries, reshape our daily lives and alter the balance of global power. 

The scale of AI adoption is unprecedented, much faster than any other disruptive innovation. ChatGPT needed a mere five days to reach 1 million users, compared to two-and-a-half months for Instagram and 10 months for Facebook.

Considering the speed at which this technology is evolving, the governance of AI has emerged as a critical challenge. Effective oversight mechanisms are essential to harness AI’s potential while mitigating risks. The complexity of AI governance, however, requires a delicate balance between innovation and regulation.

Challenges of Adopting AI Governance

  • Governments struggle to keep up with the rapidly evolving nature of AI.
  • Expect political battles over what should be governed and what should be an inherent constitutional freedom.
  • AI is a complex technology, making it difficult to explain to those not in the know.
  • Finding and enforcing a consensus on AI’s societal goals and ethical principles is an extraordinary global challenge.



What Is AI Governance?

AI governance is the established set of guidelines, policies and practices that govern the ethical, legal and societal implications of AI and machine learning technology. The purpose of establishing these guardrails is to ensure that emerging technologies are developed and used responsibly, without harm to individuals or society.

The United States and the European Union have varying degrees of policies in place to establish the framework for AI governance.

The AI Act, passed by the E.U. parliament in March, is the first-ever comprehensive legal framework on AI. It includes E.U.-wide rules on data quality, transparency, human oversight and accountability. The act is part of a larger strategy encompassing the AI Innovation Package and the Coordinated Plan on Artificial Intelligence, promising to fortify AI safety and uphold fundamental rights across Europe. Overseeing its enforcement is the new European AI Office.

Meanwhile, on the other side of the pond, the best the U.S. government has managed is pending draft legislation that would regulate how the federal government and its agencies can use AI.

President Biden signed an executive order on “the safe, secure and trustworthy development and use of artificial intelligence,” and while it includes some good points, it’s fluffy: It focuses mostly on the internal workings of the government and assigns no accountability for anything resembling enforcement.

The absence of robust federal solutions has led some states to take up the issue on their own:

  • Eight states have enacted legislation.
  • Nine states have both enacted and proposed legislation.
  • 14 states have proposed legislation.
  • 19 states have no legislation proposed.

Our representatives in the U.S. government have a responsibility to draft reasonable, responsible legislation to help govern artificial intelligence. When the rogue robots are tearing through our society, at least we will know we can rely on our European neighbors for help since they were able to responsibly put guidelines in place.


The Necessity of AI Governance

AI governance is crucial to ensure that AI technologies align with organizational strategies, objectives and values while fulfilling legal requirements and ethical principles. Governance mechanisms are essential to managing risks, upholding ethical standards, protecting public welfare and preventing the misuse of AI. 

Humans can create a holistic approach through regulatory frameworks, industry standards and organizational policies to ensure we manage the machines. 

Governments can establish laws and regulations that set boundaries for AI development and deployment, such as data protection and anti-weaponization laws. The European Union framework above can be replicated and versions of it used across the world. 

Industry bodies can develop standards and best practices to guide responsible AI development and mitigate harms like inequality and job displacement. An excellent example of this is the actors’ union taking a stand and negotiating provisions that help protect actors from having their likenesses exploited without their knowledge or fair compensation. 

Effective AI governance requires ongoing monitoring, assessment and adjustment as the technology evolves. History offers cautionary tales of the consequences of inadequate oversight, from discriminatory algorithms to privacy violations, underscoring the critical importance of proactive governance measures.


Challenges of AI Governance

Recent governance battles at AI disruptors like OpenAI underscore the pressing need for robust frameworks that can navigate the ethical and operational complexities posed by AI technologies. Governance is complex due to the technology’s rapid evolution, global impact and the intrinsic unpredictability of machine learning processes.

Because of the speed at which AI technology is advancing, it is going to be hard for regulatory frameworks to keep up. Government has never been known for moving quickly. There are a lot of gray areas, and coming to a consensus on what is governed and what are inherent constitutional freedoms will be a political battle.  

AI is also extremely complex. Go try to explain deep learning models to your representative — the only math they know is how to count up their fundraising dollars. The complexity of the technology we are dealing with makes it a challenge for transparency and accountability.

The Turku School of Economics review on AI governance identifies key challenges in operationalizing governance frameworks that are adaptable and inclusive, ensuring that AI’s development is aligned with broader societal goals and ethical principles. Having a consensus of what those are, and enforcing them, will be one of the extraordinary challenges for humanity.


Potential Downsides of Overregulation

While the need for governance is clear, overregulation poses its own risks. Excessive controls could stifle innovation, hinder AI’s economic potential and create a labyrinth of bureaucratic red tape that slows progress.

Overregulation particularly impacts technologies like generative pre-trained transformers, or GPTs, where the agility and creativity inherent to AI can be significantly curtailed.

GPTs thrive on vast data sets and flexible algorithms to generate responses that are contextually relevant and innovatively human-like. Excessive regulatory constraints limit the scope of data these models can access and use, potentially reducing their effectiveness and adaptability. 

While regulations are necessary to manage risks and ensure safety, they must be designed to encourage continued maturation and advancement in AI technologies rather than inhibit them. The crux lies in finding a balance that fosters innovation while safeguarding ethical and societal norms. The optimal path is one of balance, critical to gaining AI’s potential benefits despite its risks.



Possible Futures Shaped by AI

In The Signals Are Talking, Amy Webb illustrates three categories of future scenarios: probable, plausible and possible. 

In a plausible scenario, AI revolutionizes healthcare, sustainability and education, improving the quality of life for all.

The more probable future, however, is that AI’s benefits are distributed to humanity unevenly, exacerbating inequalities and raising privacy concerns. There is also the possible catastrophic future where AI becomes a tool for oppression, with tyrannical governments using it for surveillance and control with the help of an artificial general intelligence army.

These scenarios highlight the critical importance of proactive AI governance to ensure that AI aligns with human values and benefits all of humanity. The trajectory of AI development will be shaped by the choices we make today.
