How to Navigate AI Regulations to Balance Innovation and Compliance

As governments begin to implement binding frameworks for managing AI, organizations should take note of what the law will now require.

Written by Asher Lohman
Published on Nov. 01, 2024

On September 5, 2024, the US, UK and EU signed the first binding international treaty on artificial intelligence (AI). The treaty introduces legally binding principles aimed at protecting human rights, ensuring transparency (AI systems must clearly disclose how decisions are made and what data they use) and promoting responsible AI use. The agreement is poised to reshape the landscape for businesses that depend on AI technologies for operations and innovation. As AI continues to transform industries globally, the treaty's regulations will have far-reaching implications across multiple sectors.

Known as the Framework Convention on Artificial Intelligence, the treaty outlines key principles AI systems must follow, including protecting user data, complying with laws and maintaining transparency. Countries that sign the treaty are required to adopt or maintain appropriate measures that align with these principles. 

Although many AI safety agreements have emerged recently, most lack enforcement mechanisms for signatories that break their commitments. The treaty could serve as a model for countries crafting their own AI laws: the US is working on AI-related bills, the EU has passed major regulations, and the UK is considering legislation.

The AI Convention seeks to safeguard human rights for individuals impacted by AI systems, representing a significant milestone in global efforts to regulate the fast-evolving technology. 



Understanding AI’s Impact

AI is expected to significantly impact various sectors, particularly labor and employment. It will complement some jobs, replace others and even create new roles. If not managed responsibly, these changes could lead to economic and social challenges. To ensure a smooth transition, policymakers, employers and unions must tackle these critical areas.

Social Protection

Countries must implement robust safety nets, such as unemployment benefits and income support, to assist workers displaced by AI technologies.

Education and Skills Development

Likewise, both private and public organizations should invest in reskilling and upskilling programs, focusing on digital literacy, AI literacy, and specialized technical skills to prepare the workforce for AI-integrated roles.

Labor Regulations

Nations will need to update labor laws to accommodate emerging AI-driven job roles and establish guidelines for worker protection in automated environments.

Funding Transitions

Finally, governments should allocate resources to public and private initiatives that support training programs, educational partnerships, and research into the impact of AI on the workforce. This may include tax incentives for companies that invest in retraining employees and funding for educational institutions to offer AI-focused curricula.


Preparing Businesses for AI

For enterprises, this means that fostering AI literacy across all levels of the organization is crucial — not only to remain competitive but also to ensure compliance with evolving regulations. Companies will need to implement comprehensive AI training programs to upskill employees, promote ethical AI practices, and work closely with policymakers to align organizational goals with broader social responsibilities in the AI-driven economy.

Businesses heavily reliant on AI technologies, such as those in finance, healthcare, and manufacturing, must adapt their practices to comply with the new treaty framework. According to the OECD AI Policy Observatory, companies must meet obligations like safeguarding user data, maintaining transparency, and adhering to lawful practices. To mitigate potential disruptions, these industries should invest in AI governance frameworks, audit their existing systems, and assemble cross-functional teams that include legal, compliance and AI experts.

Companies can achieve compliance and build AI literacy by implementing structured, scalable programs tailored to diverse workforce needs. To start, they can collaborate with external consultants who specialize in AI governance and data education, leveraging their expertise to create robust training programs. This helps bridge the knowledge gap, especially in companies lacking in-house AI or pedagogical expertise.

Additionally, integrating AI tools with existing systems requires careful planning and technical alignment; partnering with experienced consultancies ensures seamless integration. By establishing cross-functional AI literacy teams — including legal, compliance and AI experts — companies can continually assess risks, refine strategies, and stay compliant in a rapidly evolving regulatory environment.


Navigating Global AI Compliance and Key Challenges

One of the greatest challenges for global companies will be navigating the varying regulatory landscapes across different jurisdictions. With AI laws already enacted in the EU, bills under development in the US, and countries like the UK considering their own AI regulations, businesses must be prepared to address a patchwork of requirements. According to the World Economic Forum, this fragmented regulatory environment may complicate compliance, particularly for multinational corporations with operations spanning multiple regions.

Specific challenges in complying with the AI treaty’s regulations include these key areas.

Navigating Diverse Regulatory Requirements

Different regions have their own AI regulations. The EU, for instance, focuses on data protection and transparency, while the US emphasizes innovation. For international companies, understanding and complying with these varied and complex requirements can be resource-intensive: they demand legal expertise, tailored compliance processes, employee training, technology adaptation and ongoing monitoring, and non-compliance carries potential penalties, all of which can significantly strain financial and human resources.

Adapting to Different Compliance Standards

AI compliance standards — unlike regulations, which are mandatory — are voluntary benchmarks developed by industry groups. These standards vary widely, with some regions imposing strict guidelines on fairness and transparency, while others are less stringent. Companies must adapt their practices and develop strategies to meet these diverse compliance standards effectively.

Managing Cross-Border Data Flows

AI systems depend on data that crosses international borders. Complying with stringent regulations while maintaining operational efficiency poses a significant challenge for global companies.

Ensuring AI Governance

Effective AI governance requires robust monitoring, documentation, bias management and data privacy. Companies should implement comprehensive frameworks that include clearly defined policies for algorithm monitoring, thorough documentation of decision-making processes, and strategies for identifying and mitigating bias. Additionally, these frameworks must incorporate strong data privacy measures and regular audits to ensure compliance with varying regulatory expectations while promoting ethical AI practices.
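To make these governance components more concrete, here is a minimal sketch of what decision logging and a crude fairness check might look like in code. This is a hypothetical illustration, not part of the treaty or any named framework: the function names, record fields and the approval-rate metric are assumptions chosen for clarity.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(model_name, model_version, features, decision, audit_log):
    """Append an audit record for one automated decision.

    Hashing the raw inputs makes the record traceable without storing
    personal data directly -- a simple data-privacy measure.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "version": model_version,
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "decision": decision,
    }
    audit_log.append(record)
    return record

def approval_rate_gap(decisions, groups, positive="approve"):
    """Crude bias indicator: the spread in approval rates across groups.

    A large gap flags a decision process for closer human review.
    """
    counts = {}
    for decision, group in zip(decisions, groups):
        hits, total = counts.get(group, (0, 0))
        counts[group] = (hits + (decision == positive), total + 1)
    rates = {g: hits / total for g, (hits, total) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Example: record one hypothetical credit decision, then compare
# approval rates across two groups of past decisions.
audit_log = []
log_decision("credit_scorer", "1.2.0", {"income": 50000}, "approve", audit_log)
gap = approval_rate_gap(
    ["approve", "deny", "approve", "approve"],
    ["group_a", "group_a", "group_b", "group_b"],
)
```

In a production system, records would go to durable, access-controlled storage, and bias assessment would use established fairness metrics rather than a single rate gap, but the sketch shows the shape of the monitoring and documentation obligations described above.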

Aligning With Enforcement Mechanisms

Regions have different approaches to enforcing AI regulations, from strict penalties to flexible monitoring. Companies must navigate these mechanisms to avoid penalties and reputational damage while meeting regulatory requirements.

Overall, global companies must develop sophisticated strategies to manage these challenges, including investing in legal and compliance expertise, adopting flexible and scalable AI governance practices, and staying informed about regulatory developments in all jurisdictions where they operate.


Ethical AI Development and Future Business Models

The treaty’s focus on protecting human rights and promoting ethical AI development will likely shape future business models and commercial practices. Industries may need to reconsider how they design, deploy, and manage AI systems to ensure compliance with the treaty’s principles. Enterprises will need to prioritize transparency and fairness, especially in areas where AI is used to make critical decisions, such as hiring, credit scoring or healthcare diagnostics.

The most significant compliance risks businesses face under the new AI governance framework include failure to protect user data, lack of transparency, and inadequate legal safeguards. As highlighted by the European Commission’s AI Regulation Overview, companies must establish robust systems to document AI decision-making processes, address bias, and ensure data privacy. Non-compliance could lead to penalties, damage to reputation and potential legal challenges.

As businesses move to adapt, companies will need to allocate resources to ensure they are not only meeting compliance requirements but also fostering innovation within the constraints of the new legal environment.


Balancing Compliance and Innovation

The signing of the first binding international treaty on artificial intelligence marks a pivotal moment in the governance of AI technologies. As businesses around the world come to terms with the regulatory implications of the Framework Convention on Artificial Intelligence, they face both challenges and opportunities. 

Compliance with the treaty’s principles will require significant investment in governance frameworks, data protection, and cross-border cooperation. However, it also offers a chance to innovate within a well-defined legal and ethical framework, enabling businesses to build more transparent, responsible, and human-centered AI systems.

Ultimately, organizations that adapt swiftly to these changes will not only mitigate compliance risks but also position themselves as leaders in ethical AI innovation. By prioritizing responsible AI development, companies can contribute to a more equitable digital future while driving sustainable growth in an increasingly AI-powered world.
