7 Steps You Must Take to Comply With New EU AI Act Standards

Start by understanding the risk classifications for your AI.

Written by Guru Sethupathy
Published on Sep. 03, 2024

The European Union’s Artificial Intelligence Act (EU AI Act) establishes the world’s first comprehensive legal framework for AI systems.

As AI use cases continue to grow in enterprises around the world, strong AI governance is more important than ever. Here are seven best practices to ensure compliance with the EU AI Act that combine the principles of strong governance with the tenets of responsible AI.

Related Reading: What Is Artificial Intelligence?


Understand the Scope and Risk Classifications

Not all AI is treated the same under the EU AI Act (or under most AI regulations, for that matter), and for good reason. To focus regulatory attention where the potential for harm is greatest, the EU AI Act takes a risk-based approach to regulating AI systems.

The text of the EU AI Act states that the regulation “aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI.” This means that AI used in a generic chatbot will receive a lower level of scrutiny than a recruiting tool that selects which candidates are hired.

Top Steps Toward EU AI Act Compliance

  1. Find out the risk categories for your AI.
  2. Conduct a thorough inventory of all the AI your business uses.
  3. Launch robust risk management processes.
  4. Assemble a cross-functional AI governance task force.
  5. Maintain comprehensive technical documentation for AI systems.
  6. Implement transparency and human oversight measures for AI.
  7. Invest in an AI governance platform.

To comply, organizations must first understand whether their AI systems fall within the Act’s scope and, if so, which risk category they belong to.

Importantly, the Act applies to EU-based companies and to non-EU providers whose AI systems are used in the EU market or affect people within the EU.

Key Actions

  • Familiarize yourself with the four risk categories — unacceptable risk, high risk, limited risk and minimal risk — and the different requirements in each category.
  • Determine the risk level for each of your AI systems by considering key factors such as the deployment location, your organization’s role (whether as the AI provider or deployer), the specific use case and the data and technology used.
  • Pay special attention to high-risk AI applications, which include systems used in common areas such as HR technology, financial services, insurance underwriting and education.

If you are unsure whether or how your AI applications fall under the EU AI Act, engage your legal, compliance and risk management teams early.
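
To make the triage concrete, the sketch below shows a first-pass risk classifier in Python. The use-case tags, the tier mapping and the default are all illustrative assumptions that loosely echo the Act's examples; the Act's actual legal tests are more nuanced, and final classification belongs with your legal team.

    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "unacceptable"
        HIGH = "high"
        LIMITED = "limited"
        MINIMAL = "minimal"

    # Hypothetical mapping of internal use-case tags to provisional tiers,
    # loosely echoing the Act's examples (e.g., hiring tools are high risk,
    # while generic chatbots face lighter transparency duties).
    USE_CASE_TIERS = {
        "social_scoring": RiskTier.UNACCEPTABLE,
        "candidate_screening": RiskTier.HIGH,
        "insurance_underwriting": RiskTier.HIGH,
        "customer_chatbot": RiskTier.LIMITED,
        "spam_filter": RiskTier.MINIMAL,
    }

    def triage(use_case: str) -> RiskTier:
        # Unknown use cases default to HIGH so they get human review
        # rather than being silently waved through.
        return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

    print(triage("candidate_screening"))  # RiskTier.HIGH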

 

Conduct a Comprehensive AI Inventory

Once you have determined that the EU AI Act applies to at least one of your AI applications, it’s time to start your AI inventory.

Many teams are unaware of the number of models they are running and where, which exposes them to significant business risk, including compliance violations, reputational risk and security vulnerabilities.

Key Actions

  • Document each system’s functionality, capabilities and the team responsible for it (see the record sketch after this list).
  • At a minimum, the inventory should record each tool’s purpose, data usage, risks and compliance status.
  • Regularly update the inventory to reflect any changes in AI models, ensuring ongoing compliance.
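
To see what one entry might look like in practice, here is a minimal sketch of an inventory record as a Python dataclass. The AIRecord type and its field names are illustrative assumptions, not a structure the Act prescribes.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class AIRecord:
        # One entry in a hypothetical AI inventory; fields mirror the
        # minimums listed above (purpose, data usage, risks, status).
        name: str
        purpose: str
        owner_team: str
        risk_tier: str            # e.g. "high", "limited", "minimal"
        data_sources: list[str]
        compliance_status: str    # e.g. "assessed", "in_review", "gap"
        last_reviewed: date

    inventory = [
        AIRecord(
            name="resume-ranker-v2",
            purpose="Shortlists job applicants for recruiters",
            owner_team="HR Technology",
            risk_tier="high",
            data_sources=["ATS applications", "assessment scores"],
            compliance_status="in_review",
            last_reviewed=date(2024, 8, 15),
        ),
    ]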

 

Implement Robust Risk Management Processes

For high-risk AI systems, the EU AI Act mandates the implementation of a risk management system that operates throughout the AI system’s entire lifecycle.

Key Actions

  • Develop a systematic process to identify, analyze and mitigate risks associated with your AI systems. Ensure this risk management approach is ongoing and iterative, not a one-time assessment.
  • Establish structured testing, formal review and approval workflows to thoroughly validate AI systems before deployment.
  • Implement logging and issue remediation procedures to identify, track and resolve any problems that arise with the AI systems.
  • Document all risk management activities thoroughly, including measures to address identified risks.

Effective risk management ensures compliance and enhances the overall quality and trustworthiness of your AI systems.
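
As one illustration of the logging and remediation bullet above, the sketch below emits a structured, auditable record for each AI decision using Python's standard logging module. The event fields and system names are assumptions, not a format the Act specifies.

    import json
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("ai_audit")

    def log_decision(system, input_id, output, reviewer=None):
        # One auditable record per decision; a populated reviewer field
        # indicates that human oversight intervened.
        event = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "system": system,
            "input_id": input_id,
            "output": output,
            "human_reviewer": reviewer,
        }
        logger.info(json.dumps(event))

    log_decision("resume-ranker-v2", "application-8841", "advance_to_interview")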

 

Assemble a Diverse AI Governance Task Force

Many organizations are beginning to assemble AI governance task forces to manage their AI governance programs. This cross-functional approach ensures comprehensive oversight and implementation of AI governance principles.

A strong task force enables an organization to benefit from the perspective and expertise of multiple business units and stakeholders.

Key Actions

  • Form a diverse AI governance task force with representation from legal/compliance, IT/technology, human resources, data privacy and relevant business units.
  • Define clear roles and responsibilities for each member of the task force.
  • Establish regular meetings and reporting mechanisms to ensure ongoing governance and alignment with EU AI Act requirements.
  • The task force should also take the lead on establishing AI-related policies for the entire organization. Well-documented and clearly communicated policies covering accountability, AI ethics, fairness and transparency are fundamental to consistent compliance at all levels of the company.

 

Maintain Comprehensive Technical Documentation

The Act requires detailed technical documentation for high-risk AI systems, providing transparency about the system’s development, functioning and compliance with the Act.

Key Actions

  • Create and maintain thorough documentation of your AI systems’ design, development and operational processes.
  • Include detailed information on data governance practices, including how data quality is ensured and maintained.
  • Prepare documentation that would be required for conformity assessments, particularly for high-risk AI systems.
  • Keep records of any internal or external audits, as these may be valuable for future conformity assessments.
  • Ensure documentation is up to date and easily accessible for potential audits.

Proper documentation not only satisfies regulatory requirements but also facilitates better understanding and management of your AI applications. It also supports the registration process, if applicable.
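
One lightweight way to keep such documentation current is to generate it from the metadata you already track in your inventory. The sketch below assumes a hypothetical render_docs helper, and its section names loosely mirror the themes above; it is not the Act's Annex IV structure.

    def render_docs(meta: dict) -> str:
        # Render a plain-text technical summary from system metadata.
        # Missing sections surface as TODOs so gaps are visible before
        # a conformity assessment, rather than discovered during one.
        lines = [f"Technical documentation: {meta['name']}"]
        for section in ("design", "data_governance", "testing", "known_limitations"):
            lines.append(f"\n{section.replace('_', ' ').title()}:")
            lines.append(meta.get(section, "TODO: document before assessment"))
        return "\n".join(lines)

    print(render_docs({
        "name": "resume-ranker-v2",
        "design": "Gradient-boosted ranking over structured application fields.",
        "data_governance": "Training data reviewed quarterly for representativeness.",
    }))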

 

Implement Transparency and Human Oversight Measures

Transparency and human oversight are key principles of the EU AI Act, particularly for high-risk and limited-risk AI systems.

Key Actions

  • Develop clear communication protocols to inform users when they are interacting with an AI system.
  • Ensure users have access to transparent, understandable information about the AI system’s capabilities and limitations.
  • Create processes for regular data quality checks and updates, ensuring that AI systems continue to operate on high-quality, relevant data throughout their lifecycle.
  • Enable human oversight with intervention capabilities and processes.
  • Assign responsibility for AI development, review, approval and ongoing monitoring to create accountability.

These measures build trust with users and help prevent misuse of or over-reliance on AI systems.
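
For the disclosure and oversight bullets above, here is a minimal sketch of a chatbot reply wrapper that tells users they are talking to an AI and hands low-confidence answers to a human. The disclosure wording, the 0.6 threshold and the escalate callback are all illustrative assumptions.

    AI_DISCLOSURE = "You are chatting with an AI assistant."

    def respond(answer, confidence, escalate):
        # Below the (assumed) confidence threshold, a human takes over;
        # otherwise every reply is prefixed with the AI disclosure.
        if confidence < 0.6:
            return escalate(answer)
        return f"{AI_DISCLOSURE}\n\n{answer}"

    print(respond("Your order ships Tuesday.", confidence=0.9,
                  escalate=lambda a: "Connecting you with a human agent..."))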

 

Invest in an AI Governance Platform

Organizations should consider investing in a comprehensive AI governance platform to effectively manage compliance with the EU AI Act and other emerging AI regulations.

As the EU AI Act comes into force and additional AI regulations continue to multiply, many organizations will require a platform or tools to support their AI governance programs.

Key Actions

  • Look for platforms that provide real-time regulatory tracking, risk assessments and automated reporting and documentation.
  • Ensure the chosen platform can scale with your organization’s AI initiatives and adapt to evolving regulatory landscapes.
  • Implement the platform across your organization and train relevant staff on its use.

An AI governance platform can significantly streamline compliance efforts, reduce costs and mitigate AI compliance risks. It can help organizations stay ahead of regulatory updates like the EU AI Act, manage their AI inventory and ensure consistent governance practices across different AI projects and departments.

Further Reading: Should You Hire a Chief AI Officer?


Embrace Compliance as a Competitive Advantage

While the EU AI Act introduces new challenges for organizations developing and deploying AI systems, it also presents an opportunity to build more trustworthy, ethical and robust AI.

The Act’s enforcement timeline provides a roadmap for preparation:

  • August 1, 2024: The Act entered into force
  • February 2, 2025: Ban on prohibited AI applications takes effect
  • August 2, 2025: Requirements for general-purpose AI systems become applicable
  • August 2, 2026 (General Applicability): Most remaining rules of the Act become applicable; obligations for high-risk AI embedded in regulated products follow on August 2, 2027

Given the significant penalties for non-compliance — up to €35 million or 7 percent of global annual turnover, whichever is higher, for the most severe breaches — organizations should not wait to get started. But remember that compliance with the EU AI Act is not just about avoiding penalties. It’s about building AI systems that are safe, ethical and respectful of fundamental rights. By aligning with these standards early, organizations can future-proof their AI strategies and contribute to the development of trustworthy AI on a global scale.
