The New EU AI Act: Implications for DEI and Legal Departments

The European Union has taken a significant step forward with the European Parliament's adoption of the Artificial Intelligence Act (AI Act) on March 13, 2024. This groundbreaking legislation, the first comprehensive horizontal legal framework for AI, sets stringent rules on data quality, transparency, human oversight, and accountability. For in-house counsel, legal departments, and chief legal officers (CLOs), understanding the implications of this regulation is crucial, particularly in the context of diversity, equity, and inclusion (DEI).

First, some background on the AI Act

The AI Act, which includes specific provisions for general-purpose AI (including generative models), reflects the evolving landscape of AI applications. It defines AI systems by focusing on two key characteristics: autonomy and the ability to infer. An AI system operates with varying levels of autonomy and infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. These elements of inference and autonomy distinguish AI systems from traditional software, where outputs are pre-determined by fixed rules (if x, then y). This broad definition keeps the AI Act relevant and adaptable: rather than relying on a static list of technologies, it adopts a technology-neutral and uniform approach.
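To make that distinction concrete, consider the minimal sketch below. It contrasts a traditional rule-based check, whose output is fully pre-determined, with a system that infers its decision threshold from example data. All names, figures, and data here are hypothetical illustrations, not anything drawn from the Act itself.

```python
# Illustrative sketch: traditional software vs. an "inferring" system.
# All names and data are hypothetical examples, not from the AI Act.

def rule_based_credit_check(income: float) -> bool:
    """Traditional software: the output is fully pre-determined
    by a fixed rule (if x, then y)."""
    return income >= 50_000  # hard-coded threshold set by a human

def fit_threshold(examples: list[tuple[float, bool]]) -> float:
    """A minimal 'inferring' system: it derives its decision
    threshold from the input data it receives, so its behavior
    is not spelled out rule-by-rule in advance."""
    approved = [income for income, ok in examples if ok]
    rejected = [income for income, ok in examples if not ok]
    # Place the threshold midway between the two observed groups.
    return (min(approved) + max(rejected)) / 2

history = [(30_000, False), (42_000, False), (55_000, True), (70_000, True)]
learned_threshold = fit_threshold(history)

def learned_credit_check(income: float) -> bool:
    return income >= learned_threshold

print(rule_based_credit_check(48_000))  # False: fixed rule, known in advance
print(learned_credit_check(48_000))     # Output depends on the data observed
```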

The regulation adopts a risk-based approach, categorizing AI systems into four tiers:

  • Unacceptable risk,
  • High risk,
  • Limited risk, and
  • Minimal risk. 

This approach ensures that AI systems deployed in the European Union are safe, transparent, and respectful of fundamental rights.

1. Unacceptable risk

AI systems that clearly threaten safety, livelihoods, and rights are prohibited. Examples include AI systems that manipulate human behavior or exploit vulnerabilities, such as AI-driven toys that encourage dangerous behavior in children or systems used by employers to exploit worker vulnerabilities based on their psychological state.

2. High risk

These AI systems are subject to stringent requirements due to their significant impact on people's lives. This category includes AI used in critical infrastructure (like energy and transport), healthcare (such as diagnostic tools), and employment processes (like AI systems used in hiring and performance evaluations).

High-risk systems must meet strict criteria for data quality, transparency, human oversight, and robustness. For example, an AI system used to screen job candidates must not discriminate against any particular group, and its decision-making process must be transparent and understandable. Tools such as Syndio for pay analysis and IDEA (Intelligent Data-Driven Evaluation Analytics) for performance feedback can support compliance with these requirements by identifying and helping address disparities across demographic groups.
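To illustrate the kind of bias check a candidate-screening system might undergo, the sketch below computes selection rates by demographic group and flags disparities using the "four-fifths" ratio, a heuristic borrowed from US employment practice rather than a threshold the AI Act itself prescribes. The data and group names are hypothetical.

```python
from collections import defaultdict

# Hypothetical screening outcomes: (group, was_selected).
# The four-fifths ratio is a common fairness heuristic, not an
# AI Act threshold; real audits would use richer methods.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

selected = defaultdict(int)
total = defaultdict(int)
for group, was_selected in outcomes:
    total[group] += 1
    selected[group] += was_selected

rates = {g: selected[g] / total[g] for g in total}
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    status = "OK" if ratio >= 0.8 else "FLAG: possible adverse impact"
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {status}")
```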

3. Limited risk

AI systems in this category require transparency obligations but are subject to less stringent controls than high-risk systems. Examples include AI chatbots that interact with customers. These systems must inform users that they are interacting with an AI and provide clear information on contacting a human operator if needed.
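A minimal sketch of what such a disclosure might look like in practice appears below. The AI Act does not prescribe specific wording, so the message text and the escalation channel are hypothetical.

```python
HUMAN_CONTACT = "support@example.com"  # hypothetical escalation channel

def wrap_reply(generated_text: str, first_turn: bool) -> str:
    """Prepend an AI disclosure on the first turn and always offer
    a route to a human operator. Wording is illustrative only."""
    parts = []
    if first_turn:
        parts.append("You are chatting with an automated AI assistant.")
    parts.append(generated_text)
    parts.append(f"To reach a human operator, contact {HUMAN_CONTACT}.")
    return "\n".join(parts)

print(wrap_reply("Your order shipped yesterday.", first_turn=True))
```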

4. Minimal risk

These AI systems pose minimal or no risk and are not subject to specific regulatory requirements under the AI Act. Examples include AI-powered video games or spam filters. While not regulated, providers are encouraged to adopt voluntary codes of conduct to ensure ethical use.
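The four tiers and their headline obligations can be summarized in a small lookup, sketched below. The mappings simply mirror the examples above; classifying a real system requires legal analysis against the Act's annexes.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict data-quality, transparency, oversight, and robustness duties"
    LIMITED = "transparency obligations (e.g., disclose AI use)"
    MINIMAL = "no specific obligations; voluntary codes encouraged"

# Illustrative mapping based on the examples above; real
# classification turns on the Act's annexes and legal analysis.
EXAMPLES = {
    "toy that manipulates children into dangerous behavior": RiskTier.UNACCEPTABLE,
    "resume-screening system used in hiring": RiskTier.HIGH,
    "medical diagnostic support tool": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```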

With fines of up to 35 million euros or seven percent of global annual revenue, whichever is higher, the AI Act has significant extraterritorial effects, impacting companies worldwide that operate within the EU market. Its provisions apply to AI providers (developers), importers, distributors, and deployers, making it a critical concern for all legal departments. Notably, the prohibitions on unacceptable-risk AI systems take effect six months after the Act's entry into force, while the obligations for general-purpose AI (including generative AI) apply after 12 months.
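For planning purposes, the headline exposure is simple arithmetic, as the sketch below shows. The figures reflect the Act's top penalty band, which applies to prohibited-practice violations; lower bands apply to other breaches, and the revenue figure here is hypothetical.

```python
def max_fine_eur(global_annual_revenue_eur: float) -> float:
    """Top penalty band of the AI Act: up to EUR 35M or 7% of
    worldwide annual revenue, whichever is higher. Lower bands
    apply to other categories of violation."""
    return max(35_000_000, 0.07 * global_annual_revenue_eur)

# Hypothetical company with EUR 2B in global annual revenue:
print(f"Maximum exposure: EUR {max_fine_eur(2_000_000_000):,.0f}")  # EUR 140,000,000
```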

Impacts on DEI

The AI Act's potential to transform DEI initiatives within organizations is profound. Here are some key areas where the regulation intersects with DEI objectives:

1. Bias detection and mitigation

One of the notable aspects of the AI Act is its emphasis on minimizing bias in AI systems. For in-house counsel and legal teams, this means ensuring that AI tools used within their organizations comply with the Act's requirements for data quality and bias mitigation. AI can be leveraged to detect and address biases in hiring practices, compensation structures, and performance evaluations. By analyzing large datasets, AI can identify patterns that suggest bias, prompting human oversight to correct these disparities. For example, AI can analyze the language used in performance reviews to flag potentially biased feedback, helping managers recognize their biases and adjust their evaluations to be more objective and fair. Legal departments must ensure these AI tools comply with the AI Act to mitigate the legal risks associated with biased AI outputs.
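As a deliberately simplified illustration of such a language check, the sketch below flags watch-list terms in review text for human follow-up. The word list is hypothetical, and production tools rely on trained language models rather than keyword matching.

```python
import re

# Hypothetical watch list; production tools rely on trained NLP
# models, not keyword matching, so treat this as a toy example.
WATCH_TERMS = {"abrasive", "bossy", "emotional", "aggressive"}

def flag_review(text: str) -> list[str]:
    """Return watch-list terms found in a performance review so a
    human reviewer can decide whether the feedback is biased."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return sorted(words & WATCH_TERMS)

review = "She is talented but can come across as abrasive and bossy."
print(flag_review(review))  # ['abrasive', 'bossy'] -> route to human review
```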

2. Diverse candidate sourcing

AI-powered recruitment tools can significantly enhance the diversity of candidate pools. The AI Act's transparency requirements mean that AI systems must clearly communicate how they function, which helps surface and remove unconscious biases from the hiring process. Recruitment tools that anonymize resumes by stripping out personal information can help ensure hiring decisions are based on skills and experience rather than demographics. Legal teams must verify that these tools comply with the AI Act's stringent standards.
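The sketch below shows the basic idea behind resume anonymization using simple pattern redaction. It is illustrative only: robust anonymization is considerably harder, since names, photos, addresses, and affiliations can all leak demographic signals.

```python
import re

def redact_resume(text: str) -> str:
    """Strip obvious personal identifiers so reviewers see skills
    and experience first. Illustrative only: robust anonymization
    needs far more than regexes."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    text = re.sub(r"\+?\d[\d\s().-]{7,}\d", "[PHONE]", text)
    text = re.sub(r"(?m)^Name:.*$", "Name: [REDACTED]", text)
    return text

resume = """Name: Jane Example
Email: jane@example.com, Phone: +1 (555) 123-4567
Experience: 6 years of contract analytics"""
print(redact_resume(resume))
```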

3. Inclusive communication and accessibility

The AI Act promotes the use of AI to foster inclusive communication. AI-driven tools for automatic translation, transcription, and summarization can break down language barriers and make communication more accessible to employees with disabilities. Legal departments should ensure that these tools are compliant with the AI Act and are used ethically within the organization.

4. Personalized learning and development

AI can provide personalized learning experiences that cater to the diverse needs of employees. AI-driven learning platforms can tailor training programs to individual learning styles and career development paths. By doing so, organizations can create a more inclusive environment where all employees have equal growth opportunities. Legal teams must ensure that these platforms adhere to the AI Act's transparency and data quality regulations.

Compliance and strategic planning

For in-house counsel and CLOs, navigating the compliance landscape of the AI Act is paramount. Here are a few steps to consider:

1. Conduct a comprehensive audit

Assess current AI systems for compliance with the AI Act's requirements. Identify high-risk systems and implement necessary changes to mitigate risks.
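One practical starting point is an AI inventory that records each system's risk tier and the controls in place, so gaps surface mechanically. The sketch below uses hypothetical systems and an illustrative subset of controls.

```python
from dataclasses import dataclass, field

@dataclass
class AISystem:
    """One row of a hypothetical AI inventory used for the audit."""
    name: str
    risk_tier: str                      # e.g., "high", "limited", "minimal"
    controls: set[str] = field(default_factory=set)

# Controls a high-risk system should evidence (illustrative subset).
HIGH_RISK_CONTROLS = {"human_oversight", "bias_testing", "data_quality_docs"}

inventory = [
    AISystem("resume screener", "high", {"human_oversight"}),
    AISystem("support chatbot", "limited", {"ai_disclosure"}),
]

for system in inventory:
    if system.risk_tier == "high":
        missing = HIGH_RISK_CONTROLS - system.controls
        if missing:
            print(f"{system.name}: remediate missing controls {sorted(missing)}")
```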

2. Develop transparent policies

Ensure that all AI-driven decision-making processes are transparent and that employees understand how these systems function.

3. Engage in continuous monitoring

Regularly audit AI systems for biases and discriminatory outcomes. Consider third-party audits to ensure unbiased assessments.

4. Stay informed

Keep abreast of emerging AI regulations and adapt compliance strategies accordingly. The AI Act is expected to be supplemented by additional EU legislation, particularly in areas such as employment and copyright.

The AI Act represents a significant regulatory milestone with far-reaching implications for DEI and the legal responsibilities of companies operating in the European Union. By fostering transparency, minimizing bias, and promoting inclusive practices, the regulation offers a framework that supports the ethical deployment of AI.

For in-house counsel and legal departments, proactive compliance with the AI Act is not only a legal obligation but also an opportunity to advance DEI initiatives within their organizations. By leveraging compliant AI tools for pay analysis, recruitment, and performance feedback, companies can meet the standards set by the European Union while promoting a fair and inclusive workplace.
