Core Concepts
The EU AI Act offers a potential bridge between the academic discourses on non-discrimination law and algorithmic fairness, aiming to address the enforcement problems faced by both the legal and the technical approach.
Abstract
The paper examines the misalignment between non-discrimination law and algorithmic fairness and argues that the EU AI Act could serve as a bridge between these two domains.
Key highlights:
- Non-discrimination law faces enforcement challenges, especially in the context of opaque AI systems, as individuals struggle to recognize and prove instances of discrimination.
- Algorithmic fairness approaches from computer science aim to implement fairness "by design", but face their own enforcement problems due to the normative nature of fairness and the reliance on self-governance.
- The AI Act explicitly aims to protect fundamental rights, including equality and non-discrimination, and establishes requirements for high-risk AI systems to prevent algorithmic discrimination.
- However, the AI Act leaves the judgment of what constitutes illegal discrimination to traditional non-discrimination law, requiring collaboration between legal and technical domains to "translate" legal requirements into technical fairness metrics.
- The AI Act also addresses the tension between fairness and privacy by allowing the processing of sensitive personal data for the purpose of bias detection and correction in high-risk AI systems.
- Practical challenges include defining appropriate fairness metrics and determining when "possible biases are likely to lead to discrimination", requiring guidance from regulators and policymakers.
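To make the "fairness metrics" mentioned above concrete, the sketch below computes one widely used metric, the demographic parity difference (the gap in positive-decision rates between groups). This is an illustrative assumption, not a metric the AI Act prescribes; the toy hiring data and any threshold a provider might apply are likewise hypothetical.

```python
def selection_rate(decisions, groups, group):
    """Share of positive decisions (1) received by members of `group`."""
    member_decisions = [d for d, g in zip(decisions, groups) if g == group]
    return sum(member_decisions) / len(member_decisions)

def demographic_parity_difference(decisions, groups):
    """Largest gap in selection rates across groups; 0 means parity."""
    rates = [selection_rate(decisions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Toy example: hiring decisions (1 = invited to interview) by group.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

gap = demographic_parity_difference(decisions, groups)
print(f"demographic parity difference: {gap:.2f}")  # prints 0.20 here
```

Which metric is appropriate, and how large a gap signals that "possible biases are likely to lead to discrimination", remains exactly the kind of normative judgment the paper says the AI Act defers to non-discrimination law and regulatory guidance.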
Quotes
"The AI Act explicitly aims to protect the fundamental rights set out in Art. 2 of the Treaty of the European Union. Among these rights are equality and non-discrimination in particular."
"The AI Act therefore leaves the judgment call about what constitutes illegal discrimination to traditional non-discrimination law."
"Art. 10(5) AI Act states that '[t]o the extent that it is strictly necessary for the purposes of ensuring bias detection and correction in relation to the high-risk AI systems [...], the providers of such systems may exceptionally process special categories of personal data referred to in Art. 9(1) [GDPR].'"