Abstract
Various metrics and interventions have been developed to identify and mitigate unfair outputs of machine learning systems. While individuals and organizations have an obligation to avoid discrimination, the use of fairness-aware machine learning interventions has also been described as amounting to 'algorithmic positive action' under European Union (EU) non-discrimination law. As the Court of Justice of the European Union has been strict in assessing the lawfulness of positive action, this would impose a significant legal burden on those wishing to implement fair-ml interventions. In this paper, we propose that algorithmic fairness interventions should often be interpreted as a means to prevent discrimination, rather than as a measure of positive action. Specifically, we suggest that this category mistake can often be attributed to neutrality fallacies: faulty assumptions regarding the neutrality of (fairness-aware) algorithmic decision-making. Our findings raise the question of whether a negative obligation to refrain from discrimination is sufficient in the context of algorithmic decision-making. Consequently, we suggest moving away from a duty to 'not do harm' towards a positive obligation to actively 'do no harm' as a more adequate framework for algorithmic decision-making and fair-ml interventions.
| Original language | English |
|---|---|
| Title | 2024 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2024 |
| Number of pages | 11 |
| Publisher | Association for Computing Machinery, Inc. |
| Publication date | 2024 |
| Pages | 2060-2070 |
| ISBN (electronic) | 9798400704505 |
| DOI | |
| Status | Published - 2024 |
| Event | 2024 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2024 - Rio de Janeiro, Brazil. Duration: 3 Jun 2024 → 6 Jun 2024 |
Conference

| Conference | 2024 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2024 |
|---|---|
| Country/Territory | Brazil |
| City | Rio de Janeiro |
| Period | 03/06/2024 → 06/06/2024 |
| Name | 2024 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2024 |
Bibliographic note
Funding information: This project has received financial support from the CNRS through the MITI interdisciplinary programs and its exploratory research program. We would like to thank the Lorentz Center and its support team, as well as the organizers and participants of the workshop "Fairness in Algorithmic Decision Making: A Domain-Specific Approach", for the stimulating discussions which brought about our interdisciplinary collaboration.
Publisher Copyright:
© 2024 Owner/Author.