Abstract
What information should and can be made transparent about artificial intelligence (AI) algorithms? This article examines socio-technical and legal perspectives on transparency in relation to algorithmic decision-making in public administration. We show how transparency in AI can be understood in light of the various technologies involved and the challenges one may encounter. Despite some first steps in that direction, no mature standard for documenting AI models exists so far. From a legal perspective, the article examines the applicable freedom of information (FOI) regimes across different jurisdictions, with a particular focus on Denmark and the other Scandinavian countries. Despite notable differences, our findings show that FOI regimes generally grant access only to existing documents, and that access can be denied on the basis of broad exemptions for proprietary interests and internal documents. We therefore conclude that the European data-protection framework and the proposed EU AI Act, with their far-reaching duties to document the functioning of AI systems, provide promising new avenues for research and insight into transparency in AI.
Original language | English |
---|---|
Article number | 8 |
Journal | Digital Government: Research and Practice |
Volume | 5 |
Issue number | 1 |
Number of pages | 15 |
ISSN | 2639-0175 |
DOI | |
Status | Published - 12 Mar 2024 |
Bibliographic note
Funding Information: Research for this article was conducted as part of the research project ‘PACTA — Public Administration and Computational Transparency in Algorithms’ (grant no. 8091-00025B), which is headed by Henrik Palmer Olsen and funded by the Independent Research Fund Denmark.
Publisher Copyright:
Copyright © 2024 held by the owner/author(s).