Should people have a right not to be subjected to AI profiling based on publicly available data? A comment on Ploug

Sune Holm*

*Corresponding author for this work

Research output: Contribution to journal › Comment/debate › Research › peer-review


Abstract

Several studies have documented that, when presented with data from social media platforms, machine learning (ML) models can make accurate predictions about users, e.g., about whether they are likely to suffer from health-related conditions such as depression, mental disorders, and risk of suicide. In a recent article, Ploug (Philos Technol 36:14, 2023) defends a right not to be subjected to AI profiling based on publicly available data. In this comment, I raise some questions in relation to Ploug’s argument that I think deserve further discussion.

Original language: English
Article number: 38
Journal: Philosophy and Technology
Volume: 36
Number of pages: 5
ISSN: 2210-5433
DOIs
Publication status: Published - 2023

Bibliographical note

Publisher Copyright:
© 2023, The Author(s).

Keywords

  • AI profiling
  • Privacy
  • Public data
  • Rights
