Credit and blame for AI-generated content: Effects of personalization in four countries

Brian D Earp, Sebastian Porsdam Mann, Peng Liu, Ivar Hannikainen, Maryam Ali Khan, Yueying Chu, Julian Savulescu

Publication: Contribution to journal › Journal article › Research › Peer-reviewed

Abstract

Generative artificial intelligence (AI) raises ethical questions concerning moral and legal responsibility, specifically the attribution of credit and blame for AI-generated content. For example, if a human invests minimal skill or effort to produce a beneficial output with an AI tool, can the human still take credit? How does the answer change if the AI has been personalized (i.e., fine-tuned) on previous outputs produced without AI assistance by the same human? We conducted a preregistered experiment with representative sampling (N = 1802) repeated in four countries (United States, United Kingdom, China, and Singapore). We investigated laypeople's attributions of credit and blame to human users for producing beneficial or harmful outputs with a standard large language model (LLM), a personalized LLM, or no AI assistance (control condition). Participants generally attributed more credit to human users of personalized versus standard LLMs for beneficial outputs, whereas LLM type did not significantly affect blame attributions for harmful outputs, with a partial exception among Chinese participants. In addition, UK participants attributed more blame for using any type of LLM versus no LLM. Practical, ethical, and policy implications of these findings are discussed.

Original language: English
Journal: Annals of the New York Academy of Sciences
ISSN: 0077-8923
DOI:
Status: E-pub ahead of print, 25 Nov 2024

Bibliographic note

© 2024 The Author(s). Annals of the New York Academy of Sciences published by Wiley Periodicals LLC on behalf of The New York Academy of Sciences.
