TY - JOUR
T1 - Credit and blame for AI-generated content
T2 - Effects of personalization in four countries
AU - Earp, Brian D
AU - Porsdam Mann, Sebastian
AU - Liu, Peng
AU - Hannikainen, Ivar
AU - Khan, Maryam Ali
AU - Chu, Yueying
AU - Savulescu, Julian
N1 - © 2024 The Author(s). Annals of the New York Academy of Sciences published by Wiley Periodicals LLC on behalf of The New York Academy of Sciences.
PY - 2024/11/25
Y1 - 2024/11/25
N2 - Generative artificial intelligence (AI) raises ethical questions concerning moral and legal responsibility, specifically the attributions of credit and blame for AI-generated content. For example, if a human invests minimal skill or effort to produce a beneficial output with an AI tool, can the human still take credit? How does the answer change if the AI has been personalized (i.e., fine-tuned) on previous outputs produced without AI assistance by the same human? We conducted a preregistered experiment with representative sampling (N = 1802) repeated in four countries (United States, United Kingdom, China, and Singapore). We investigated laypeople's attributions of credit and blame to human users for producing beneficial or harmful outputs with a standard large language model (LLM), a personalized LLM, or no AI assistance (control condition). Participants generally attributed more credit to human users of personalized versus standard LLMs for beneficial outputs, whereas LLM type did not significantly affect blame attributions for harmful outputs, with a partial exception among Chinese participants. In addition, UK participants attributed more blame for using any type of LLM versus no LLM. Practical, ethical, and policy implications of these findings are discussed.
AB - Generative artificial intelligence (AI) raises ethical questions concerning moral and legal responsibility, specifically the attributions of credit and blame for AI-generated content. For example, if a human invests minimal skill or effort to produce a beneficial output with an AI tool, can the human still take credit? How does the answer change if the AI has been personalized (i.e., fine-tuned) on previous outputs produced without AI assistance by the same human? We conducted a preregistered experiment with representative sampling (N = 1802) repeated in four countries (United States, United Kingdom, China, and Singapore). We investigated laypeople's attributions of credit and blame to human users for producing beneficial or harmful outputs with a standard large language model (LLM), a personalized LLM, or no AI assistance (control condition). Participants generally attributed more credit to human users of personalized versus standard LLMs for beneficial outputs, whereas LLM type did not significantly affect blame attributions for harmful outputs, with a partial exception among Chinese participants. In addition, UK participants attributed more blame for using any type of LLM versus no LLM. Practical, ethical, and policy implications of these findings are discussed.
U2 - 10.1111/nyas.15258
DO - 10.1111/nyas.15258
M3 - Journal article
C2 - 39585780
SN - 0077-8923
JO - Annals of the New York Academy of Sciences
JF - Annals of the New York Academy of Sciences
ER -