Abstract
Chain-of-thought prompting has been proposed as a technique for increasing people's trust in instruction-tuned large language models such as ChatGPT-3.5 or LLaMA-2. We find, somewhat surprisingly, that while people prefer chain-of-thought explanations, such explanations increase trust only when they go unread and decrease trust when they are read. Moreover, the type of question asked is predictive of trust. Across these two psychological biases, much of people's trust in instruction-tuned large language models appears to be independent of the content of their responses.
Original language | English |
---|---|
Title | Social Robots with AI: Prospects, Risks, and Responsible Methods, Proceedings of Robophilosophy 2024 |
Editors | Johanna Seibt, Peter Fazekas, Oliver Santiago Quick |
Publisher | IOS Press BV |
Publication date | 2025 |
Pages | 238-243 |
ISBN (electronic) | 9781643685670 |
DOI | |
Status | Published - 2025 |
Event | 6th Social Robots with AI: Prospects, Risks, and Responsible Methods Robophilosophy, RP 2024 - Hybrid, Aarhus, Denmark. Duration: 19 Aug 2024 → 23 Aug 2024 |
Conference
Conference | 6th Social Robots with AI: Prospects, Risks, and Responsible Methods Robophilosophy, RP 2024 |
---|---|
Country/Territory | Denmark |
City | Hybrid, Aarhus |
Period | 19/08/2024 → 23/08/2024 |
Publication series
Name | Frontiers in Artificial Intelligence and Applications |
---|---|
Volume | 397 |
ISSN | 0922-6389 |
Bibliographical note
Publisher Copyright: © 2025 IOS Press. All rights reserved.