Abstract
Background
This study assessed the reliability of ChatGPT as a source of information on asthma, given the increasing use of artificial intelligence–driven models for medical information. Prior concerns about misinformation on atopic diseases in various digital platforms underline the importance of this evaluation.
Objective
We aimed to evaluate the scientific reliability of ChatGPT as a source of information on asthma.
Methods
The study involved analyzing ChatGPT’s responses to 26 asthma-related questions, each followed by a follow-up question. These encompassed definition/risk factors, diagnosis, treatment, lifestyle factors, and specific clinical inquiries. Medical professionals specializing in allergic and respiratory diseases independently assessed the responses using a 1-to-5 accuracy scale.
Results
Approximately 81% of the responses scored 4 or higher, suggesting a generally high accuracy level. However, 5 responses scored 3 or lower, indicating minor but potentially harmful inaccuracies. The overall median score was 4. The Fleiss multirater kappa indicated moderate agreement among raters.
Conclusion
ChatGPT generally provides reliable asthma-related information, but its limitations, such as lack of depth in certain responses and inability to cite sources or update in real time, were noted. It shows promise as an educational tool, but it should not be a substitute for professional medical advice. Future studies should explore its applicability for different user demographics and compare it with newer artificial intelligence models.
| Original language | English |
| ---|--- |
| Article number | 100330 |
| Journal | Journal of Allergy and Clinical Immunology: Global |
| Volume | 3 |
| Issue number | 4 |
| Number of pages | 3 |
| DOI | |
| Status | Published - 2024 |
Bibliographic note
Publisher Copyright: © 2024 The Author(s)