TY - JOUR
T1 - ChatGPT’s Response Consistency
T2 - A Study on Repeated Queries of Medical Examination Questions
AU - Funk, Paul F.
AU - Hoch, Cosima C.
AU - Knoedler, Samuel
AU - Knoedler, Leonard
AU - Cotofana, Sebastian
AU - Sofo, Giuseppe
AU - Bashiri Dezfouli, Ali
AU - Wollenberg, Barbara
AU - Guntinas-Lichius, Orlando
AU - Alfertshofer, Michael
N1 - Publisher Copyright:
© 2024 by the authors.
PY - 2024/3
Y1 - 2024/3
AB - (1) Background: As the field of artificial intelligence (AI) evolves, tools like ChatGPT are increasingly integrated into various domains of medicine, including medical education and research. Given the critical nature of medicine, it is of paramount importance that AI tools offer a high degree of reliability in the information they provide. (2) Methods: A total of 450 medical examination questions were manually entered three times each into ChatGPT 3.5 and ChatGPT 4. The responses were collected, and their accuracy and consistency across the repeated entries were statistically analyzed. (3) Results: ChatGPT 4 achieved a significantly higher accuracy of 85.7%, compared to 57.7% for ChatGPT 3.5 (p < 0.001). Furthermore, ChatGPT 4 was more consistent, answering 77.8% of questions correctly across all rounds, a significant increase over the 44.9% observed for ChatGPT 3.5 (p < 0.001). (4) Conclusions: The findings underscore the increased accuracy and dependability of ChatGPT 4 in the context of medical education and potential clinical decision making. Nonetheless, the research emphasizes the indispensable nature of human-delivered healthcare and the vital role of continuous assessment in leveraging AI in medicine.
KW - ChatGPT
KW - artificial intelligence
KW - indecisiveness
KW - medical state examination questions
KW - response consistency
UR - http://www.scopus.com/inward/record.url?scp=85188798598&partnerID=8YFLogxK
U2 - 10.3390/ejihpe14030043
DO - 10.3390/ejihpe14030043
M3 - Article
AN - SCOPUS:85188798598
SN - 2174-8144
VL - 14
SP - 657
EP - 668
JO - European Journal of Investigation in Health, Psychology and Education
JF - European Journal of Investigation in Health, Psychology and Education
IS - 3
ER -