Sailing the Seven Seas: A Multinational Comparison of ChatGPT’s Performance on Medical Licensing Examinations

Michael Alfertshofer, Cosima C. Hoch, Paul F. Funk, Katharina Hollmann, Barbara Wollenberg, Samuel Knoedler, Leonard Knoedler

Research output: Contribution to journal › Letter › peer-review

Abstract

Purpose: The use of AI-powered technology, particularly OpenAI’s ChatGPT, holds significant potential to reshape healthcare and medical education. Despite existing studies on the performance of ChatGPT in medical licensing examinations across different nations, a comprehensive, multinational analysis using rigorous methodology is currently lacking. Our study sought to address this gap by evaluating the performance of ChatGPT on six different national medical licensing exams and investigating the relationship between test question length and ChatGPT’s accuracy. Methods: We manually entered a total of 1,800 test questions (300 each from the US, Italian, French, Spanish, UK, and Indian medical licensing examinations) into ChatGPT and recorded the accuracy of its responses. Results: We found significant variance in ChatGPT’s accuracy across different countries, with the highest accuracy seen in the Italian examination (73% correct answers) and the lowest in the French examination (22% correct answers). Interestingly, question length correlated with ChatGPT’s performance in the Italian and French state examinations only. In addition, the study revealed that questions requiring multiple correct answers, as seen in the French examination, posed a greater challenge to ChatGPT. Conclusion: Our findings underscore the need for future research to further delineate ChatGPT’s strengths and limitations in medical test-taking across additional countries and to develop guidelines to prevent AI-assisted cheating in medical examinations.

Original language: English
Pages (from-to): 1542-1545
Number of pages: 4
Journal: Annals of Biomedical Engineering
Volume: 52
Issue number: 6
DOIs
State: Published - Jun 2024
Externally published: Yes

Keywords

  • Artificial intelligence
  • ChatGPT
  • Clinical decision-making
  • Medical education
  • Medical licensing exams
  • OpenAI