ChatGPT versus human in generating medical graduate exam multiple choice questions—A multinational prospective study (Hong Kong S.A.R., Singapore, Ireland, and the United Kingdom)

Article Status
Published
Authors/contributors
Cheung, B. H. H., Lau, G. K. K., Wong, G. T. C., Lee, E. Y. P., Kulkarni, D., Seow, C. S., Wong, R., & Co, M. T.-H.
Title
ChatGPT versus human in generating medical graduate exam multiple choice questions—A multinational prospective study (Hong Kong S.A.R., Singapore, Ireland, and the United Kingdom)
Abstract
Large language models, in particular ChatGPT, have demonstrated remarkable language processing capabilities. Given the substantial workload of university medical staff, this study assesses the quality of multiple-choice questions (MCQs) produced by ChatGPT for use in graduate medical examinations, compared with questions written by university professoriate staff based on standard medical textbooks.
Publication
PLOS ONE
Volume
18
Issue
8
Pages
e0290691
Date
2023-08-29
Journal Abbr
PLoS One
Language
en
ISSN
1932-6203
Accessed
08/10/2025, 23:08
Library Catalogue
DOI.org (Crossref)
Extra
Citation Key: cheung2023
Citation
Cheung, B. H. H., Lau, G. K. K., Wong, G. T. C., Lee, E. Y. P., Kulkarni, D., Seow, C. S., Wong, R., & Co, M. T.-H. (2023). ChatGPT versus human in generating medical graduate exam multiple choice questions—A multinational prospective study (Hong Kong S.A.R., Singapore, Ireland, and the United Kingdom). PLOS ONE, 18(8), e0290691. https://doi.org/10.1371/journal.pone.0290691