Can Large Language Models Provide Feedback to Students? A Case Study on ChatGPT

Article Status
Published
Title
Can Large Language Models Provide Feedback to Students? A Case Study on ChatGPT
Abstract
Educational feedback has been widely acknowledged as an effective approach to improving student learning. However, scaling effective practices can be laborious and costly, which has motivated researchers to work on automated feedback systems (AFS). Inspired by recent advancements in pre-trained language models (e.g., ChatGPT), we posit that such models might advance the existing knowledge of textual feedback generation in AFS because of their capability to offer natural-sounding and detailed responses. Therefore, we aimed to investigate the feasibility of using ChatGPT to provide students with feedback that helps them learn better. Our results show that i) ChatGPT is capable of generating feedback that is more detailed than that of human instructors and that fluently and coherently summarizes students’ performance; ii) ChatGPT achieved high agreement with the instructor when assessing the topic of students’ assignments; and iii) ChatGPT could provide feedback on the process by which students complete a task, which might benefit students in developing learning skills.
Proceedings Title
2023 IEEE International Conference on Advanced Learning Technologies (ICALT)
Conference Name
2023 IEEE International Conference on Advanced Learning Technologies (ICALT)
Publisher
IEEE
Place
Orem, UT, USA
Date
July 2023
Pages
323-325
ISBN
979-8-3503-0054-3
Citation Key
dai2023a
Accessed
15/10/2025, 21:35
Short Title
Can Large Language Models Provide Feedback to Students?
Language
en
Library Catalogue
DOI.org (Crossref)
Citation
Dai, W., Lin, J., Jin, H., Li, T., Tsai, Y.-S., Gašević, D., & Chen, G. (2023). Can Large Language Models Provide Feedback to Students? A Case Study on ChatGPT. 2023 IEEE International Conference on Advanced Learning Technologies (ICALT), 323–325. https://doi.org/10.1109/ICALT58122.2023.00100