Impact of LLM Feedback on Learner Persistence in Programming

Article Status
Published
Authors/contributors
Zhou, Y., Pankiewicz, M., Paquette, L., & Baker, R. S.
Title
Impact of LLM Feedback on Learner Persistence in Programming
Abstract
This study examines how Large Language Model (LLM) feedback generated for compiler errors affects learners’ persistence in programming tasks within a system for automated assessment of programming assignments. Persistence, the ability to maintain effort in the face of challenges, is crucial for academic success but can sometimes lead to unproductive "wheel spinning" when students struggle without making progress. We investigated how additional LLM feedback, based on the GPT-4 model and provided for compiler errors, affects learners’ persistence in a CS1 course. Specifically, we examined whether its impact differs with task difficulty and whether the effects persist after the feedback is removed. We conducted a randomized controlled trial involving 257 students across various programming tasks. Our findings reveal that LLM feedback improved several aspects of students’ performance and persistence, including higher scores, a greater likelihood of solving problems, and a lower tendency toward unproductive "wheel spinning" behavior. Notably, this positive impact was also observed on challenging tasks. However, the benefits were not sustained once the feedback was removed. These results highlight both the potential and the limitations of LLM feedback, pointing to the need to promote long-term skill development and learning that is independent of immediate AI assistance.
Language
en
Library Catalogue
Zotero
Extra
Citation Key: zhou
Citation
Zhou, Y., Pankiewicz, M., Paquette, L., & Baker, R. S. (n.d.). Impact of LLM Feedback on Learner Persistence in Programming.