Exploring Automated Distractor and Feedback Generation for Math Multiple-choice Questions via In-context Learning

Article Status
Published
Authors/contributors
McNichols, H., Feng, W., Lee, J., Scarlatos, A., Smith, D., Woodhead, S., & Lan, A.
Title
Exploring Automated Distractor and Feedback Generation for Math Multiple-choice Questions via In-context Learning
Abstract
Multiple-choice questions (MCQs) are ubiquitous at almost all levels of education since they are easy to administer and grade, and provide a reliable format for both assessment and practice. An important aspect of MCQs is the distractors, i.e., incorrect options that are designed to target specific misconceptions or insufficient knowledge among students. To date, the task of crafting high-quality distractors has largely remained a labor-intensive process for teachers and learning content designers, which limits scalability. In this work, we explore the task of automated distractor and corresponding feedback message generation in math MCQs using large language models. We establish a formulation of these two tasks and propose a simple, in-context learning-based solution. Moreover, we explore two non-standard metrics for evaluating the quality of the generated distractors and feedback messages. We conduct extensive experiments on these tasks using a real-world MCQ dataset that contains student response information. Our findings suggest that there is considerable room for improvement in automated distractor and feedback generation. We also outline several directions for future work.
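The abstract describes a few-shot, in-context learning approach in which an LLM is prompted with worked examples before being asked to generate a distractor and feedback for a new question. The sketch below illustrates the general prompt-construction pattern only; the example MCQs, field names, and prompt wording are assumptions for illustration, not the paper's actual prompts.

```python
# Hypothetical sketch of a few-shot prompt for distractor + feedback
# generation, in the spirit of the in-context learning setup the
# abstract describes. The example content below is illustrative.

FEW_SHOT_EXAMPLES = [
    {
        "question": "What is 1/2 + 1/3?",
        "answer": "5/6",
        "distractor": "2/5",
        "feedback": "It looks like you added the numerators and the "
                    "denominators separately; find a common denominator "
                    "first.",
    },
]

def build_prompt(examples, question, answer):
    """Assemble a few-shot prompt: instruction, worked examples, then the
    new question left open for the model to complete with a distractor."""
    parts = [
        "Generate a plausible incorrect option (distractor) and a "
        "feedback message for each math question.\n"
    ]
    for ex in examples:
        parts.append(
            f"Question: {ex['question']}\n"
            f"Correct answer: {ex['answer']}\n"
            f"Distractor: {ex['distractor']}\n"
            f"Feedback: {ex['feedback']}\n"
        )
    # The trailing "Distractor:" cue invites the model to continue.
    parts.append(
        f"Question: {question}\nCorrect answer: {answer}\nDistractor:"
    )
    return "\n".join(parts)

prompt = build_prompt(FEW_SHOT_EXAMPLES, "What is 3/4 - 1/4?", "1/2")
print(prompt)
```

The assembled string would then be sent to an LLM completion endpoint; the paper's experiments would additionally evaluate the generated distractors against real student response data.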
Date
2023
Accessed
09/08/2023, 21:40
Library Catalogue
DOI.org (Datacite)
Rights
Creative Commons Attribution 4.0 International
Extra
Publisher: arXiv Version Number: 1 Citation Key: mcnichols2023 <Title>: Exploring Automated Distractor and Feedback Generation for Math Multiple-choice Questions via In-context Learning
Citation
McNichols, H., Feng, W., Lee, J., Scarlatos, A., Smith, D., Woodhead, S., & Lan, A. (2023). Exploring Automated Distractor and Feedback Generation for Math Multiple-choice Questions via In-context Learning. https://doi.org/10.48550/ARXIV.2308.03234