LLM Based Math Tutoring: Challenges and Dataset

Article Status
Published
Authors/contributors
P. Miller & K. DiCerbo
Title
LLM Based Math Tutoring: Challenges and Dataset
Abstract
Large Language Models (LLMs) face documented challenges in solving mathematical problems. While substantial work has been done to quantify and improve LLMs’ abilities to solve static math problems, evaluating their performance in real-time math tutoring scenarios presents distinct challenges that remain underexplored. This paper specifically addresses the ability of LLMs to perform math correctly while tutoring students. It highlights the unique difficulties of this context, classifies the types of interactions students may have with an LLM, presents a dataset, the Conversation-Based Math Tutoring Accuracy (CoMTA) Dataset, for evaluating mathematical accuracy in tutoring scenarios, and discusses techniques to address these issues. Additionally, it evaluates the mathematical accuracy of a range of models in LLM-based tutoring.
Repository
OSF
Date
2024-07-03
Accessed
2024-10-17, 19:31
Short Title
LLM Based Math Tutoring
Language
en-us
Library Catalogue
OSF Preprints
Citation
Miller, P., & DiCerbo, K. (2024). LLM Based Math Tutoring: Challenges and Dataset. OSF. https://doi.org/10.35542/osf.io/5zwv3