Deduction under Perturbed Evidence: Probing Student Simulation (Knowledge Tracing) Capabilities of Large Language Models

Article Status
Published
Authors/contributors
Sonkar, S.; Baraniuk, R. G.
Title
Deduction under Perturbed Evidence: Probing Student Simulation (Knowledge Tracing) Capabilities of Large Language Models
Date
2023-07-07
Proceedings Title
Proceedings of the Workshop on Empowering Education with LLMs - the Next-Gen Interface and Content Generation 2023
Conference Name
Empowering Education with LLMs - the Next-Gen Interface and Content Generation 2023
Place
Tokyo, Japan
Publisher
CEUR
Volume
3487
Pages
26–33
Series
CEUR Workshop Proceedings
Language
en
Short Title
Deduction under Perturbed Evidence
Accessed
13/10/2023, 23:51
Library Catalogue
CEUR Workshop Proceedings
Extra
ISSN: 1613-0073
Citation
Sonkar, S., & Baraniuk, R. G. (2023). Deduction under Perturbed Evidence: Probing Student Simulation (Knowledge Tracing) Capabilities of Large Language Models. In S. Moore, J. Stamper, R. Tong, C. Cao, Z. Liu, X. Hu, Y. Lu, J. Liang, H. Khosravi, P. Denny, A. Singh, & C. Brooks (Eds.), Proceedings of the Workshop on Empowering Education with LLMs - the Next-Gen Interface and Content Generation 2023 (Vol. 3487, pp. 26–33). CEUR. https://ceur-ws.org/Vol-3487/#short4