Deduction under Perturbed Evidence: Probing Student Simulation (Knowledge Tracing) Capabilities of Large Language Models

Article Status
Published
Authors/contributors
Sonkar, S., & Baraniuk, R. G.
Title
Deduction under Perturbed Evidence: Probing Student Simulation (Knowledge Tracing) Capabilities of Large Language Models
Proceedings Title
Proceedings of the Workshop on Empowering Education with LLMs - the Next-Gen Interface and Content Generation 2023
Conference Name
Empowering Education with LLMs - the Next-Gen Interface and Content Generation 2023
Publisher
CEUR
Place
Tokyo, Japan
Date
2023-07-07
Volume
3487
Pages
26–33
Series
CEUR Workshop Proceedings
Citation Key
sonkar2023
Accessed
13/10/2023, 23:51
ISSN
1613-0073
Short Title
Deduction under Perturbed Evidence
Language
en
Library Catalogue
CEUR Workshop Proceedings
Extra
<Title>: Deduction under Perturbed Evidence: Probing Student Simulation (Knowledge Tracing) Capabilities of Large Language Models Read_Status: New Read_Status_Date: 2026-01-26T11:33:34.519Z
Citation
Sonkar, S., & Baraniuk, R. G. (2023). Deduction under Perturbed Evidence: Probing Student Simulation (Knowledge Tracing) Capabilities of Large Language Models. In S. Moore, J. Stamper, R. Tong, C. Cao, Z. Liu, X. Hu, Y. Lu, J. Liang, H. Khosravi, P. Denny, A. Singh, & C. Brooks (Eds.), Proceedings of the Workshop on Empowering Education with LLMs - the Next-Gen Interface and Content Generation 2023 (Vol. 3487, pp. 26–33). CEUR. https://ceur-ws.org/Vol-3487/#short4