Bridging large language model disparities: Skill tagging of multilingual educational content

Article Status
Published
Authors/contributors
Kwak, Y., & Pardos, Z. A.
Title
Bridging large language model disparities: Skill tagging of multilingual educational content
Abstract
The adoption of large language models (LLMs) in education holds much promise. However, like many technological innovations before them, adoption and access can often be inequitable from the outset, creating more divides than they bridge. In this paper, we explore the magnitude of the country and language divide in the leading open‐source and proprietary LLMs with respect to knowledge of K‐12 taxonomies in a variety of countries and their performance on tagging problem content with the appropriate skill from a taxonomy, an important task for aligning open educational resources and tutoring content with state curricula. We also experiment with approaches to narrowing the performance divide by enhancing LLM skill tagging performance across four countries (the USA, Ireland, South Korea and India–Maharashtra) for more equitable outcomes. We observe considerable performance disparities not only with non‐English languages but with English and non‐US taxonomies. Our findings demonstrate that fine‐tuning GPT‐3.5 with a few labelled examples can improve its proficiency in tagging problems with relevant skills or standards, even for countries and languages that are underrepresented during training. Furthermore, the fine‐tuning results show the potential viability of GPT as a multilingual skill classifier. Using both an open‐source model, Llama2‐13B, and a closed‐source model, GPT‐3.5, we also observe large disparities in tagging performance between the two and find that fine‐tuning and skill information in the prompt improve both, but the closed‐source model improves to a much greater extent. Our study contributes to the first empirical results on mitigating disparities across countries and languages with LLMs in an educational context.
Publication
British Journal of Educational Technology
Volume
55
Issue
5
Pages
2039-2057
Date
2024-05-08
Journal Abbr
Brit. J. Educ. Technol.
Language
en
ISSN
0007-1013
Short Title
Bridging large language model disparities
Accessed
11/06/2025, 20:55
Library Catalogue
Wiley Online Library
Rights
© 2024 The Authors. British Journal of Educational Technology published by John Wiley & Sons Ltd on behalf of British Educational Research Association.
Extra
_eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1111/bjet.13465
Citation Key: kwak2024
<Title (translated from Chinese)>: Bridging large language model disparities: Skill tagging of multilingual educational content
<AI Summary>: Explores the magnitude of the country and language divide in leading open-source and proprietary LLMs with respect to their knowledge of K-12 taxonomies in a variety of countries, and their performance on tagging problem content with the appropriate skill from a taxonomy, an important task for aligning open educational resources and tutoring content with state curricula.
Citation
Kwak, Y., & Pardos, Z. A. (2024). Bridging large language model disparities: Skill tagging of multilingual educational content. British Journal of Educational Technology, 55(5), 2039–2057. https://doi.org/10.1111/bjet.13465