Contextual evaluation of LLM’s performance on primary education science learning contents in the Yoruba language
Article Status
Published
Authors/contributors
- Lawal, Olanrewaju (Author)
- Soronnadi, Anthony (Author)
- Adekanmbi, Olubayo (Author)
Title
Contextual evaluation of LLM’s performance on primary education science learning contents in the Yoruba language
Abstract
In the rapidly evolving era of artificial intelligence, Large Language Models (LLMs) such as ChatGPT-3.5, Llama, and PaLM 2 play a pivotal role in reshaping education. Trained on diverse language data with a predominant focus on English, these models exhibit remarkable proficiency in comprehending and generating intricate human language constructs, revolutionizing educational applications. This potential has prompted exploration into personalized and enriched educational experiences, streamlining instructional design to cater to students' needs. However, the effectiveness of LLMs in low-resource languages such as Yoruba remains an open challenge. This research critically assesses the ability of LLMs, including ChatGPT-3.5, Gemini, and PaLM 2, to comprehend and generate contextually relevant science education content in Yoruba. The study, conducted across four tasks using a manually developed primary science dataset in Yoruba, reveals comparative underperformance across various NLP tasks, underscoring the need for language-specific and domain-specific technologies, particularly for primary science education in low-resource languages.
Date
April 2024
Language
en
Library Catalogue
Zotero
Citation
Lawal, O., Soronnadi, A., & Adekanmbi, O. (2024, April). Contextual evaluation of LLM’s performance on primary education science learning contents in the Yoruba language. https://openreview.net/forum?id=NCEnKXztVN