Strategic Data Ordering: Enhancing Large Language Model Performance through Curriculum Learning
Article Status
    Published
Authors/contributors
    - Kim, Jisu (Author)
    - Lee, Juhwan (Author)
Title
    Strategic Data Ordering: Enhancing Large Language Model Performance through Curriculum Learning
Abstract
    The rapid advancement of Large Language Models (LLMs) has improved text understanding and generation but poses challenges for computational resources. This study proposes a curriculum learning-inspired, data-centric training strategy that begins with simpler tasks and progresses to more complex ones, using criteria such as prompt length, attention scores, and loss values to structure the training data. Experiments with the Mistral-7B (Jiang et al., 2023) and Gemma-7B (Team et al., 2024) models demonstrate that curriculum learning slightly improves performance compared to traditional random data shuffling. Notably, we observed that sorting data based on our proposed attention criteria generally led to better performance. This approach offers a sustainable method to enhance LLM performance without increasing model size or dataset volume, addressing scalability challenges in LLM training.
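    Note: The abstract describes ordering training examples from easier to harder using difficulty proxies such as prompt length, attention scores, or loss values. The following is a minimal Python sketch of the simplest of these criteria (sorting by prompt length); the field name "prompt" and the helper function are hypothetical illustrations, not the authors' exact implementation.

    from typing import Dict, List

    def order_by_prompt_length(examples: List[Dict[str, str]]) -> List[Dict[str, str]]:
        """Sort training examples from shortest to longest prompt,
        approximating an easy-to-hard curriculum."""
        return sorted(examples, key=lambda ex: len(ex["prompt"]))

    if __name__ == "__main__":
        data = [
            {"prompt": "Summarize the following paragraph about transformers ...", "response": "..."},
            {"prompt": "Translate 'hello' to French.", "response": "Bonjour"},
            {"prompt": "What is 2 + 2?", "response": "4"},
        ]
        # Print examples in curriculum order (shortest prompt first).
        for ex in order_by_prompt_length(data):
            print(len(ex["prompt"]), ex["prompt"][:40])

    The same pattern would apply to the paper's other criteria by swapping the sort key for a per-example attention score or loss value computed with the model.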
Repository
    arXiv
Archive ID
    arXiv:2405.07490
Date
    2024-05-13
Accessed
    20/05/2025, 20:21
Short Title
    Strategic Data Ordering
Library Catalogue
    
Extra
    arXiv:2405.07490 [cs]
<Title>: Strategic Data Ordering: Enhancing Large Language Model Performance through Curriculum Learning
<AI Smry>: A curriculum learning-inspired, data-centric training strategy that begins with simpler tasks and progresses to more complex ones, using criteria such as prompt length, attention scores, and loss values to structure the training data.
Citation Key: kim2024a
Citation
    Kim, J., & Lee, J. (2024). Strategic Data Ordering: Enhancing Large Language Model Performance through Curriculum Learning (arXiv:2405.07490). arXiv. https://doi.org/10.48550/arXiv.2405.07490