
1 resource

  • Jisu Kim, Juhwan Lee | May 13th, 2024 | preprint

    The rapid advancement of Large Language Models (LLMs) has improved text understanding and generation, but at the cost of substantial computational resources. This study proposes a curriculum-learning-inspired, data-centric training strategy that begins with simpler tasks and progresses to more complex ones, using criteria such as prompt length, attention scores, and loss values to order the training data. Experiments with the Mistral-7B (Jiang et al., 2023) and Gemma-7B (Team et al., 2024) models...
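    As a rough illustration of the difficulty-ordered curriculum the abstract describes (not the authors' published code), the sketch below sorts a fine-tuning dataset from easy to hard using prompt length, one of the difficulty criteria named above; the dataset fields and the whitespace tokenizer are assumptions made for the example.

    ```python
    # Minimal sketch of an easy-to-hard curriculum ordering, assuming a
    # dataset of {"prompt", "response"} dicts. Prompt length stands in for
    # the difficulty criteria named in the abstract (length, attention
    # scores, loss values); the data below is illustrative only.
    from typing import Callable

    def prompt_length(example: dict) -> float:
        # Whitespace token count as a crude difficulty proxy; a real
        # setup would use the model's own tokenizer instead.
        return float(len(example["prompt"].split()))

    def order_curriculum(dataset: list[dict],
                         difficulty: Callable[[dict], float]) -> list[dict]:
        # Train on low-difficulty examples first, harder ones later.
        return sorted(dataset, key=difficulty)

    if __name__ == "__main__":
        data = [
            {"prompt": "Summarize this long passage about attention layers in transformers.",
             "response": "..."},
            {"prompt": "Translate 'hello' to French.", "response": "Bonjour."},
            {"prompt": "Define entropy.", "response": "A measure of uncertainty."},
        ]
        for example in order_curriculum(data, prompt_length):
            print(example["prompt"])
    ```

    In practice the difficulty function could be swapped for a loss- or attention-based score computed with the model itself, which is the data-centric angle the study emphasizes.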
