In authors or contributors

3 resources

  • Can Xu, Qingfeng Sun, Kai Zheng | Jun 10th, 2023 | preprint

    Training large language models (LLMs) with open-domain instruction following data brings colossal success. However, manually creating such instruction data is very time-consuming and labor-intensive. Moreover, humans may struggle to produce high-complexity instructions. In this paper, we show an avenue for creating large amounts of instruction data with varying levels of complexity using LLM instead of humans. Starting with an initial set of instructions, we use our proposed Evol-Instruct to...

  • Can Xu, Qingfeng Sun, Kai Zheng | Jun 10th, 2023 | preprint

    Training large language models (LLMs) with open-domain instruction following data brings colossal success. However, manually creating such instruction data is very time-consuming and labor-intensive. Moreover, humans may struggle to produce high-complexity instructions. In this paper, we show an avenue for creating large amounts of instruction data with varying levels of complexity using LLM instead of humans. Starting with an initial set of instructions, we use our proposed Evol-Instruct to...

  • Ziyang Luo, Can Xu, Pu Zhao | Jun 14th, 2023 | preprint

    Code Large Language Models (Code LLMs), such as StarCoder, have demonstrated exceptional performance in code-related tasks. However, most existing models are solely pre-trained on extensive raw code data without instruction fine-tuning. In this paper, we introduce WizardCoder, which empowers Code LLMs with complex instruction fine-tuning, by adapting the Evol-Instruct method to the domain of code. Through comprehensive experiments on four prominent code generation benchmarks, namely...
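
    The Evol-Instruct entries above describe an iterative rewriting loop: an LLM repeatedly mutates seed instructions into deeper or broader variants, and the evolved set is then answered by the same model to form fine-tuning data. A minimal sketch of that loop follows; the `call_llm` helper, the prompt wordings, and the length-based filter are illustrative assumptions, not the paper's exact prompts or elimination rules.

    ```python
    import random

    def call_llm(prompt: str) -> str:
        """Hypothetical helper: send a prompt to an instruction-tuned LLM
        and return its completion (e.g. via whatever API client you use)."""
        raise NotImplementedError

    # Simplified stand-ins for the in-depth / in-breadth evolution prompts.
    EVOLVE_PROMPTS = [
        "Rewrite the instruction so it needs deeper, multi-step reasoning:\n{instruction}",
        "Add one concrete constraint or requirement to the instruction:\n{instruction}",
        "Write a new instruction on a rarer topic in the same domain as:\n{instruction}",
    ]

    def evolve_instructions(seed_instructions, rounds=4):
        """Grow an instruction pool by repeatedly evolving each instruction."""
        pool = list(seed_instructions)
        for _ in range(rounds):
            evolved_this_round = []
            for instruction in pool:
                prompt = random.choice(EVOLVE_PROMPTS).format(instruction=instruction)
                evolved = call_llm(prompt).strip()
                # Crude filter: keep only non-trivial rewrites. The paper applies
                # stricter "instruction elimination" checks than this.
                if evolved and evolved != instruction and len(evolved) > 20:
                    evolved_this_round.append(evolved)
            pool.extend(evolved_this_round)
        # Pair every instruction with an LLM-generated response for fine-tuning.
        return [(ins, call_llm(ins)) for ins in pool]
    ```

    With a real `call_llm` backed by a capable chat model, a modest set of seed instructions evolved over several rounds yields the kind of varied-complexity instruction data the abstract describes.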

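    The WizardCoder entry adapts the same idea to the code domain. Below is a rough sketch of one code-specific evolution step; the listed evolution directions are paraphrased heuristics assumed for illustration, not the exact prompts used to train WizardCoder.

    ```python
    import random

    def call_llm(prompt: str) -> str:
        """Hypothetical LLM call, as in the previous sketch."""
        raise NotImplementedError

    # Paraphrased, code-oriented evolution directions (illustrative only).
    CODE_EVOLVE_DIRECTIONS = [
        "Add new constraints or requirements to the programming task.",
        "Demand a more efficient solution (tighter time or space complexity).",
        "Require intermediate reasoning, e.g. explain the algorithm before coding it.",
        "Include a deliberately buggy snippet and ask for a corrected version.",
    ]

    def evolve_code_instruction(instruction: str) -> str:
        """Rewrite one coding instruction into a harder variant."""
        direction = random.choice(CODE_EVOLVE_DIRECTIONS)
        prompt = (
            "Rewrite the programming task below into a more challenging one.\n"
            f"Evolution direction: {direction}\n"
            f"Task: {instruction}"
        )
        return call_llm(prompt).strip()
    ```
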
Last update from database: 29/12/2024, 14:15 (UTC)