
1 resource

  • Long Ouyang, Jeff Wu, Xu Jiang
    Training language models to follow instructions with human feedback
    Mar 4th, 2022 | preprint

    Making language models bigger does not inherently make them better at following a user's intent. For example, large language models can generate outputs that are untruthful, toxic, or simply not helpful to the user. In other words, these models are not aligned with their users. In this paper, we show an avenue for aligning language models with user intent on a wide range of tasks by fine-tuning with human feedback. Starting with a set of labeler-written prompts and prompts submitted through...
