
2 resources

  • Tyna Eloundou, Sam Manning, Pamela Mishk... | Mar 21st, 2023 | preprint

    We investigate the potential implications of Generative Pre-trained Transformer (GPT) models and related technologies on the U.S. labor market. Using a new rubric, we assess occupations based on their correspondence with GPT capabilities, incorporating both human expertise and classifications from GPT-4. Our findings indicate that approximately 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of GPTs, while around 19% of workers may see at...

  • Long Ouyang, Jeff Wu, Xu Jiang | Mar 4th, 2022 | preprint

    Making language models bigger does not inherently make them better at following a user's intent. For example, large language models can generate outputs that are untruthful, toxic, or simply not helpful to the user. In other words, these models are not aligned with their users. In this paper, we show an avenue for aligning language models with user intent on a wide range of tasks by fine-tuning with human feedback. Starting with a set of labeler-written prompts and prompts submitted through...

Last update from database: 28/12/2024, 07:15 (UTC)