
5 resources

  • Joy He-Yueya, Noah D. Goodman, Emma Brun... | May 6th, 2024 | preprint

    Creating effective educational materials generally requires expensive and time-consuming studies of student learning outcomes. To overcome this barrier, one idea is to build computational models of student learning and use them to optimize instructional materials. However, it is difficult to model the cognitive processes of learning dynamics. We propose an alternative approach that uses Language Models (LMs) as educational experts to assess the impact of various instructions on learning...

  • Joy He-Yueya, Wanjing Anya Ma, Kanishk G... | Jul 22nd, 2024 | preprint

    Language models (LMs) are increasingly used to simulate human-like responses in scenarios where accurately mimicking a population's behavior can guide decision-making, such as in developing educational materials and designing public policies. The objective of these simulations is for LMs to capture the variations in human responses, rather than merely providing the expected correct answers. Prior work has shown that LMs often generate unrealistically accurate responses, but there are no...

  • Allen Nie, Yash Chandak, Miroslav Suzara... | Apr 25th, 2024 | preprint

    Large language models (LLMs) are quickly being adopted in a wide range of learning experiences, especially via ubiquitous and broadly accessible chat interfaces like ChatGPT and Copilot. This type of interface is readily available to students and teachers around the world, yet relatively little research has been done to assess the impact of such generic tools on student learning. Coding education is an interesting test case, both because LLMs have strong performance on coding tasks, and...

  • Rishi Bommasani, Drew A. Hudson, Ehsan A... | Dec 14th, 2021 | journal article

    AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character. This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical...

  • Rishi Bommasani, Drew A. Hudson, Ehsan A... | Jul 12th, 2022 | preprint

    AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character. This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical...

Last update from database: 14/12/2025, 20:15 (UTC)