5 resources

  • Joy He-Yueya, Noah D. Goodman, Emma Brun... | May 6th, 2024 | preprint

    Creating effective educational materials generally requires expensive and time-consuming studies of student learning outcomes. To overcome this barrier, one idea is to build computational models of student learning and use them to optimize instructional materials. However, it is difficult to model the cognitive processes of learning dynamics. We propose an alternative approach that uses Language Models (LMs) as educational experts to assess the impact of various instructions on learning...

  • Mike Wu, Noah Goodman, Chris Piech | Oct 24th, 2021 | journalArticle

    High-quality computer science education is limited by the difficulty of providing instructor feedback to students at scale. While this feedback could in principle be automated, supervised approaches to predicting the correct feedback are bottlenecked by the intractability of annotating large quantities of student code. In this paper, we instead frame the problem of providing feedback as few-shot classification, where a meta-learner adapts to give feedback to student code on a new programming...

  • Joy He-Yueya, Wanjing Anya Ma, Kanishk G... | Jul 22nd, 2024 | preprint

    Language models (LMs) are increasingly used to simulate human-like responses in scenarios where accurately mimicking a population's behavior can guide decision-making, such as in developing educational materials and designing public policies. The objective of these simulations is for LMs to capture the variations in human responses, rather than merely providing the expected correct answers. Prior work has shown that LMs often generate unrealistically accurate responses, but there are no...

  • Rishi Bommasani, Drew A. Hudson, Ehsan A... | Oct 24th, 2021 | journalArticle

    AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character. This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical...

  • Rishi Bommasani, Drew A. Hudson, Ehsan A... | Jul 12th, 2022 | preprint

    AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character. This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical...
