4 resources
- Mike Wu, Noah Goodman, Chris Piech | Oct 24th, 2021 | Journal Article
  High-quality computer science education is limited by the difficulty of providing instructor feedback to students at scale. While this feedback could in principle be automated, supervised approaches to predicting the correct feedback are bottlenecked by the intractability of annotating large quantities of student code. In this paper, we instead frame the problem of providing feedback as few-shot classification, where a meta-learner adapts to give feedback to student code on a new programming...
- Annenberg Institute at Brown... | Jun 3rd, 2023 | Report
  Providing consistent, individualized feedback to teachers is essential for improving instruction but can be prohibitively resource-intensive in most educational contexts. We develop M-Powering Teachers, an automated tool based on natural language processing to give teachers feedback on their uptake of student contributions, a high-leverage dialogic teaching practice that makes students feel heard. We conduct a randomized controlled trial in an online computer science course (n=1,136...
- Allen Nie, Yash Chandak, Miroslav Suzara... | Apr 25th, 2024 | Preprint
  Large language models (LLMs) are quickly being adopted in a wide range of learning experiences, especially via ubiquitous and broadly accessible chat interfaces like ChatGPT and Copilot. This type of interface is readily available to students and teachers around the world, yet relatively little research has been done to assess the impact of such generic tools on student learning. Coding education is an interesting test case, both because LLMs have strong performance on coding tasks, and...
- Rishi Bommasani, Drew A. Hudson, Ehsan A... | Oct 24th, 2021 | Journal Article
  AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character. This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical...