269 resources
-
Arjun Kharpal|May 24th, 2023|webpage
Artificial intelligence has been thrust into the center of conversations among policymakers grappling with what the tech should look like in the future.
-
Tim Dettmers, Artidoro Pagnoni, Ari Holt...|May 23rd, 2023|preprint
We present QLoRA, an efficient finetuning approach that reduces memory usage enough to finetune a 65B parameter model on a single 48GB GPU while preserving full 16-bit finetuning task performance. QLoRA backpropagates gradients through a frozen, 4-bit quantized pretrained language model into Low Rank Adapters (LoRA). Our best model family, which we name Guanaco, outperforms all previous openly released models on the Vicuna benchmark, reaching 99.3% of the performance level of ChatGPT while...
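To make the setup concrete, here is a minimal sketch of QLoRA-style finetuning using the Hugging Face transformers, bitsandbytes, and peft libraries. The model name and hyperparameters are illustrative assumptions, not the paper's exact Guanaco recipe.

```python
# Sketch of QLoRA-style finetuning: a frozen 4-bit (NF4) base model with
# trainable low-rank adapters. Checkpoint and hyperparameters are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "huggyllama/llama-7b"  # assumption: any causal LM checkpoint works here

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store base weights in 4 bits
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_use_double_quant=True,         # double-quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,  # do the matmuls in bf16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)  # freeze base weights, prep for adapter training

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # attach adapters to attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()          # only the LoRA weights are trainable
```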
-
Yang Liu, Dan Iter, Yichong Xu|May 23rd, 2023|preprint
The quality of texts generated by natural language generation (NLG) systems is hard to measure automatically. Conventional reference-based metrics, such as BLEU and ROUGE, have been shown to have relatively low correlation with human judgments, especially for tasks that require creativity and diversity. Recent studies suggest using large language models (LLMs) as reference-free metrics for NLG evaluation, which have the benefit of being applicable to new tasks that lack human references....
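Reference-free LLM evaluation of the kind described here usually amounts to prompting the model to rate a candidate text against its source on a stated criterion. The sketch below is a generic illustration; `call_llm` is a hypothetical stand-in for a chat-completion client, and the prompt and 1-5 scale are assumptions rather than the paper's exact protocol.

```python
# Hypothetical reference-free NLG evaluation: ask an LLM to score a summary's
# coherence against the source document, with no human-written reference needed.
from typing import Callable

PROMPT = """You will be given a source document and a candidate summary.
Rate the coherence of the summary on a scale from 1 (incoherent) to 5 (fully coherent).
Reply with the number only.

Source:
{source}

Summary:
{summary}

Coherence score (1-5):"""

def score_coherence(source: str, summary: str, call_llm: Callable[[str], str]) -> int:
    reply = call_llm(PROMPT.format(source=source, summary=summary))
    return int(reply.strip().split()[0])  # naive parsing; real pipelines average several sampled scores
```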
-
Reza Hadi Mogavi, Chao Deng, Justin Juho...|May 22nd, 2023|preprint
Understanding user perspectives on Artificial Intelligence (AI) in education is essential for creating pedagogically effective and ethically responsible AI-integrated learning environments. In this paper, we conduct an extensive qualitative content analysis of four major social media platforms (Twitter, Reddit, YouTube, and LinkedIn) to explore the user experience (UX) and perspectives of early adopters toward ChatGPT (an AI chatbot technology) in various education sectors. We investigate the...
-
Rylan Schaeffer, Brando Miranda, Sanmi K...|May 22nd, 2023|preprint
Recent work claims that large language models display emergent abilities, abilities not present in smaller-scale models that are present in larger-scale models. What makes emergent abilities intriguing is two-fold: their sharpness, transitioning seemingly instantaneously from not present to present, and their unpredictability, appearing at seemingly unforeseeable model scales. Here, we present an alternative explanation for emergent abilities: that for a particular task and model family,...
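The core argument, that apparent emergence can be an artifact of a discontinuous metric applied to smoothly improving models, is easy to reproduce numerically. The toy simulation below is an illustration of that reasoning, not the authors' code; all numbers are made up.

```python
# Toy illustration: smooth per-token improvement looks "emergent" under a
# harsh all-or-nothing metric such as exact match on a multi-token answer.
import numpy as np

scales = np.logspace(7, 11, 9)                            # hypothetical model sizes (parameters)
per_token_acc = np.clip((np.log10(scales) - 6) / 5, 0, 1)  # smooth, gradual improvement with scale
answer_len = 10                                            # exact match needs 10 correct tokens in a row

exact_match = per_token_acc ** answer_len                  # probability every token is right

for n, tok, em in zip(scales, per_token_acc, exact_match):
    print(f"{n:12.0f} params | per-token acc {tok:.2f} | exact match {em:.4f}")
# Per-token accuracy rises steadily, while exact match stays near zero and then
# climbs sharply: an apparent "emergent" ability created by the choice of metric.
```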
-
Noah Shinn, Federico Cassano, Beck Labas...|May 21st, 2023|preprint
Large language models (LLMs) have been increasingly used to interact with external environments (e.g., games, compilers, APIs) as goal-driven agents. However, it remains challenging for these language agents to quickly and efficiently learn from trial-and-error as traditional reinforcement learning methods require extensive training samples and expensive model fine-tuning. We propose Reflexion, a novel framework to reinforce language agents not by updating weights, but instead through...
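Reflexion's loop can be summarized as: act in the environment, receive sparse feedback, convert that feedback into a verbal self-reflection, and condition the next attempt on the accumulated reflections, with no weight updates. The sketch below is a schematic of that loop under assumed helpers (`run_agent`, `evaluate`, and `call_llm` are hypothetical placeholders), not the authors' implementation.

```python
# Schematic of a Reflexion-style loop: no gradient updates, only an episodic
# memory of verbal self-reflections fed back into the next attempt.

def reflexion_loop(task: str, run_agent, evaluate, call_llm, max_trials: int = 5) -> str:
    reflections: list[str] = []                      # episodic memory of lessons learned
    trajectory = ""
    for trial in range(max_trials):
        trajectory = run_agent(task, reflections)    # attempt the task, conditioned on past reflections
        success, feedback = evaluate(trajectory)     # e.g. unit tests, game reward, API errors
        if success:
            return trajectory
        # Turn the sparse feedback into a verbal lesson for the next trial.
        reflections.append(call_llm(
            f"Task: {task}\nAttempt: {trajectory}\nFeedback: {feedback}\n"
            "In one or two sentences, explain what went wrong and what to do differently."
        ))
    return trajectory                                # best effort after max_trials
```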
-
Rohan Anil, Andrew M. Dai, Orhan Firat|May 17th, 2023|preprint
We introduce PaLM 2, a new state-of-the-art language model that has better multilingual and reasoning capabilities and is more compute-efficient than its predecessor PaLM. PaLM 2 is a Transformer-based model trained using a mixture of objectives. Through extensive evaluations on English and multilingual language and reasoning tasks, we demonstrate that PaLM 2 has significantly improved quality on downstream tasks across different model sizes, while simultaneously exhibiting faster and more...
-
TED|May 1st, 2023|videoRecording
Sal Khan, the founder and CEO of Khan Academy, thinks artificial intelligence could spark the greatest positive transformation education has ever seen. He shares the opportunities he sees for students and educators to collaborate with AI tools -- including the potential of a personal AI tutor for every student and an AI teaching assistant for every teacher -- and demos some exciting new features for their educational chatbot, Khanmigo.
-
Department of Education|May 28th, 2023|report
-
Apr 24th, 2023|webpage
Customer support agents given access to a generative AI chatbot were 14% more productive, but those gains were much higher for lower-performing workers.
-
Helen Crompton, Diane Burke|Apr 24th, 2023|journalArticle
This systematic review provides unique findings with an up-to-date examination of artificial intelligence (AI) in higher education (HE) from 2016 to 2022. Using PRISMA principles and protocol, 138 articles were identified for a full examination. Using a priori and grounded coding, the data from the 138 articles were extracted, analyzed, and coded. The findings of this study show that in 2021 and 2022, publications rose to nearly two to three times the number seen in previous years. With this rapid...
-
Niharika Singh|Apr 21st, 2023|blogPost
The most advanced foundation models for AI are only partially open-source and are only available through commercial APIs. This restricts their use and limits research and customization. However, a project called RedPajama now aims to create leading, fully open-source models. The first step of this project, reproducing the LLaMA training dataset, has been completed. Open-source models have made significant progress recently, and AI is experiencing a moment similar to the Linux movement....
-
Apr 20th, 2023|webpage
In schools and conferences across the country, everyone seems to be talking about AI (Artificial Intelligence) and ChatGPT. Teachers are using ChatGPT to create lesson plans. Students are using ChatGPT to do their homework. And bloggers are using ChatGPT to draft content (although we promise not this one!). We understand the opportunities these resources can...
-
Wei Dai, Jionghao Lin, Flora Jin|Apr 13th, 2023|preprint
Educational feedback has been widely acknowledged as an effective approach to improving student learning. However, scaling effective practices can be laborious and costly, which motivated researchers to work on automated feedback systems (AFS). Inspired by the recent advancements in the pre-trained language models (e.g., ChatGPT), we posit that such models might advance the existing knowledge of textual feedback generation in AFS because of their capability to offer natural-sounding and...
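Feedback generation with a pretrained language model typically reduces to a rubric-shaped prompt over the assignment and the student's submission. The sketch below is a hypothetical illustration of that idea; `call_llm` stands in for a chat-completion client, and the rubric wording is an assumption, not the paper's template.

```python
# Hypothetical prompt-based feedback generation for an automated feedback system (AFS).
FEEDBACK_PROMPT = """You are a supportive writing tutor.
Assignment prompt:
{assignment}

Student submission:
{submission}

Give two or three sentences of feedback that (1) name one concrete strength,
(2) identify the most important issue, and (3) suggest a specific next step."""

def generate_feedback(assignment: str, submission: str, call_llm) -> str:
    # call_llm is a placeholder for an actual LLM client call
    return call_llm(FEEDBACK_PROMPT.format(assignment=assignment, submission=submission))
```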
-
Sabina Elkins, Ekaterina Kochmar, Jackie...|Apr 13th, 2023|preprint
Controllable text generation (CTG) by large language models has a huge potential to transform education for teachers and students alike. Specifically, high quality and diverse question generation can dramatically reduce the load on teachers and improve the quality of their educational content. Recent work in this domain has made progress with generation, but fails to show that real teachers judge the generated questions as sufficiently useful for the classroom setting; or if instead the...
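In practice, controllable question generation often means injecting a control attribute into the generation prompt. The sketch below is an assumption-laden illustration: `call_llm` is a placeholder client, and the use of Bloom's-taxonomy levels as the control attribute is an example, not necessarily the attribute set studied in the paper.

```python
# Hypothetical controllable question generation: the control attribute
# (a Bloom's-taxonomy level) is injected into the prompt.
BLOOM_LEVELS = ["remember", "understand", "apply", "analyze", "evaluate", "create"]

def generate_question(passage: str, level: str, call_llm) -> str:
    assert level in BLOOM_LEVELS, f"unknown level: {level}"
    prompt = (
        f"Read the passage below and write one {level}-level question "
        f"(Bloom's taxonomy) suitable for a classroom quiz.\n\n"
        f"Passage:\n{passage}\n\nQuestion:"
    )
    return call_llm(prompt)
```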
-
Yi Zheng, Steven Nydick, Sijia Huang|Apr 12th, 2023|conferencePaper
The recent surge of machine learning (ML) has impacted many disciplines, including educational and psychological measurement (hereafter shortened as measurement, “M”). The measurement literature has seen a rapid growth in studies that explore using ML methods to solve measurement problems. However, there exist gaps between the typical paradigm of ML and fundamental principles of measurement. The MxML project was created to explore how the measurement community might potentially redefine the...
-
Ameet Deshpande, Vishvak Murahari, Tanma...|Apr 11th, 2023|preprint
Large language models (LLMs) have shown incredible capabilities and transcended the natural language processing (NLP) community, with adoption throughout many services like healthcare, therapy, education, and customer service. Since users include people with critical information needs like students or patients engaging with chatbots, the safety of these systems is of prime importance. Therefore, a clear understanding of the capabilities and limitations of LLMs is necessary. To this end, we...
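A typical harness for this kind of safety analysis assigns a persona through the system prompt, generates completions, and scores them with a toxicity classifier. The sketch below is hypothetical: `call_llm` and `toxicity_score` are placeholders (e.g. a chat API plus the Perspective API or a local classifier), and the persona instruction is an illustrative assumption.

```python
# Hypothetical persona-conditioned toxicity measurement.
from statistics import mean

def persona_toxicity(persona: str, prompts: list[str], call_llm, toxicity_score) -> float:
    # call_llm(system_prompt, user_prompt) -> completion; toxicity_score(text) -> float in [0, 1]
    system = f"Speak exactly like {persona}." if persona else "You are a helpful assistant."
    scores = [toxicity_score(call_llm(system, p)) for p in prompts]
    return mean(scores)   # compare against the no-persona baseline to see the shift
```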
-
Shibani Santurkar, Esin Durmus, Faisal L...|Mar 30th, 2023|preprint
Language models (LMs) are increasingly being used in open-ended contexts, where the opinions reflected by LMs in response to subjective queries can have a profound impact, both on user satisfaction, as well as shaping the views of society at large. In this work, we put forth a quantitative framework to investigate the opinions reflected by LMs -- by leveraging high-quality public opinion polls and their associated human responses. Using this framework, we create OpinionsQA, a new dataset for...
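A framework like this boils down to comparing the LM's answer distribution with the human poll distribution for each survey question. The worked sketch below uses one minus a normalized 1-D Wasserstein distance over ordinal options as the alignment measure; the metric choice and the numbers are illustrative, not necessarily the paper's exact formulation.

```python
# Illustrative opinion-alignment computation for one ordinal survey question.
import numpy as np
from scipy.stats import wasserstein_distance

options = np.arange(4)                       # e.g. "not at all" ... "a great deal"
human   = np.array([0.10, 0.25, 0.40, 0.25]) # poll respondents' answer shares (made up)
lm      = np.array([0.05, 0.15, 0.30, 0.50]) # LM's probability over the same options (made up)

dist = wasserstein_distance(options, options, u_weights=human, v_weights=lm)
alignment = 1 - dist / (options.max() - options.min())   # normalize to [0, 1]
print(f"opinion alignment: {alignment:.3f}")             # 1.0 means identical distributions
```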