702 resources
- Jurgen Rudolph, Samson Tan, Shannon Tan,... | Jan 25th, 2023 | journalArticle
- Tamara Tate, Shayan Doroudi, Daniel Ritc... | Jan 10th, 2023 | preprint
The public release and surprising capacity of ChatGPT has brought AI-enabled text generation into the forefront for educators and academics. ChatGPT and similar text generation tools raise numerous questions for educational practitioners, policymakers, and researchers. We begin by first describing what large language models are and how they function, and then situate them in the history of technology’s complex interrelationship with literacy, cognition, and education. Finally, we discuss implications for the field.
- Jason Wei, Xuezhi Wang, Dale Schuurmans,... | Jan 10th, 2023 | preprint
We explore how generating a chain of thought -- a series of intermediate reasoning steps -- significantly improves the ability of large language models to perform complex reasoning. In particular, we show how such reasoning abilities emerge naturally in sufficiently large language models via a simple method called chain of thought prompting, where a few chain of thought demonstrations are provided as exemplars in prompting. Experiments on three large language models show that chain of...
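To illustrate the chain-of-thought prompting method the abstract describes, here is a minimal sketch (not the paper's exact prompts) that assembles a few-shot prompt whose exemplars include intermediate reasoning steps before the final answer; the exemplar content and helper names are illustrative assumptions:

```python
# Chain-of-thought few-shot prompting: each exemplar pairs a question with
# intermediate reasoning steps, so the model is nudged to produce its own
# reasoning chain before answering the new question.

EXEMPLARS = [
    {
        "question": "Roger has 5 tennis balls. He buys 2 cans of 3 balls. "
                    "How many balls does he have now?",
        "reasoning": "Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
                     "5 + 6 = 11.",
        "answer": "11",
    },
]

def build_cot_prompt(question: str) -> str:
    """Assemble a few-shot prompt whose demonstrations show reasoning chains."""
    parts = []
    for ex in EXEMPLARS:
        parts.append(
            f"Q: {ex['question']}\nA: {ex['reasoning']} The answer is {ex['answer']}."
        )
    parts.append(f"Q: {question}\nA:")  # the model continues with its own chain
    return "\n\n".join(parts)

prompt = build_cot_prompt(
    "A cafeteria had 23 apples. It used 20 and bought 6 more. How many now?"
)
```

The resulting string would be sent to any completion-style LLM; the demonstrations, not any model change, are what elicits the step-by-step reasoning.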
- Bodong Chen, Xinran Zhu, Fernando Díaz d... | Jan 1st, 2023 | journalArticle
Generative artificial intelligence (GenAI) is penetrating various social sectors, motivating a strong need to teach AI literacy to younger generations. While substantial efforts have been made to teach AI literacy and to use AI to facilitate learning, few studies have provided empirical accounts of students' nuanced processes of using GenAI for learning. In this study, we engaged a group of high school students in leveraging ChatGPT to support their knowledge building efforts....
- Peerawat Chomphooyod, Atiwong Suchato, N... | Jan 1st, 2023 | journalArticle
English grammar multiple-choice questions (MCQs) can be automatically generated to reduce preparation time. Previous studies have focused on semiautomated methods based on the transformation of human-made sentences/articles into MCQs, owing to which the number of generated questions is dependent on the size of a given text corpus. This study proposes an artificial intelligence-assisted MCQ generation system that increases the number of generable questions using controllable text generation...
- Sahar Abdelnabi, Amr Gomaa, Sarath Sivap... | Oct 29th, 2023 | preprint
There is growing interest in using Large Language Models (LLMs) in multi-agent systems to tackle interactive real-world tasks that require effective collaboration and assessing complex situations. Yet, we still have a limited understanding of LLMs' communication and decision-making abilities in multi-agent setups. The fundamental task of negotiation spans many key features of communication, such as cooperation, competition, and manipulation potentials. Thus, we propose using scorable...
- Griffin Adams, Alexander Fabbri, Faisal ... | Oct 29th, 2023 | preprint
Selecting the ``right'' amount of information to include in a summary is a difficult task. A good summary should be detailed and entity-centric without being overly dense and hard to follow. To better understand this tradeoff, we solicit increasingly dense GPT-4 summaries with what we refer to as a ``Chain of Density'' (CoD) prompt. Specifically, GPT-4 generates an initial entity-sparse summary before iteratively incorporating missing salient entities without increasing the length. Summaries...
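The iterative densification loop described in this abstract can be sketched as follows. `llm` is a stand-in for any text-completion callable, and the prompt wording is an illustrative assumption, not the paper's exact "Chain of Density" prompt:

```python
# Chain-of-Density sketch: start from an entity-sparse summary, then
# repeatedly ask the model to fold in missing salient entities without
# increasing the summary's length.

def chain_of_density(article: str, llm, steps: int = 5) -> list[str]:
    """Return the sequence of increasingly dense summaries."""
    summaries = []
    summary = llm(f"Write a short, entity-sparse summary of:\n{article}")
    summaries.append(summary)
    for _ in range(steps - 1):
        summary = llm(
            "Rewrite this summary to include 1-3 missing salient entities "
            "from the article, without increasing its length.\n"
            f"Article:\n{article}\nCurrent summary:\n{summary}"
        )
        summaries.append(summary)
    return summaries

# A dummy `llm` that counts calls shows the control flow without a real model:
calls = []
def fake_llm(prompt: str) -> str:
    calls.append(prompt)
    return f"summary v{len(calls)}"

out = chain_of_density("Some article text.", fake_llm, steps=3)
```

Returning every intermediate summary, rather than only the last, mirrors the paper's interest in the density/readability tradeoff across steps.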
- Katarzyna Alexander, Christine Savvidou,... | Oct 29th, 2023 | journalArticle
- Amos Azaria, Tom Mitchell | Oct 29th, 2023 | preprint
While Large Language Models (LLMs) have shown exceptional performance in various tasks, one of their most prominent drawbacks is generating inaccurate or false information with a confident tone. In this paper, we provide evidence that the LLM's internal state can be used to reveal the truthfulness of statements. This includes both statements provided to the LLM, and statements that the LLM itself generates. Our approach is to train a classifier that outputs the probability that a statement...
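The classifier-on-internal-state idea can be sketched with a small logistic-regression probe. Real inputs would be the LLM's hidden-state vectors for each statement; the synthetic 2-D vectors and all names below are assumptions used only to show the training loop:

```python
import math
import random

# Probe sketch: map a hidden-state vector for a statement to a probability
# that the statement is true, via a logistic-regression classifier.

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def train_truth_probe(states, labels, lr=0.5, epochs=200):
    """Fit logistic-regression weights over hidden-state dimensions."""
    dim = len(states[0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(states, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            g = p - y  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def truth_probability(state, w, b):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, state)) + b)

# Synthetic "hidden states": true statements cluster high in dimension 0.
random.seed(0)
true_states = [[1.0 + random.gauss(0, 0.1), random.gauss(0, 0.1)] for _ in range(20)]
false_states = [[-1.0 + random.gauss(0, 0.1), random.gauss(0, 0.1)] for _ in range(20)]
w, b = train_truth_probe(true_states + false_states, [1] * 20 + [0] * 20)
```

In practice the probe would be trained on hidden states from labeled true/false statements and then applied to the model's own generations.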
- Ayan Kumar Bhowmick, Ashish Jagmohan, Ad... | Oct 29th, 2023 | bookSection
- Melissa Bond, Hassan Khosravi, Maarten D... | Oct 29th, 2023 | journalArticle
- Jill Burstein, Kevin Yancey, Klinton Bic... | Oct 29th, 2023 | document
- Rachel Van Campenhout, Michelle Clark, B... | Oct 29th, 2023 | conferencePaper
- Eric C. K. Cheng, Tianchong Wang, Tim Sc... | Oct 29th, 2023 | book
- Peerawat Chomphooyod, Atiwong Suchato, N... | Oct 29th, 2023 | journalArticle
- Wei Dai, Jionghao Lin, Hua Jin | Jul 29th, 2023 | conferencePaper
Educational feedback has been widely acknowledged as an effective approach to improving student learning. However, scaling effective practices can be laborious and costly, which motivated researchers to work on automated feedback systems (AFS). Inspired by recent advancements in pre-trained language models (e.g., ChatGPT), we posit that such models might advance the existing knowledge of textual feedback generation in AFS because of their capability to offer natural-sounding and...
- Sabina Elkins, Ekaterina Kochmar, Jackie... | Oct 29th, 2023 | preprint
Controllable text generation (CTG) by large language models has a huge potential to transform education for teachers and students alike. Specifically, high quality and diverse question generation can dramatically reduce the load on teachers and improve the quality of their educational content. Recent work in this domain has made progress with generation, but fails to show that real teachers judge the generated questions as sufficiently useful for the classroom setting; or if instead the...
- Jinlan Fu, See-Kiong Ng, Zhengbao Jiang,... | Oct 29th, 2023 | journalArticle
Generative Artificial Intelligence (AI) has enabled the development of sophisticated models that are capable of producing high-caliber text, images, and other outputs through the utilization of large pre-trained models. Nevertheless, assessing the quality of the generation is an even more arduous task than the generation itself, and this issue has not been given adequate consideration recently. This paper proposes a novel evaluation framework, GPTScore, which utilizes the emergent abilities...
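The core of the GPTScore idea — scoring a generation by the likelihood an LM assigns to it under an evaluation instruction — can be sketched as below. `token_logprobs` stands in for a real LM's log-probability API, and the toy unigram model is an assumption used only to make the sketch runnable:

```python
import math

# GPTScore sketch: a generation is scored by the average per-token
# log-probability an LM assigns to it, conditioned on an evaluation
# instruction (e.g. an aspect like fluency or relevance).

def gptscore(instruction: str, text: str, token_logprobs) -> float:
    """Average per-token log-probability of `text` given `instruction`."""
    logps = token_logprobs(instruction, text.split())
    return sum(logps) / len(logps)

# Toy stand-in LM: a unigram model over a tiny vocabulary; out-of-vocabulary
# tokens get a very low probability, so fluent in-vocabulary text scores higher.
VOCAB = {"the": 0.2, "cat": 0.1, "sat": 0.1, "mat": 0.1, "on": 0.1}
def toy_logprobs(instruction, tokens):
    return [math.log(VOCAB.get(t, 1e-6)) for t in tokens]

good = gptscore("Rate fluency:", "the cat sat on the mat", toy_logprobs)
bad = gptscore("Rate fluency:", "zxq qqw the", toy_logprobs)
```

With a real LM, the instruction conditions the likelihood so that different evaluation aspects (fluency, relevance, informativeness) yield different scores for the same text, with no training of a separate evaluator.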