702 resources
- Sami Baral, Anthony F. Botelho, John A. ... | Dec 16th, 2021 | Conference Paper
- Randy Elliot Bennett, Mo Zhang, Sandip S... | Dec 16th, 2021 | Journal Article
This study examined differences in the composition processes used by educationally at-risk males and females who wrote essays as part of a high-school equivalency examination. Over 30,000 individuals were assessed, each taking one of 12 forms of the examination’s language arts writing subtest in 23 US states. Writing processes were inferred using features extracted from keystroke logs and aggregated into seven composite indicators. Results showed that females earned higher essay and total...
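Since the study above infers composition processes from features extracted from keystroke logs, here is a minimal, hypothetical sketch of that kind of feature extraction. It is not the authors' pipeline; the event format and the 2-second pause threshold are assumptions made purely for illustration.

```python
# Hypothetical sketch (not the study's pipeline): deriving writing-process
# features from a keystroke log. Each event is (timestamp_ms, key).
from statistics import mean

def process_features(events, pause_threshold_ms=2000):
    """Compute mean inter-key pause and mean burst length (keystrokes
    typed without a pause longer than `pause_threshold_ms`)."""
    gaps = [b[0] - a[0] for a, b in zip(events, events[1:])]
    bursts, current = [], 1
    for gap in gaps:
        if gap > pause_threshold_ms:
            bursts.append(current)
            current = 1
        else:
            current += 1
    bursts.append(current)
    return {
        "mean_pause_ms": mean(gaps) if gaps else 0.0,
        "mean_burst_len": mean(bursts),
    }

# Toy log: a short pause after "The", then a burst of typing.
log = [(0, "T"), (180, "h"), (350, "e"), (2600, " "),
       (2790, "c"), (2950, "a"), (3110, "t")]
print(process_features(log))  # mean_burst_len is 3.5 for this toy log
```

Composite indicators like those in the study would aggregate many such low-level features across an essay.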
- Guembe Blessing, Ambrose Azeta, Sanjay M... | Dec 16th, 2021 | Book Section
- Rishi Bommasani, Drew A. Hudson, Ehsan A... | Dec 16th, 2021 | Journal Article
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character. This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical...
- Elizabeth Clark, Tal August, Sofia Serra... | Dec 16th, 2021 | Preprint
Human evaluations are typically considered the gold standard in natural language generation, but as models' fluency improves, how well can evaluators detect and judge machine-generated text? We run a study assessing non-experts' ability to distinguish between human- and machine-authored text (GPT2 and GPT3) in three domains (stories, news articles, and recipes). We find that, without training, evaluators distinguished between GPT3- and human-authored text at random chance level. We explore...
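To make "at random chance level" concrete, the hedged sketch below runs an exact binomial test of evaluator accuracy against the 50% expected from guessing. The counts are invented for illustration and are not the paper's data.

```python
# Illustration only (not the paper's analysis): testing whether evaluators'
# accuracy at spotting machine-generated text differs from the 50%
# expected under random guessing.
from scipy.stats import binomtest

n_judgments = 200   # hypothetical number of human-vs-machine judgments
n_correct = 104     # hypothetical number judged correctly

result = binomtest(n_correct, n_judgments, p=0.5, alternative="two-sided")
print(f"accuracy = {n_correct / n_judgments:.2f}, p = {result.pvalue:.3f}")
# A large p-value here is consistent with chance-level performance.
```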
- Saad Khan, Jesse Hamer, Tiago Almeida | Dec 16th, 2021 | Conference Paper
We present Generate, an AI-human hybrid system to help education content creators interactively generate assessment content in an efficient and scalable manner. Our system integrates advanced natural language generation (NLG) approaches with the subject matter expertise of assessment developers to efficiently generate a large number of highly customized and valid assessment items. We utilize the powerful Transformer architecture, which is capable of leveraging substantive pretraining on several...
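As a rough illustration of Transformer-based item generation (not the Generate system itself), the sketch below prompts an off-the-shelf seq2seq model from the Hugging Face transformers library. The "t5-small" checkpoint and the prompt format are placeholders; in practice a model fine-tuned on assessment data would be used, with a human assessment developer reviewing each draft, as the abstract describes.

```python
# Minimal sketch of Transformer-based item generation, not the Generate
# system itself. Assumes a seq2seq model fine-tuned to turn a passage and
# an instruction into a draft assessment item; "t5-small" and the prompt
# format are placeholders.
from transformers import pipeline

generator = pipeline("text2text-generation", model="t5-small")

passage = ("Photosynthesis converts light energy into chemical energy "
           "stored in glucose, releasing oxygen as a by-product.")
prompt = f"generate question: {passage}"

draft_items = generator(prompt, max_length=64, num_return_sequences=1)
for item in draft_items:
    # In a hybrid workflow, a human assessment developer would review
    # and edit each draft before it is used.
    print(item["generated_text"])
```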
- Vivekanandan S. Kumar, David Boulanger | Sep 16th, 2021 | Journal Article
- Michael McTear | Dec 16th, 2021 | Book Section
- Gabriel Oliveira dos Santos, Esther Luna... | Dec 16th, 2021 | Preprint
This paper shows that CIDEr-D, a traditional evaluation metric for image description, does not work properly on datasets whose sentences are significantly longer than those in the MS COCO Captions dataset. We also show that CIDEr-D's performance is hampered by the lack of multiple reference sentences and by high variance in sentence length. To bypass this problem, we introduce CIDEr-R, which improves on CIDEr-D, making it more flexible in dealing with datasets with high...
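For readers unfamiliar with the metric being critiqued, the simplified sketch below computes a CIDEr-style score: TF-IDF-weighted n-gram cosine similarity between a candidate caption and its references, averaged over n-gram orders. It is not the official CIDEr-D or CIDEr-R implementation (it omits the length penalty and clipping, and uses a smoothed IDF), but it shows the mechanism the abstract refers to.

```python
# Simplified CIDEr-style score: TF-IDF weighted n-gram cosine similarity
# between a candidate caption and its references. Illustrative only.
import math
from collections import Counter

def ngrams(text, n):
    toks = text.lower().split()
    return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))

def cider_like(candidate, references, corpus_refs, n=4):
    score = 0.0
    num_images = len(corpus_refs)
    for k in range(1, n + 1):
        # Document frequency of each k-gram over the reference corpus.
        df = Counter()
        for refs in corpus_refs:
            seen = set()
            for r in refs:
                seen |= set(ngrams(r, k))
            df.update(seen)

        def tfidf(counts):
            # Smoothed IDF so unseen n-grams still get a positive weight.
            return {g: c * math.log((1 + num_images) / (1.0 + df[g]))
                    for g, c in counts.items()}

        cand_vec = tfidf(ngrams(candidate, k))
        sims = []
        for r in references:
            ref_vec = tfidf(ngrams(r, k))
            dot = sum(cand_vec.get(g, 0.0) * w for g, w in ref_vec.items())
            norm = (math.sqrt(sum(v * v for v in cand_vec.values()))
                    * math.sqrt(sum(v * v for v in ref_vec.values())))
            sims.append(dot / norm if norm else 0.0)
        score += (sum(sims) / len(sims)) / n
    return score

corpus = [["a cat sits on a mat", "a cat is on the mat"],
          ["a dog runs in the park", "a dog playing in a park"]]
print(cider_like("a cat on a mat", corpus[0], corpus))
```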
- University of Wolverhampton, UK, Hadeel ... | Dec 16th, 2021 | Conference Paper
- Masaki Uto | Jul 16th, 2021 | Journal Article
Automated essay scoring (AES) is the task of automatically assigning scores to essays as an alternative to grading by humans. Although traditional AES models typically rely on manually designed features, deep neural network (DNN)-based AES models that obviate the need for feature engineering have recently attracted increased attention. Various DNN-AES models with different characteristics have been proposed over the past few years. To our knowledge, however, no study has...
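As context for the survey, here is a schematic example of the kind of DNN-based AES model it covers: token embeddings, a recurrent encoder, and a regression head producing a normalized score. The architecture, hyperparameters, and toy batch are illustrative placeholders, not drawn from any particular surveyed model.

```python
# Schematic DNN-based automated essay scorer: embeddings -> LSTM encoder
# -> regression head predicting a normalized score in (0, 1).
import torch
import torch.nn as nn

class TinyEssayScorer(nn.Module):
    def __init__(self, vocab_size=10_000, emb_dim=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, token_ids):
        x = self.embed(token_ids)        # (batch, seq, emb_dim)
        outputs, _ = self.encoder(x)     # (batch, seq, hidden)
        pooled = outputs.mean(dim=1)     # average over the sequence
        return torch.sigmoid(self.head(pooled)).squeeze(-1)

model = TinyEssayScorer()
fake_batch = torch.randint(1, 10_000, (4, 200))  # 4 essays, 200 tokens each
print(model(fake_batch))                         # 4 scores in (0, 1)
```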
- Cong Wang, Xiufeng Liu, Lei Wang | Apr 16th, 2021 | Journal Article
- Tianqi Wang, Hiroaki Funayama, Hiroki Ou... | Dec 16th, 2021 | Journal Article
- Zichao Wang, Andrew Lan, Richard Baraniu... | Dec 16th, 2021 | Conference Paper
- Kim Christopher Williamson, René F. Kizi... | Dec 16th, 2021 | Conference Paper
- Mike Wu, Noah Goodman, Chris Piech | Dec 16th, 2021 | Journal Article
High-quality computer science education is limited by the difficulty of providing instructor feedback to students at scale. While this feedback could in principle be automated, supervised approaches to predicting the correct feedback are bottlenecked by the intractability of annotating large quantities of student code. In this paper, we instead frame the problem of providing feedback as few-shot classification, where a meta-learner adapts to give feedback to student code on a new programming...
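To illustrate the few-shot framing described above, the sketch below uses a generic nearest-prototype rule (in the spirit of prototypical networks) over stand-in embeddings of student programs. It is not the paper's meta-learning method; the label names and random embeddings are purely hypothetical.

```python
# Generic few-shot classification baseline: assign each query to the
# class whose mean support embedding (prototype) is nearest. Embeddings
# here are random stand-ins for encoded student programs.
import numpy as np

def nearest_prototype(support_emb, support_labels, query_emb):
    classes = sorted(set(support_labels))
    prototypes = np.stack([
        support_emb[np.array(support_labels) == c].mean(axis=0)
        for c in classes
    ])
    dists = np.linalg.norm(query_emb[:, None, :] - prototypes[None, :, :], axis=-1)
    return [classes[i] for i in dists.argmin(axis=1)]

rng = np.random.default_rng(0)
support = rng.normal(size=(6, 32))   # 3 hypothetical feedback labels x 2 examples
labels = ["off_by_one", "off_by_one", "wrong_loop",
          "wrong_loop", "correct", "correct"]
queries = rng.normal(size=(4, 32))   # 4 new student submissions
print(nearest_prototype(support, labels, queries))
```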
- Khensani Xivuri, Hossana Twinomurinzi, D... | Dec 16th, 2021 | Book Section
- Jing Xu, Da Ju, Margaret Li | Dec 16th, 2021 | Conference Paper
- Goh Ying Yingsoon, Niaz Chowdhury, Ganes... | Dec 16th, 2021 | Book Section
The teaching of Chinese as a foreign language can be supported by AI technology. Traditionally, non-native learners could interact only with their instructors and depended on them solely for speaking practice. With the advancement of AI technology, however, learners can use it for interactive speaking skill development. In this study, the learners were instructed to download an application at https://m.wandoujia.com/apps/6790950. The process of preparing this AI...