702 resources
- Shashank Sonkar, Richard G. Baraniuk, St... (conference paper, Jul 7th, 2023)
- Pragnya Sridhar, Aidan Doyle, Arav Agarw... (conference paper, Jul 7th, 2023)
- Gautam Yadav, Ying-Jui Tseng, Xiaolin Ni... (conference paper, Jul 7th, 2023)
- Sami Baral, Anthony Botelho, Abhishek Sa... (journal article, Jul 5th, 2023)
Teachers often rely on a range of open-ended problems to assess students' understanding of mathematical concepts. Beyond traditional conceptions of student open-ended work, commonly in the form of textual short-answer or essay responses, figures, tables, number lines, graphs, and pictographs are other common examples of open-ended work in mathematics. While recent developments in natural language processing and machine learning have led to automated methods to...
- AIED (book, Jul 3rd, 2007)
- AIED (book, Jul 3rd, 2007)
- Tim Gorichanaz (journal article, Jul 3rd, 2023)
- Weixin Liang, Mert Yuksekgonul, Yining M... (journal article, Jul 29th, 2023)
- Rania Abdelghani, Yen-Hsiang Wang, Xingd... (journal article, Jun 30th, 2023)
In order to train children's ability to ask curiosity-driven questions, previous research has explored designing specific exercises that provide semantic and linguistic cues to help formulate such questions. Despite showing pedagogical efficiency, this method is still limited because it relies on generating these cues by hand, which can be a very costly process. In this context, we propose to leverage advances in the field of natural language processing (NLP) and investigate the...
- (preprint, Jun 28th, 2023)
- Josh Dzieza (web page, Jun 20th, 2023)
How many humans does it take to make tech seem human? Millions.
- (web page, Jun 19th, 2023)
If the brains behind artificial intelligence claim their creations can perform in tests like humans, then surely those results should be assessed as if they were produced by humans. AQA's Head of Research and Development, Dr Cesare Aloisi, says this is vital to maintaining trust in AI, but fears that is not what is happening.
- (web page, Jun 19th, 2023)
A conversational AI system that listens, learns, and challenges.
- Raunak Chowdhuri, Neil Deshmukh, David D... (web page, Jun 18th, 2023)
A new tool that blends your everyday work apps into one. It's the all-in-one workspace for you and your team.
- Ziyang Luo, Can Xu, Pu Zhao (preprint, Jun 14th, 2023)
Code Large Language Models (Code LLMs), such as StarCoder, have demonstrated exceptional performance in code-related tasks. However, most existing models are solely pre-trained on extensive raw code data without instruction fine-tuning. In this paper, we introduce WizardCoder, which empowers Code LLMs with complex instruction fine-tuning, by adapting the Evol-Instruct method to the domain of code. Through comprehensive experiments on four prominent code generation benchmarks, namely...
- (blog post, Jun 10th, 2023)
- Can Xu, Qingfeng Sun, Kai Zheng (preprint, Jun 10th, 2023)
Training large language models (LLMs) with open-domain instruction-following data brings colossal success. However, manually creating such instruction data is very time-consuming and labor-intensive, and humans may struggle to produce high-complexity instructions. In this paper, we show an avenue for creating large amounts of instruction data with varying levels of complexity using an LLM instead of humans. Starting with an initial set of instructions, we use our proposed Evol-Instruct to...
- EdArXiv (report, Jun 5th, 2023)
Algorithms and machine learning models are being used more frequently in educational settings, but there are concerns that they may discriminate against certain groups. While there is some research on algorithmic fairness, the current research has two main issues. First, it often focuses on gender and race while ignoring other groups. Second, studies often find algorithmic bias in educational models but do not explore ways to reduce it. This study evaluates three drop-out...
- Annenberg Institute at Brown... (report, Jun 3rd, 2023)
Providing consistent, individualized feedback to teachers is essential for improving instruction but can be prohibitively resource-intensive in most educational contexts. We develop M-Powering Teachers, an automated tool based on natural language processing to give teachers feedback on their uptake of student contributions, a high-leverage dialogic teaching practice that makes students feel heard. We conduct a randomized controlled trial in an online computer science course (n=1,136...
- EdArXiv (report, Jun 2nd, 2023)
Coaching, which involves classroom observation and expert feedback, is a widespread and fundamental part of teacher training. However, the majority of teachers do not have access to consistent, high-quality coaching due to limited resources and access to expertise. We explore whether generative AI could become a cost-effective complement to expert feedback by serving as an automated teacher coach. In doing so, we propose three teacher coaching tasks for generative AI: (A) scoring transcript...