214 resources

  • Sep 22nd, 2023 | webpage

    Open-source examples and guides for building with the OpenAI API.

  • Harika Abburi, Michael Suesserman, Nirma... | Sep 14th, 2023 | preprint

    Large Language Models (LLMs) have shown impressive performance across a variety of Artificial Intelligence (AI) and natural language processing tasks, such as content creation, report generation, etc. However, unregulated malign application of these models can create undesirable consequences such as generation of fake news, plagiarism, etc. As a result, accurate detection of AI-generated language can be crucial in responsible usage of LLMs. In this work, we explore 1) whether a certain body...

  • Ted Zadouri, Ahmet Üstün, Arash Ahmadian... | Sep 11th, 2023 | preprint

    The Mixture of Experts (MoE) is a widely known neural architecture where an ensemble of specialized sub-models optimizes overall performance with a constant computational cost. However, conventional MoEs pose challenges at scale due to the need to store all experts in memory. In this paper, we push MoE to the limit. We propose extremely parameter-efficient MoE by uniquely combining MoE architecture with lightweight experts. Our MoE architecture outperforms standard parameter-efficient...

  • Jacob Steiss, Tamara Tate, Steve Graham,... | Sep 7th, 2023 | preprint

    Offering students formative feedback on drafts of their writing is an effective way to facilitate writing development. This study examined the ability of generative AI (i.e., ChatGPT) to provide formative feedback on students’ compositions. We compared the quality of human and AI feedback by scoring the feedback each provided on secondary student essays (n=200) on five measures of feedback quality: the degree to which feedback (a) was criteria-based, (b) provided clear directions for...

  • Ahmed M. Elkhatat, Khaled Elsaid, Saeed ... | Sep 1st, 2023 | journalArticle

    The proliferation of artificial intelligence (AI)-generated content, particularly from models like ChatGPT, presents potential challenges to academic integrity and raises concerns about plagiarism. This study investigates the capabilities of various AI content detection tools in discerning human and AI-authored content. Fifteen paragraphs each from ChatGPT Models 3.5 and 4 on the topic of cooling towers in the engineering process and five human-written control responses were generated for...

  • Billy Ho Hung Cheung, Gary Kui Kai Lau, ... | Aug 29th, 2023 | journalArticle

    Large language models, in particular ChatGPT, have showcased remarkable language processing capabilities. Given the substantial workload of university medical staff, this study aims to assess the quality of multiple-choice questions (MCQs) produced by ChatGPT for use in graduate medical examinations, compared to questions written by university professoriate staff based on standard medical textbooks.

  • Aug 28th, 2023 | webpage
  • Aug 18th, 2023 | preprint
  • Ibraheim Ayub, Dathan Hamann, Carsten R ... | Aug 18th, 2023 | journalArticle
  • Chi-Min Chan, Weize Chen, Yusheng Su | Aug 14th, 2023 | preprint

    Text evaluation has historically posed significant challenges, often demanding substantial labor and time cost. With the emergence of large language models (LLMs), researchers have explored LLMs' potential as alternatives for human evaluation. While these single-agent-based approaches show promise, experimental results suggest that further advancements are needed to bridge the gap between their current effectiveness and human-level evaluation quality. Recognizing that best practices of human...

  • Arran Hamilton, Dylan Wiliam, John Hatti... | Aug 13th, 2023 | preprint

    We may already be in the era of ‘peak humanity’, a time where we have the greatest levels of education, reasoning, rationality, and creativity – spread out amongst the greatest number of us. A brilliant result of the massification of universal basic education and the power of the university. But with the rapid advancement of Artificial Intelligence (AI) that can already replicate and even exceed many of our reasoning capabilities – there may soon be less incentive for us to learn and grow....

  • Aug 9th, 2023 | blogPost

    Today the Biden-Harris Administration launched a two-year competition that uses AI to protect the U.S.'s most critical software.

  • Jeremy Roschelle | Aug 1st, 2023 | webpage

    Computer scientists should inform the public that human tests are not a valid way to judge the quality of an AI model, nor a good way to compare AI models to human experts. Computer scientists should stop using human tests to measure AI.

  • Atsushi Mizumoto, Masaki Eguchi | Aug 1st, 2023 | journalArticle
  • Ajay Bandi, Pydi Venkata Satya Ramesh Ad... | Jul 31st, 2023 | journalArticle

    Generative artificial intelligence (AI) has emerged as a powerful technology with numerous applications in various domains. There is a need to identify the requirements and evaluation metrics for generative AI models designed for specific tasks. The research aims to investigate the fundamental aspects of generative AI systems, including their requirements, models, input–output formats, and evaluation metrics. The study addresses key research questions and presents...

  • Paul Denny, Juho Leinonen, James Prather... | Jul 30th, 2023 | preprint

    With their remarkable ability to generate code, large language models (LLMs) are a transformative technology for computing education practice. They have created an urgent need for educators to rethink pedagogical approaches and teaching strategies for newly emerging skill sets. Traditional approaches to learning programming have focused on frequent and repeated practice at writing code. The ease with which code can now be generated has resulted in a shift in focus towards reading,...

  • Lydia Cao, Chris Dede | Jul 28th, 2023 | document

    Understanding the nature of generative AI is crucial for educators to navigate the evolving landscape of teaching and learning. In a new report from the Next Level Lab, Lydia Cao and Chris Dede reflect on the role of generative AI in learning and how this pushes us to reconceptualize our visions of effective education. Though...

  • Nicole M. Hutchins, Gautam Biswas | Jul 25th, 2023 | journalArticle

    This paper provides an experience report on a co‐design approach with teachers to co‐create learning analytics‐based technology to support problem‐based learning in middle school science classrooms. We have mapped out a workflow for such applications and developed design narratives to investigate the implementation, modifications and temporal roles of the participants in the design process. Our results provide precedent knowledge on co‐designing with experienced and novice teachers and...

  • Hugo Touvron, Louis Martin, Kevin Stone,... | Jul 19th, 2023 | preprint

    In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed-source models. We provide a detailed description of our...

Last update from database: 01/12/2025, 15:15 (UTC)