
3 resources

  • Yerin Kwak, Zachary A. Pardos | May 8th, 2024 | journalArticle

    The adoption of large language models (LLMs) in education holds much promise. However, like many technological innovations before them, adoption and access can often be inequitable from the outset, creating more divides than they bridge. In this paper, we explore the magnitude of the country and language divide in the leading open‐source and proprietary LLMs with respect to knowledge of K‐12 taxonomies in a variety of countries and their performance on tagging problem content with the...

  • Zachary A. Pardos, Shreya Bhandari | Feb 14th, 2023 | preprint

    Large Language Models (LLMs), such as ChatGPT, are quickly advancing AI to the frontiers of practical consumer use and leading industries to re-evaluate how they allocate resources for content production. Authoring of open educational resources and hint content within adaptive tutoring systems is labor intensive. Should LLMs like ChatGPT produce educational content on par with human-authored content, the implications would be significant for further scaling of computer tutoring system...

  • Yunting Liu, Shreya Bhandari, Zachary A.... | May 22nd, 2025 | journalArticle

    Effective educational measurement relies heavily on the curation of well‐designed item pools. However, item calibration is time consuming and costly, requiring a sufficient number of respondents to estimate the psychometric properties of items. In this study, we explore the potential of six different large language models (LLMs; GPT‐3.5, GPT‐4, Llama 2, Llama 3, Gemini‐Pro and Cohere Command R Plus) to generate responses with psychometric properties comparable to those of human respondents....

Last update from database: 22/10/2025, 00:15 (UTC)
Powered by Zotero and Kerko.