2 resources

  • Jasmin Wachter, Michael Radloff, Maja Sm... | Mar 17th, 2025 | preprint

    We introduce an Item Response Theory (IRT)-based framework to detect and quantify socioeconomic bias in large language models (LLMs) without relying on subjective human judgments. Unlike traditional methods, IRT accounts for item difficulty, improving ideological bias estimation. We fine-tune two LLM families (Meta-LLaMa 3.2-1B-Instruct and ChatGPT 3.5) to represent distinct ideological positions and introduce a two-stage approach: (1) modeling response avoidance and (2) estimating...
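    The IRT framing in this abstract can be illustrated with the standard two-parameter logistic (2PL) model; the abstract does not specify which IRT variant the authors use, so this is a generic sketch:

    ```python
    import math

    def irt_2pl(theta: float, a: float, b: float) -> float:
        """Two-parameter logistic (2PL) IRT model: probability that a
        respondent with latent trait `theta` endorses an item with
        discrimination `a` and difficulty `b`."""
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))
    ```

    Under this model, a "harder" item (larger b) requires a stronger latent position before a model endorses it, which is the item-difficulty correction the abstract credits IRT with.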

  • Eirini Ntoutsi, Pavlos Fafalios, Ujwal G... | Feb 3rd, 2020 | journalArticle

    Artificial Intelligence (AI)-based systems are widely employed nowadays to make decisions that have far-reaching impact on individuals and society. Their decisions might affect everyone, everywhere, and anytime, entailing concerns about potential human rights issues. Therefore, it is necessary to move beyond traditional AI algorithms optimized for predictive performance and embed ethical and legal principles in their design, training, and deployment to ensure social good while still...

Last update from database: 27/10/2025, 18:15 (UTC)