Auditing and Mitigating Cultural Bias in LLMs
Article Status
Published
Authors/contributors
- Tao, Yan (Author)
- Viberg, Olga (Author)
- Baker, Ryan S. (Author)
- Kizilcec, René F. (Author)
Title
Auditing and Mitigating Cultural Bias in LLMs
Abstract
Culture fundamentally shapes people's reasoning, behavior, and communication. Generative artificial intelligence (AI) technologies may cause a shift towards a dominant culture. As people increasingly use AI to expedite and even automate professional and personal tasks, cultural values embedded in AI models may bias people's authentic expression. We audit large language models for cultural bias by comparing their responses to nationally representative survey data, and we evaluate country-specific prompting as a mitigation strategy. We find that GPT-4, GPT-3.5, and GPT-3 exhibit cultural values resembling those of English-speaking and Protestant European countries. Our mitigation strategy reduces cultural bias in recent models, but not for all countries and territories. To avoid cultural bias in generative AI, especially in high-stakes contexts, we suggest culture matching and ongoing cultural audits.
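The country-specific prompting strategy evaluated in the abstract can be illustrated with a minimal sketch. The persona wording, model name, and survey item below are assumptions for illustration, not the authors' exact materials; the idea is to instruct the model to answer as an average person from a given country before scoring its answers against nationally representative survey data such as the World Values Survey.

```python
# Minimal sketch of country-specific ("cultural") prompting.
# Assumptions: prompt wording, model name, and the example survey item
# are hypothetical; OPENAI_API_KEY must be set in the environment.
from openai import OpenAI

client = OpenAI()

def culturally_prompted_response(question: str, country: str) -> str:
    """Ask the model to answer a survey question as an average person
    from `country`, rather than from its default cultural stance."""
    persona = (
        f"You are an average human being born in {country} and "
        f"currently living in {country}, responding to the following "
        f"survey question."
    )
    completion = client.chat.completions.create(
        model="gpt-4",  # hypothetical choice; any chat model works here
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": question},
        ],
        temperature=0,  # near-deterministic answers ease comparison to survey data
    )
    return completion.choices[0].message.content

# Example: a World-Values-Survey-style item (illustrative wording).
print(culturally_prompted_response(
    "How important is family in your life? Answer on a scale from 1 "
    "(very important) to 4 (not at all important).",
    "Japan",
))
```

In an audit, the same question would be asked with and without the persona instruction, and both sets of responses compared against survey responses from the target country to quantify how much bias the cultural prompt removes.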
Date
2023
Accessed
27/11/2023, 16:11
Library Catalogue
DOI.org (Datacite)
Rights
Creative Commons Attribution 4.0 International
Extra
Publisher: arXiv
Version Number: 1
Citation Key: tao2023
Citation
Tao, Y., Viberg, O., Baker, R. S., & Kizilcec, R. F. (2023). Auditing and Mitigating Cultural Bias in LLMs. arXiv. https://doi.org/10.48550/arXiv.2311.14096