ChatGPT is bullshit
Article Status
Published
Authors/contributors
- Hicks, Michael Townsen (Author)
- Humphries, James (Author)
- Slater, Joe (Author)
Title
ChatGPT is bullshit
Abstract
Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, are better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.
Publication
Ethics and Information Technology
Volume
26
Issue
2
Pages
38
Date
2024-06-08
Journal Abbr
Ethics Inf. Technol.
Language
en
ISSN
1572-8439
Accessed
19/06/2024, 16:58
Library Catalogue
Springer Link
Citation
Hicks, M. T., Humphries, J., & Slater, J. (2024). ChatGPT is bullshit. Ethics and Information Technology, 26(2), 38. https://doi.org/10.1007/s10676-024-09775-5