A Primer in BERTology: What We Know About How BERT Works

Article Status
Published
Authors/contributors
Rogers, Anna; Kovaleva, Olga; Rumshisky, Anna
Title
A Primer in BERTology: What We Know About How BERT Works
Abstract
Transformer-based models have pushed state of the art in many areas of NLP, but our understanding of what is behind their success is still limited. This paper is the first survey of over 150 studies of the popular BERT model. We review the current state of knowledge about how BERT works, what kind of information it learns and how it is represented, common modifications to its training objectives and architecture, the overparameterization issue, and approaches to compression. We then outline directions for future research.
Publication
Transactions of the Association for Computational Linguistics
Volume
8
Pages
842-866
Date
2020-12
Journal Abbr
Transactions of the Association for Computational Linguistics
Language
en
ISSN
2307-387X
Short Title
A Primer in BERTology
Accessed
16/11/2023, 16:22
Library Catalogue
DOI.org (Crossref)
Extra
Citation Key: rogers2020
Citation
Rogers, A., Kovaleva, O., & Rumshisky, A. (2020). A Primer in BERTology: What We Know About How BERT Works. Transactions of the Association for Computational Linguistics, 8, 842–866. https://doi.org/10.1162/tacl_a_00349