Outlier Dimensions that Disrupt Transformers are Driven by Frequency

Giovanni Puccetti, Anna Rogers, Aleksandr Drozd, Felice Dell'Orletta

Publication: Conference contribution · Paper · Research · peer review

17 Citations (Scopus)

Abstract

While Transformer-based language models are generally very robust to pruning, there is the recently discovered outlier phenomenon: disabling only 48 out of 110M parameters in BERT-base drops its performance by nearly 30% on MNLI. We replicate the original evidence for the outlier phenomenon and we link it to the geometry of the embedding space. We find that in both BERT and RoBERTa the magnitude of hidden state coefficients corresponding to outlier dimensions correlates with the frequency of encoded tokens in pre-training data, and it also contributes to the “vertical” self-attention pattern enabling the model to focus on the special tokens. This explains the drop in performance from disabling the outliers, and it suggests that to decrease anisotropicity in future models we need pre-training schemas that would better take into account the skewed token distributions.
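As a rough illustration of the intervention described in the abstract, the sketch below (not the authors' code) zeroes a few candidate outlier dimensions in BERT-base by disabling the corresponding LayerNorm scaling and bias entries, then inspects per-dimension hidden-state magnitudes. The dimension indices and the exact layers touched are illustrative assumptions; the paper identifies the actual outlier dimensions empirically.

```python
# Minimal sketch, assuming Hugging Face Transformers and PyTorch.
# The indices in `outlier_dims` are placeholders, not the paper's values.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

outlier_dims = [308, 381]  # illustrative candidate outlier dimensions

with torch.no_grad():
    # Disable the chosen dimensions in every encoder layer's LayerNorms.
    for layer in model.encoder.layer:
        for ln in (layer.attention.output.LayerNorm, layer.output.LayerNorm):
            ln.weight[outlier_dims] = 0.0  # zero the scaling for these dims
            ln.bias[outlier_dims] = 0.0    # zero the bias for these dims

# Inspect average per-dimension magnitude of the last hidden state.
inputs = tokenizer("The quick brown fox jumps over the lazy dog.",
                   return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # shape: (1, seq_len, 768)
per_dim_magnitude = hidden.abs().mean(dim=(0, 1))
print(per_dim_magnitude.topk(5))  # dimensions with the largest activations
```

Comparing downstream accuracy (e.g., on MNLI) before and after such an ablation is one way to reproduce the reported performance drop, though the evaluation harness is not shown here.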

Original language: English
Publication date: 2022
Number of pages: 19
Status: Published - 2022
Event: 2022 Findings of the Association for Computational Linguistics: EMNLP 2022 - Abu Dhabi, United Arab Emirates
Duration: 7 Dec 2022 - 11 Dec 2022

Conference

Conference: 2022 Findings of the Association for Computational Linguistics: EMNLP 2022
Country/Territory: United Arab Emirates
City: Abu Dhabi
Period: 07/12/2022 - 11/12/2022

Bibliographical note

Funding Information:
We would like to thank Olga Kovaleva, Anna Rumshisky, and the anonymous reviewers for their insightful comments. This work is partially supported by JST KAKENHI grant JP22H03600 and JST CREST grant JPMJCR19F5. This work used computational resources of the supercomputer Fugaku provided by RIKEN through the HPCI Fugaku General Access (Small-Scale) Project (Project ID: hp210265).

Publisher Copyright:
© 2022 Association for Computational Linguistics.
