Abstract
The state of the art in learning meaningful semantic representations of words is the Transformer model and its attention mechanisms. Simply put, the attention mechanisms learn to attend to specific parts of the input, dispensing with recurrence and convolutions. While some of the learned attention heads have been found to play linguistically interpretable roles, they can be redundant or prone to errors. We propose a method to guide the attention heads towards roles identified in prior work as important. We do this by defining role-specific masks that constrain the heads to attend to specific parts of the input, so that different heads are designed to play different roles. Experiments on text classification and machine translation using 7 different datasets show that our method outperforms competitive attention-based, CNN, and RNN baselines.
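The following is a minimal sketch of the masking idea described in the abstract, not the authors' implementation. It assumes a PyTorch setting and uses a hypothetical "previous token" role as an example: a role-specific boolean mask is applied inside scaled dot-product attention so that a head can only attend to the positions its role allows. The function names (`masked_attention`, `previous_token_mask`) are illustrative assumptions.

```python
# Sketch only: role-constrained attention via a boolean mask, assuming PyTorch.
import torch


def masked_attention(q, k, v, role_mask):
    """Scaled dot-product attention restricted by a role-specific mask.

    q, k, v:   (batch, seq_len, d_head)
    role_mask: (seq_len, seq_len) bool; True = this position may be attended to
    """
    d_head = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_head ** 0.5        # (batch, seq, seq)
    scores = scores.masked_fill(~role_mask, float("-inf"))  # block disallowed positions
    weights = torch.softmax(scores, dim=-1)
    return weights @ v


def previous_token_mask(seq_len):
    """Example role: attend only to the previous token
    (the first token attends to itself so every row has an allowed position)."""
    mask = torch.zeros(seq_len, seq_len, dtype=torch.bool)
    for i in range(1, seq_len):
        mask[i, i - 1] = True
    mask[0, 0] = True
    return mask


if __name__ == "__main__":
    batch, seq_len, d_head = 2, 5, 8
    q, k, v = (torch.randn(batch, seq_len, d_head) for _ in range(3))
    out = masked_attention(q, k, v, previous_token_mask(seq_len))
    print(out.shape)  # torch.Size([2, 5, 8])
```

In this sketch, different heads would simply receive different role masks; the rest of the Transformer layer is unchanged.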
Original language | English |
---|---|
Title of host publication | Advances in Information Retrieval - 43rd European Conference on IR Research, ECIR 2021, Proceedings, Part II |
Editors | Djoerd Hiemstra, Marie-Francine Moens, Josiane Mothe, Raffaele Perego, Martin Potthast, Fabrizio Sebastiani |
Publisher | Springer |
Publication date | 2021 |
Pages | 432-439 |
ISBN (Print) | 9783030722395 |
DOIs | |
Publication status | Published - 2021 |
Event | 43rd European Conference on Information Retrieval, ECIR 2021 - Virtual, Online. Duration: 28 Mar 2021 → 1 Apr 2021 |
Conference
Conference | 43rd European Conference on Information Retrieval, ECIR 2021 |
---|---|
City | Virtual, Online |
Period | 28/03/2021 → 01/04/2021 |
Series | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
---|---|
Volume | 12657 LNCS |
ISSN | 0302-9743 |
Bibliographical note
Publisher Copyright: © 2021, Springer Nature Switzerland AG.
Keywords
- Self-attention
- Text classification
- Transformer