TY - JOUR
T1 - Efficient Structured Prediction with Transformer Encoders
AU - Basirat, Ali
PY - 2024/3/14
Y1 - 2024/3/14
AB - Finetuning is a useful method for adapting Transformer-based text encoders to new tasks but can be computationally expensive for structured prediction tasks that require tuning at the token level. Furthermore, finetuning is inherently inefficient in updating all base model parameters, which prevents parameter sharing across tasks. To address these issues, we propose a method for efficient task adaptation of frozen Transformer encoders based on the local contribution of their intermediate layers to token representations. Our adapter uses a novel attention mechanism to aggregate intermediate layers and tailor the resulting representations to a target task. Experiments on several structured prediction tasks demonstrate that our method outperforms previous approaches, retaining over 99% of the finetuning performance at a fraction of the training cost. Our proposed method offers an efficient solution for adapting frozen Transformer encoders to new tasks, improving performance and enabling parameter sharing across different tasks.
KW - large language models
KW - structured prediction
KW - relation extraction
KW - deep learning
KW - finetuning
DO - 10.3384/nejlt.2000-1533.2024.4932
M3 - Journal article
VL - 10
JO - The Northern European Journal of Language Technology (NEJLT)
JF - The Northern European Journal of Language Technology (NEJLT)
SN - 2000-1533
IS - 1
ER -