Probing for Hyperbole in Pre-Trained Language Models

Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › peer review


Abstract

Hyperbole is a common figure of speech that remains under-explored in NLP research. In this study, we conduct edge and minimal description length (MDL) probing experiments on three pre-trained language models (PLMs) to explore the extent to which hyperbolic information is encoded in these models. As a basis for comparison, we use both word-in-context and sentence-level representations as model inputs. We also annotate 63 hyperbolic sentences from the HYPO dataset according to an operational taxonomy and conduct an error analysis to examine how different hyperbole categories are encoded. Our results show that hyperbole is encoded in PLMs only to a limited extent, and mostly in the final layers. They also indicate that hyperbolic information may be better captured by sentence-level representations, which, given the pragmatic nature of hyperbole, may provide a more accurate and informative representation in PLMs. Finally, the inter-annotator agreement for our annotations, a Cohen's Kappa of 0.339, suggests that the taxonomy categories may not be intuitive and need revision or simplification.
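The probing setup described above can be illustrated with a minimal sketch: a lightweight classifier is trained on frozen representations to test whether a property (here, hyperbole) is decodable from them. This is not the authors' code; the embeddings below are random stand-ins for PLM outputs, and all names are hypothetical, chosen only to keep the sketch self-contained and runnable.

```python
# Illustrative probing sketch (assumption: a simple linear probe over
# frozen sentence embeddings; embeddings are random stand-ins, NOT real
# PLM outputs from the paper).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical data: 200 sentence embeddings (dim 768) with binary labels
# (1 = hyperbolic, 0 = literal). In the actual study these would be
# representations extracted from a given PLM layer.
X = rng.normal(size=(200, 768))
y = rng.integers(0, 2, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# The probe itself: a small linear classifier; its test accuracy is read
# as evidence of how linearly decodable the property is from the layer.
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = probe.score(X_te, y_te)
print(f"probe accuracy: {acc:.3f}")
```

With random labels the probe hovers near chance; the paper's experiments compare such scores (and MDL codelengths) across layers and across word-in-context versus sentence-level inputs.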
Original language: English
Title: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)
Number of pages: 12
Publisher: Association for Computational Linguistics
Publication date: 2023
Pages: 200–211
DOI
Status: Published - 2023
Event: 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023 - Toronto, Canada
Duration: 9 Jul 2023 – 14 Jul 2023

Conference

Conference: 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023
Country/Territory: Canada
City: Toronto
Period: 09/07/2023 – 14/07/2023
Sponsors: Bloomberg Engineering, et al., Google Research, Liveperson, Meta, Microsoft
