Faithfulness Tests for Natural Language Explanations

Pepa Atanasova, Oana Maria Camburu, Christina Lioma, Thomas Lukasiewicz, Jakob Grue Simonsen, Isabelle Augenstein

Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › peer-reviewed

19 Citations (Scopus)
22 Downloads (Pure)

Abstract

Explanations of neural models aim to reveal a model's decision-making process for its predictions. However, recent work shows that current explanation methods, such as saliency maps or counterfactuals, can be misleading, as they are prone to present reasons that are unfaithful to the model's inner workings. This work explores the challenging question of evaluating the faithfulness of natural language explanations (NLEs). To this end, we present two tests. First, we propose a counterfactual input editor for inserting reasons that lead to counterfactual predictions but are not reflected by the NLEs. Second, we reconstruct inputs from the reasons stated in the generated NLEs and check how often they lead to the same predictions. Our tests can evaluate emerging NLE models, providing a fundamental tool in the development of faithful NLEs.
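The logic of the two tests lends itself to a compact sketch. Below is a minimal, hypothetical Python illustration, assuming toy predict, generate_nle, and rebuild_input interfaces (none of which are from the paper); the paper's actual counterfactual editor is a learned model, not the brute-force word insertion used here.

```python
# Hedged sketch of the two faithfulness tests described in the abstract.
# All interfaces here are hypothetical stand-ins, not the authors' code.

from typing import Callable, List, Tuple

Predict = Callable[[str], str]       # input text -> predicted label
GenerateNLE = Callable[[str], str]   # input text -> natural language explanation


def counterfactual_test(predict: Predict, generate_nle: GenerateNLE,
                        text: str, candidates: List[str]) -> List[Tuple[str, str]]:
    """Test 1: insert candidate words into the input and flag edits that flip
    the model's prediction but are not mentioned in the resulting NLE."""
    original_label = predict(text)
    tokens = text.split()
    flagged = []
    for word in candidates:
        for pos in range(len(tokens) + 1):
            edited = " ".join(tokens[:pos] + [word] + tokens[pos:])
            # The edit changed the prediction, so a faithful NLE should reflect it.
            if (predict(edited) != original_label
                    and word.lower() not in generate_nle(edited).lower()):
                flagged.append((word, edited))
    return flagged


def reconstruction_test(predict: Predict, generate_nle: GenerateNLE,
                        text: str, rebuild_input: Callable[[str], str]) -> bool:
    """Test 2: rebuild an input from the reasons stated in the NLE and check
    whether it receives the same prediction as the original input."""
    rebuilt = rebuild_input(generate_nle(text))
    return predict(rebuilt) == predict(text)


if __name__ == "__main__":
    # Toy stand-ins: a keyword classifier and a template NLE generator.
    def predict(t: str) -> str:
        if "boring" in t:
            return "negative"
        return "positive" if "good" in t else "negative"

    def generate_nle(t: str) -> str:
        if predict(t) == "positive":
            return "The review is positive because it calls the film good."
        return "The review is negative because the film is described as dull."

    def rebuild_input(nle: str) -> str:
        # Naive reason extraction: keep the clause after "because".
        return nle.split("because", 1)[-1].strip(" .")

    # Inserting "boring" flips the label, but the NLE never mentions it.
    print(counterfactual_test(predict, generate_nle, "the film is good", ["boring"]))
    print(reconstruction_test(predict, generate_nle, "the film is good", rebuild_input))
```

With these toy components, the first call flags every insertion of "boring" as unfaithful (the prediction flips, yet the NLE blames dullness), while the second returns True because the reconstructed input preserves the prediction.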

Original language: English
Title: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Number of pages: 12
Publisher: Association for Computational Linguistics (ACL)
Publication date: 2023
Pages: 283-294
ISBN (electronic): 9781959429715
DOI
Status: Published - 2023
Event: 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023 - Toronto, Canada
Duration: 9 Jul 2023 - 14 Jul 2023

Conference

Conference: 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023
Country/Territory: Canada
City: Toronto
Period: 09/07/2023 - 14/07/2023
Sponsors: Bloomberg Engineering, Google Research, Liveperson, Meta, Microsoft, et al.

Bibliographical note

Funding Information:
The research documented in this paper has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 801199. Isabelle Augenstein’s research is further partially funded by a DFF Sapere Aude research leader grant under grant agreement No 0171-00034B, as well as by the Pioneer Centre for AI, DNRF grant number P1. Thomas Lukasiewicz was supported by the Alan Turing Institute under the UK EPSRC grant EP/N510129/1, the AXA Research Fund, and the EU TAILOR grant 952215. Oana-Maria Camburu was supported by a UK Leverhulme Early Career Fellowship. Christina Lioma’s research is partially funded by the Villum and Velux Foundations’ Algorithms, Data and Democracy (ADD) grant.

Publisher Copyright:
© 2023 Association for Computational Linguistics.
