Finding NEM-U: Explaining unsupervised representation learning through neural network generated explanation masks

Bjørn Leth Møller*, Christian Igel, Kristoffer Knutsen Wickstrøm, Jon Sporring, Robert Jenssen, Bulat Ibragimov

*Corresponding author for this work

Research output: Contribution to journal › Conference article › Research › peer-review


Abstract

Unsupervised representation learning has become an important ingredient of today's deep learning systems. However, only a few methods exist that explain a learned vector embedding in the sense of providing information about which parts of an input are the most important for its representation. These methods generate the explanation for a given input after the model has been evaluated and tend either to produce inaccurate explanations or to be slow, which limits their practical use. To address these limitations, we introduce the Neural Explanation Masks (NEM) framework, which turns a fixed representation model into a self-explaining system by augmenting it with a masking neural network. This network provides occlusion-based explanations in parallel with computing the representations during inference. We present an instance of this framework, the NEM-U (NEM using U-net structure) architecture, which leverages similarities between segmentation and occlusion-based explanation masks. Our experiments show that NEM-U generates explanations faster and with lower complexity than the current state of the art while maintaining high accuracy as measured by locality.
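To illustrate the idea described in the abstract, below is a minimal sketch of augmenting a fixed representation model with a trainable masking network. All names here (`TinyEncoder`, `MaskNet`) and the training objective are hypothetical stand-ins for illustration only: the paper's NEM-U uses a U-net mask generator and its own loss, neither of which is reproduced here.

```python
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Stand-in for the fixed, pretrained representation model."""
    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, dim),
        )
    def forward(self, x):
        return self.net(x)

class MaskNet(nn.Module):
    """Illustrative mask head; the paper uses a U-net, this is a plain conv net."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),  # mask values in [0, 1]
        )
    def forward(self, x):
        return self.net(x)

encoder = TinyEncoder().eval()
for p in encoder.parameters():
    p.requires_grad_(False)  # the representation model stays fixed

masker = MaskNet()
x = torch.rand(4, 3, 64, 64)       # dummy batch of images

z = encoder(x)                     # representation, computed as usual
m = masker(x)                      # explanation mask, same spatial size as x
z_masked = encoder(x * m)          # embedding of the occluded input

# One plausible occlusion-style objective (an assumption, not the paper's loss):
# keep the embedding of the masked input close to the original embedding
# while encouraging the mask to be sparse.
loss = (z - z_masked).pow(2).mean() + 0.1 * m.abs().mean()
loss.backward()                    # gradients flow only into the masker
print(z.shape, m.shape, loss.item())
```

At inference time, one forward pass through the masker yields the explanation alongside the representation, which is what makes this setup faster than post-hoc methods that optimize a mask per input.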

Original language: English
Journal: Proceedings of Machine Learning Research
Volume: 235
Pages (from-to): 36048-36071
Number of pages: 24
ISSN: 2640-3498
Publication status: Published - 2024
Event: 41st International Conference on Machine Learning, ICML 2024 - Vienna, Austria
Duration: 21 Jul 2024 - 27 Jul 2024

Conference

Conference: 41st International Conference on Machine Learning, ICML 2024
Country/Territory: Austria
City: Vienna
Period: 21/07/2024 - 27/07/2024

Bibliographical note

Publisher Copyright:
Copyright 2024 by the author(s)
