Improve interpretability of Information Bottlenecks for Attribution with Layer-wise Relevance Propagation

Date

2023

Authors

Chen, X.
Li, J.
Liu, J.
Peters, S.
Liu, L.
Le, T.D.
Walsh, A.

Editors

He, J.

Type

Conference paper

Citation

Proceedings of the 2023 IEEE International Conference on Big Data (BigData 2023), 2023 / He, J. (ed.), pp. 1064-1069

Conference Name

2023 IEEE International Conference on Big Data (BigData) (15 Dec 2023 - 18 Dec 2023 : Sorrento, Italy)

Abstract

Researchers have developed various visualization techniques, such as attribution maps, to understand which parts of an input contribute most to a model’s decision. However, existing methods often produce disparate results and may lack human-perceptual interpretability. In this work, we propose Relevance-IBA, a novel approach that combines the strengths of Information Bottleneck Attribution (IBA) and Layer-wise Relevance Propagation (LRP) to estimate more accurate and human-perceptually interpretable attribution maps. Our method accentuates the contours and subtle details of the identified object, making the model’s decisions more intuitively understandable. Additionally, we introduce a segmentation-oriented evaluation technique, which measures how well an interpretability method concentrates its most important pixels within an object’s boundaries. We benchmark Relevance-IBA against established attribution methods, including DeepLIFT, Integrated Gradients, Guided-BP, Guided-GradCAM, IBA, and InputIBA. Our results indicate that Relevance-IBA not only boosts attribution accuracy but also prioritizes human-perceptual clarity, making it a valuable tool for interpreting complex model behaviors.
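The segmentation-oriented evaluation described above can be sketched as a simple localization score: take an attribution map, select its top-k pixels, and measure what fraction of them fall inside the object's ground-truth segmentation mask. This is a minimal illustration under our own assumptions (the function name `inside_mask_ratio`, the toy arrays, and the choice of k are ours), not the paper's exact protocol.

```python
import numpy as np

def inside_mask_ratio(attribution, mask, k=100):
    """Fraction of the top-k attribution pixels that fall inside
    the object's segmentation mask (higher = better localized)."""
    flat = attribution.ravel()
    top_idx = np.argsort(flat)[-k:]          # indices of the k strongest pixels
    return float(mask.ravel()[top_idx].mean())

# Toy example: attribution concentrated in the top-left quadrant,
# ground-truth mask covering the same region.
attr = np.zeros((8, 8))
attr[:4, :4] = np.random.rand(4, 4) + 1.0    # strong signal inside the object
attr[4:, 4:] = np.random.rand(4, 4) * 0.1    # weak noise outside the object
mask = np.zeros((8, 8), dtype=bool)
mask[:4, :4] = True

print(inside_mask_ratio(attr, mask, k=16))   # → 1.0: all 16 strongest pixels lie in the mask
```

A score of 1.0 means the method's strongest evidence sits entirely within the object's boundaries; a diffuse or misplaced attribution map would score lower at the same k.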

Rights

Copyright 2023 IEEE
