Improve interpretability of Information Bottlenecks for Attribution with Layer-wise Relevance Propagation

dc.contributor.authorChen, X.
dc.contributor.authorLi, J.
dc.contributor.authorLiu, J.
dc.contributor.authorPeters, S.
dc.contributor.authorLiu, L.
dc.contributor.authorLe, T.D.
dc.contributor.authorWalsh, A.
dc.contributor.conference2023 IEEE International Conference on Big Data (BigData) (15 Dec 2023 - 18 Dec 2023 : Sorrento, Italy)
dc.contributor.editorHe, J.
dc.date.issued2023
dc.description.abstractResearchers have developed various visualization techniques, such as attribution maps, to understand which parts of an input contribute most to a model’s decision. However, existing methods often produce disparate results and may lack human-perceptual interpretability. In this work, we propose Relevance-IBA, a novel approach that combines the strengths of Information Bottleneck Attribution (IBA) and Layer-wise Relevance Propagation (LRP) to estimate more accurate and human-perceptually interpretable attribution maps. Our method accentuates the contours and subtle details of the identified object, making the model’s decisions more intuitively understandable. Additionally, we introduce a segmentation-oriented evaluation technique, which assesses interpretability methods by emphasizing the most important pixels within an object’s boundaries. We benchmark Relevance-IBA against various methodologies, including DeepLIFT, Integrated Gradients, Guided-BP, Guided-GradCAM, IBA, and InputIBA. Our results indicate that Relevance-IBA not only boosts attribution accuracy but also prioritizes human-perceptual clarity, making it a valuable tool for interpreting complex model behaviors.
dc.identifier.citationProceedings 2023 IEEE International Conference on Big Data (BigData 2023), 2023 / He, J. (ed./s), pp.1064-1069
dc.identifier.doi10.1109/BigData59044.2023.10386271
dc.identifier.isbn9798350324457
dc.identifier.orcidLiu, J. [0000-0002-0794-0404]
dc.identifier.orcidLe, T.D. [0000-0002-9732-4313]
dc.identifier.urihttps://hdl.handle.net/11541.2/37896
dc.language.isoen
dc.publisherIEEE
dc.publisher.placeUS
dc.rightsCopyright 2023 IEEE
dc.source.urihttps://doi.org/10.1109/bigdata59044.2023.10386271
dc.subjectIBA
dc.subjectattribution maps
dc.subjectinterpretability
dc.subjectLRP
dc.subjecthuman-perceptual
dc.titleImprove interpretability of Information Bottlenecks for Attribution with Layer-wise Relevance Propagation
dc.typeConference paper
pubs.publication-statusPublished
ror.mmsid9916831827901831
