Improve interpretability of Information Bottlenecks for Attribution with Layer-wise Relevance Propagation
| dc.contributor.author | Chen, X. | |
| dc.contributor.author | Li, J. | |
| dc.contributor.author | Liu, J. | |
| dc.contributor.author | Peters, S. | |
| dc.contributor.author | Liu, L. | |
| dc.contributor.author | Le, T.D. | |
| dc.contributor.author | Walsh, A. | |
| dc.contributor.conference | 2023 IEEE International Conference on Big Data (BigData) (15 Dec 2023 - 18 Dec 2023 : Sorrento, Italy) | |
| dc.contributor.editor | He, J. | |
| dc.date.issued | 2023 | |
| dc.description.abstract | Researchers have developed various visualization techniques, such as attribution maps, to understand which parts of an input contribute most to a model’s decision. However, existing methods often produce disparate results and may lack human-perceptual interpretability. In this work, we propose Relevance-IBA, a novel approach that combines the strengths of Information Bottleneck Attribution (IBA) and Layer-wise Relevance Propagation (LRP) to estimate more accurate and human-perceptually interpretable attribution maps. Our method accentuates the contours and subtle details of the identified object, making the model’s decisions more intuitively understandable. Additionally, we introduce a segmentation-oriented evaluation technique, which assesses the capacity of interpretability methods by emphasizing the most important pixels within an object’s boundaries. We benchmark Relevance-IBA against various methodologies, including DeepLIFT, Integrated Gradients, Guided-BP, Guided-GradCAM, IBA, and InputIBA. Our results indicate that Relevance-IBA not only boosts attribution accuracy but also prioritizes human-perceptual clarity, making it a valuable tool for interpreting complex model behaviors. | |
| dc.identifier.citation | Proceedings 2023 IEEE International Conference on Big Data (BigData 2023), 2023 / He, J. (ed./s), pp.1064-1069 | |
| dc.identifier.doi | 10.1109/BigData59044.2023.10386271 | |
| dc.identifier.isbn | 9798350324457 | |
| dc.identifier.orcid | Liu, J. [0000-0002-0794-0404] | |
| dc.identifier.orcid | Le, T.D. [0000-0002-9732-4313] | |
| dc.identifier.uri | https://hdl.handle.net/11541.2/37896 | |
| dc.language.iso | en | |
| dc.publisher | IEEE | |
| dc.publisher.place | US | |
| dc.rights | Copyright 2023 IEEE | |
| dc.source.uri | https://doi.org/10.1109/bigdata59044.2023.10386271 | |
| dc.subject | IBA | |
| dc.subject | attribution maps | |
| dc.subject | interpretability | |
| dc.subject | LRP | |
| dc.subject | human-perceptual | |
| dc.title | Improve interpretability of Information Bottlenecks for Attribution with Layer-wise Relevance Propagation | |
| dc.type | Conference paper | |
| pubs.publication-status | Published | |
| ror.mmsid | 9916831827901831 |