Unraveling Instance Associations: A Closer Look for Audio-Visual Segmentation

dc.contributor.author: Chen, Y.
dc.contributor.author: Liu, Y.
dc.contributor.author: Wang, H.
dc.contributor.author: Liu, F.
dc.contributor.author: Wang, C.
dc.contributor.author: Frazer, H.
dc.contributor.author: Carneiro, G.
dc.contributor.conference: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (16 Jun 2024 - 22 Jun 2024 : Seattle, WA, USA)
dc.date.issued: 2024
dc.description.abstract: Audio-visual segmentation (AVS) is a challenging task that involves accurately segmenting sounding objects based on audio-visual cues. The effectiveness of audio-visual learning critically depends on achieving accurate cross-modal alignment between sound and visual objects. Successful audio-visual learning requires two essential components: 1) a challenging dataset with high-quality pixel-level multi-class annotated images associated with audio files, and 2) a model that can establish strong links between audio information and its corresponding visual object. However, these requirements are only partially addressed by current methods, with training sets containing biased audio-visual data and models that generalise poorly beyond this biased training set. In this work, we propose a new cost-effective strategy to build challenging and relatively unbiased high-quality audio-visual segmentation benchmarks. We also propose a new informative sample mining method for audio-visual supervised contrastive learning that leverages discriminative contrastive samples to enforce cross-modal understanding. We show empirical results that demonstrate the effectiveness of our benchmark. Furthermore, experiments conducted on existing AVS datasets and on our new benchmark show that our method achieves state-of-the-art (SOTA) segmentation accuracy.
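The abstract refers to audio-visual supervised contrastive learning for cross-modal alignment. Below is a minimal, hedged sketch (not the authors' implementation) of the general idea: audio and visual embeddings that share a sounding-object class label are pulled together, while embeddings of different classes are pushed apart. The tensor names, shapes, temperature, and the assumption of class labels are all illustrative choices, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def cross_modal_sup_con_loss(audio_emb, visual_emb, labels, temperature=0.07):
    """Sketch of a cross-modal supervised contrastive loss.

    audio_emb, visual_emb: (N, D) embeddings from the two modalities.
    labels: (N,) class ids of the sounding objects (assumed annotation).
    """
    a = F.normalize(audio_emb, dim=1)
    v = F.normalize(visual_emb, dim=1)
    # Audio-to-visual similarity matrix, scaled by temperature.
    logits = a @ v.t() / temperature
    # Positives: visual samples sharing the audio anchor's class label.
    pos_mask = labels.unsqueeze(1).eq(labels.unsqueeze(0)).float()
    # Log-softmax over all visual samples for each audio anchor.
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # Average log-likelihood over positives per anchor, then negate.
    loss = -(pos_mask * log_prob).sum(1) / pos_mask.sum(1).clamp(min=1)
    return loss.mean()

# Usage sketch with random data.
audio = torch.randn(8, 128)
visual = torch.randn(8, 128)
labels = torch.randint(0, 3, (8,))
print(cross_modal_sup_con_loss(audio, visual, labels))
```

The paper's "informative sample mining" would, per the abstract, further select discriminative contrastive samples rather than using every pair as done in this plain version.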
dc.description.statementofresponsibility: Yuanhong Chen, Yuyuan Liu, Hu Wang, Fengbei Liu, Chong Wang, Helen Frazer, Gustavo Carneiro
dc.identifier.citation: Proceedings / CVPR, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2024, pp. 26487-26497
dc.identifier.doi: 10.1109/CVPR52733.2024.02502
dc.identifier.isbn: 979-8-3503-5301-3
dc.identifier.issn: 1063-6919
dc.identifier.issn: 2575-7075
dc.identifier.orcid: Chen, Y. [0000-0002-8983-2895]
dc.identifier.orcid: Liu, F. [0000-0003-0355-2006]
dc.identifier.orcid: Wang, C. [0000-0003-0022-0217]
dc.identifier.orcid: Carneiro, G. [0000-0002-5571-6220]
dc.identifier.uri: https://hdl.handle.net/2440/145992
dc.language.iso: en
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.relation.grant: http://purl.org/au-research/grants/arc/FT190100525
dc.relation.ispartofseries: IEEE Conference on Computer Vision and Pattern Recognition
dc.rights: ©2024 IEEE
dc.source.uri: https://ieeexplore.ieee.org/xpl/conhome/10654794/proceeding
dc.title: Unraveling Instance Associations: A Closer Look for Audio-Visual Segmentation
dc.type: Conference paper
pubs.publication-status: Published
