CLAP4CLIP: Continual Learning with Probabilistic Finetuning for Vision-Language Models

Date

2024

Authors

Jha, S.
Gong, D.
Yao, L.

Type

Conference paper

Citation

Proceedings of the 38th Conference on Neural Information Processing Systems (NeurIPS 2024), published in Advances in Neural Information Processing Systems, 2024, vol. 37, pp. 129146–129186

Statement of Responsibility

Saurav Jha, Dong Gong, Lina Yao

Conference Name

38th Conference on Neural Information Processing Systems (NeurIPS), 10–15 Dec 2024, Vancouver, Canada

Abstract

Continual learning (CL) aims to help deep neural networks learn new knowledge while retaining what has already been learned. Owing to their strong generalization ability, pre-trained vision-language models such as Contrastive Language-Image Pre-training (CLIP) have lately gained traction as practical CL candidates. However, the domain mismatch between pre-training and downstream CL tasks calls for finetuning CLIP on the latter. The deterministic nature of existing finetuning methods makes them overlook the many possible interactions across the modalities and renders them unsafe for high-risk tasks requiring reliable uncertainty estimation. To address these issues, our work proposes Continual LeArning with Probabilistic finetuning (CLAP) - a probabilistic modeling framework over visual-guided text features per task, thus providing more calibrated CL finetuning. Unlike recent data-hungry anti-forgetting CL techniques, CLAP alleviates forgetting by exploiting the rich pre-trained knowledge of CLIP for weight initialization and distribution regularization of task-specific parameters. Cooperating with a diverse range of existing prompting methods, CLAP can surpass the predominant deterministic finetuning approaches for CL with CLIP. We conclude with out-of-the-box applications of CLAP's superior uncertainty estimation abilities, including novel data detection and exemplar selection within existing CL setups. Our code is available at https://github.com/srvCodes/clap4clip.
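The abstract's central idea, modeling task-specific text features probabilistically and regularizing the learned distribution toward the pre-trained knowledge, can be illustrated with a minimal NumPy sketch. This is a hypothetical simplification, not the paper's actual implementation: the function names (`gaussian_head`, `kl_to_standard_normal`), the identity weight initialization, and the standard-normal prior are all assumptions made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_head(text_feat, W_mu, W_logvar):
    """Map a text feature to the mean and log-variance of a Gaussian."""
    mu = text_feat @ W_mu
    logvar = text_feat @ W_logvar
    return mu, logvar

def sample(mu, logvar, n=10):
    """Draw reparameterized samples: mu + sigma * eps."""
    eps = rng.standard_normal((n, mu.shape[0]))
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ) - a distribution regularizer."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

d = 8
text_feat = rng.standard_normal(d)       # stand-in for a CLIP text feature
W_mu = np.eye(d)                         # identity init: start at the pre-trained feature
W_logvar = np.zeros((d, d))              # unit variance at init
mu, logvar = gaussian_head(text_feat, W_mu, W_logvar)
samples = sample(mu, logvar, n=100)      # Monte Carlo samples for prediction
kl = kl_to_standard_normal(mu, logvar)   # penalty keeping the posterior near the prior
```

The spread of the Monte Carlo samples gives a per-class uncertainty signal, which is the kind of quantity the abstract's novel-data-detection and exemplar-selection applications would rely on.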

Rights

© The author(s). Authors do not transfer the copyright of their papers to NeurIPS. Instead, they grant NeurIPS a non-exclusive, perpetual, royalty-free, fully-paid, fully-assignable license to copy, distribute and publicly display all or part of the paper.
