Federated few-shot class-incremental learning

Date

2025

Authors

Ma'sum, M.A.
Pratama, M.
Liu, L.
Habibullah, H.
Kowalczyk, R.

Type

Conference paper

Citation

The Thirteenth International Conference on Learning Representations (ICLR 2025), 2025, pp.1-30

Conference Name

The Thirteenth International Conference on Learning Representations (ICLR 2025) (24 Apr 2025 : Singapore)

Abstract

This study proposes a challenging yet practical Federated Few-Shot Class-Incremental Learning (FFSCIL) problem, where clients hold only very few samples of new classes. We develop a novel Unified Optimized Prototype Prompt (UOPP) model that simultaneously handles catastrophic forgetting, over-fitting, and prototype bias in FFSCIL. UOPP utilizes task-wise prompt learning to mitigate task interference and over-fitting, unified static-dynamic prototypes to achieve a stability-plasticity balance, and adaptive dual heads for enhanced inference. Dynamic prototypes represent the new classes of the current few-shot task and are rectified to address prototype bias. Comprehensive experiments show that UOPP significantly outperforms state-of-the-art (SOTA) methods on three datasets, with improvements of up to 76% in average accuracy and 90% in harmonic mean accuracy. Extensive analysis further demonstrates UOPP's robustness across varying numbers of local clients and global rounds, its low communication cost, and its moderate running time. The source code of UOPP is publicly available at https://github.com/anwarmaxsum/FFSCIL.
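As a minimal illustrative sketch only (not the authors' UOPP implementation, whose prompt learning, prototype rectification, and dual heads are described in the paper and repository), prototype-based few-shot classification of the kind referenced in the abstract typically computes one prototype per class as the mean embedding of its few available samples, then classifies queries by nearest prototype. The function names `dynamic_prototypes` and `nearest_prototype` are hypothetical:

```python
import numpy as np

def dynamic_prototypes(features, labels):
    """Compute one prototype (mean embedding) per class from few-shot samples.

    features: (n_samples, dim) array of embeddings
    labels:   (n_samples,) array of integer class ids
    Returns a dict mapping class id -> prototype vector.
    """
    return {int(c): features[labels == c].mean(axis=0)
            for c in np.unique(labels)}

def nearest_prototype(query, protos):
    """Classify a query embedding by nearest prototype (Euclidean distance)."""
    return min(protos, key=lambda c: np.linalg.norm(query - protos[c]))

# Toy example: two classes, two shots each
feats = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
labs = np.array([0, 0, 1, 1])
protos = dynamic_prototypes(feats, labs)
pred = nearest_prototype(np.array([0.1, 0.2]), protos)  # → 0
```

In UOPP these prototypes are further rectified to counter the bias that arises from estimating class means with very few samples, a step this sketch omits.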

Rights

Copyright 2025 ICLR (https://creativecommons.org/licenses/by/4.0/)
