Progressive Learning Strategy for Few-Shot Class-Incremental Learning

IEEE Transactions on Cybernetics, 2025 Jan 22, early access. DOI: 10.1109/TCYB.2025.3525724. PMID: 40031166

Abstract

The goal of few-shot class-incremental learning (FSCIL) is to learn new concepts from a limited number of novel samples while preserving knowledge of previously learned classes. The mainstream FSCIL framework first trains on the base session and then freezes the feature extractor to accommodate novel classes. We observe that traditional base-session training often overfits challenging samples, which weakens the robustness of the decision boundaries and exacerbates forgetting once incremental data are introduced. To address this issue, we propose the progressive learning strategy (PGLS). First, inspired by curriculum learning, we develop a covariance noise perturbation approach, based on the statistical information of the features, as a difficulty measure for assessing sample robustness. We then reweight samples according to their robustness, initially prioritizing robust samples to enhance model stability and subsequently leveraging weakly robust samples to improve generalization. Second, we predefine forward compatibility for the model through virtual-class augmentation: during base-class training, a curriculum learning schedule progressively introduces virtual classes, from few to many, to mitigate their adverse effect on model performance. This strategy enhances the adaptability of base classes to novel ones and alleviates forgetting. Finally, extensive experiments on the CUB200, CIFAR100, and miniImageNet datasets demonstrate the significant advantages of the proposed method over state-of-the-art models.
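The abstract gives no implementation details, but its two curriculum components lend themselves to a compact illustration. The PyTorch sketch below is one minimal reading of the idea, not the authors' code: it estimates per-sample robustness by perturbing frozen-backbone features with class-conditional covariance noise and checking prediction stability, then schedules sample weights from robust-first to weakly-robust-later and ramps the number of virtual classes from few to many. All names and hyperparameters here (robustness_scores, curriculum_weights, num_virtual_classes, head, scale, num_trials) are illustrative assumptions.

```python
import torch

@torch.no_grad()
def robustness_scores(features, labels, head, num_trials=10, scale=1.0):
    """Difficulty measure via covariance noise perturbation (sketch).

    features: (N, D) frozen-backbone features of base-session samples.
    labels:   (N,)   ground-truth class indices.
    head:     module mapping (N, D) features to class logits.
    Returns an (N,) robustness score in [0, 1]: the fraction of noisy
    trials in which the perturbed feature keeps its correct label.
    """
    N, D = features.shape
    scores = torch.zeros(N)
    for c in labels.unique():
        idx = (labels == c).nonzero(as_tuple=True)[0]
        fc = features[idx]
        # Class-conditional covariance defines the noise statistics;
        # a small ridge keeps the matrix positive definite. (Assumes
        # several samples per class, as in a base session.)
        cov = torch.cov(fc.T) + 1e-4 * torch.eye(D)
        noise = torch.distributions.MultivariateNormal(
            torch.zeros(D), covariance_matrix=scale * cov)
        hits = torch.zeros(len(idx))
        for _ in range(num_trials):
            pred = head(fc + noise.sample((len(idx),))).argmax(dim=1)
            hits += (pred == c).float()
        scores[idx] = hits / num_trials
    return scores

def curriculum_weights(scores, epoch, total_epochs):
    """Robust-first reweighting: early epochs emphasize robust samples
    (high score); later epochs shift weight to weakly robust ones."""
    t = epoch / max(total_epochs - 1, 1)        # progress in [0, 1]
    w = (1.0 - t) * scores + t * (1.0 - scores)
    return w * len(w) / w.sum().clamp_min(1e-8)  # normalize to mean 1

def num_virtual_classes(epoch, total_epochs, max_virtual):
    """Few-to-many schedule for virtual-class augmentation."""
    t = epoch / max(total_epochs - 1, 1)
    return max(1, round(t * max_virtual))
```

In a full training loop, one would plausibly multiply the per-sample loss by curriculum_weights(...) each epoch and synthesize num_virtual_classes(...) extra classes for the forward-compatibility objective; both schedules are simple linear ramps chosen purely for illustration.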
Keywords: Progressive Learning Strategy