Abstract:
Class-incremental learning (CIL) has been widely applied in the real world due to its flexibility and scalability. Recent advancements in CIL have achieved outstanding performance. However, deep neural networks, including CIL models, struggle to resist adversarial attacks. At present, the majority of research in CIL focuses on alleviating catastrophic forgetting, with little comprehensive exploration of enhancing adversarial robustness. To this end, we introduce a novel CIL framework called the Perturbation Volume-up Framework (PVF). This framework divides each epoch into multiple iterations, wherein three main tasks are performed sequentially: intensifying adversarial data, extracting new knowledge, and reinforcing old knowledge. To intensify adversarial data, we propose the Fused Robustness Augmentation (FRA) approach. This method incorporates more generalized knowledge into the adversarial data by randomly blending data and leveraging finely tuned Jensen-Shannon (JS) divergence. For the remaining two tasks, we introduce a set of regularization techniques known as Knowledge Inspiration Regularization (KIR). This regularization employs innovative classification and distillation losses to enhance the model's generalization performance while preserving previously learned knowledge. Extensive experiments have demonstrated the effectiveness of our method in enhancing the adversarial robustness of CIL models. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2025.
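The record contains no implementation details beyond the abstract. As a minimal sketch of the two ingredients FRA is described as combining — random data blending and a Jensen-Shannon divergence term — the following hypothetical NumPy snippet illustrates a mixup-style blend and a standard JS divergence between predicted distributions; the function names, the beta-distributed blending, and the exact form of the loss are assumptions, not the paper's code:

```python
import numpy as np

def softmax(z):
    """Convert logits to a probability distribution (last axis)."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two distributions.

    Symmetric, non-negative, and bounded above by ln 2.
    """
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)), axis=-1)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def random_blend(x1, x2, alpha=0.4):
    """Mixup-style random blend of two inputs (hypothetical stand-in
    for FRA's 'randomly blending data' step)."""
    lam = np.random.beta(alpha, alpha)
    return lam * x1 + (1.0 - lam) * x2, lam
```

A consistency term of the form `js_divergence(softmax(model(x_clean)), softmax(model(x_adv)))` is one common way such a divergence is used to encourage stable predictions on perturbed data; whether FRA applies it this way is not stated in the abstract.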
Source:
ISSN: 0302-9743
Year: 2025
Volume: 15032 LNCS
Page: 61-75
Language: English
Impact Factor: 0.402 (JCR@2005)