Indexed by:
Abstract:
Active learning (AL) aims to maximize a model's performance when the labeled data set is limited and the annotation cost is high. Although AL can be efficiently implemented with deep neural networks (DNNs), it is questionable whether the model can still generalize well when there are significant distributional deviations between the labeled and unlabeled data sets. In this article, we introduce adversarial training and adversarial samples into AL to mitigate the degradation in generalization caused by these differing data distributions. Our proposed adversarial training AL (ATAL) has two advantages. First, adversarial training across different networks gives the network better prediction performance and robustness with limited labeled samples. Second, the adversarial samples generated during adversarial training effectively expand the labeled data set, so that the designed query function can efficiently select the most informative unlabeled samples based on this expanded set. Extensive experiments verify the feasibility and efficiency of the proposed method; on CIFAR-10, it achieves new state-of-the-art robustness and accuracy.
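The abstract does not specify how the adversarial samples are generated. As a hedged illustration only, the sketch below produces an FGSM-style adversarial example for a simple logistic-regression model in NumPy; the model, variable names, and epsilon are illustrative assumptions, not the paper's DNN setup or the ATAL algorithm itself:

```python
import numpy as np

def fgsm_adversarial(x, y, w, b, eps):
    """Generate an FGSM-style adversarial example for a logistic-regression
    model p(y=1|x) = sigmoid(w.x + b): perturb x by eps in the sign of the
    input gradient of the cross-entropy loss, which increases the loss."""
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid prediction
    grad_x = (p - y) * w           # d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad_x)

# Toy usage with an illustrative 2-D model and one labeled point.
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.5, 0.5])
y = 1.0
x_adv = fgsm_adversarial(x, y, w, b, eps=0.1)
```

In an AL pipeline along the lines the abstract describes, such perturbed points could be paired with their original labels to enlarge the labeled pool before the query function scores the unlabeled data.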
Keyword:
Reprint's Address:
Email:
Version:
Source:
IEEE INTERNET OF THINGS JOURNAL
ISSN: 2327-4662
Year: 2024
Issue: 3
Volume: 11
Page: 4787-4800
8.200
JCR@2023
Affiliated Colleges: