Contrastive semi-supervised adversarial training method for hyperspectral image classification networks

Shi Cheng1, Liu Ying1, Zhao Minghua1, Miao Qiguang2, Pun Chi-Man3 (1. School of Computer Science and Engineering, Xi'an University of Technology, Xi'an 710048, China; 2. School of Computer Science and Technology, Xidian University, Xi'an 710071, China; 3. Department of Computer and Information Science, Faculty of Science and Technology, University of Macau, Macau 999078, China)

Abstract
Objective Deep neural networks have shown clear superiority in hyperspectral image classification, but the emergence of adversarial examples severely threatens their robustness. Adversarial training provides an effective protection strategy for deep neural networks, yet improving the robustness and generalization ability of the target network with limited labeled samples still requires further study. To this end, this paper proposes a contrastive semi-supervised adversarial training method for hyperspectral image classification networks. Method First, the target model is pre-trained on a small number of labeled samples, and a training set is built from the few labeled samples together with a large number of unlabeled samples. Then, highly transferable adversarial examples are generated by maximizing the feature difference between the clean and adversarial samples of the training set on the target model. Finally, to reduce the dependence of adversarial training on sample labels and to improve the target model's ability to learn from and generalize to hard adversarial examples, the output-layer and intermediate-layer features of the target model and the pre-trained model are fully exploited to construct a contrastive adversarial loss function that optimizes the target model and improves its adversarial robustness. Adversarial example generation and target network optimization alternate, and neither step requires sample labels. Result The proposed method was compared with five mainstream adversarial training methods on the PaviaU and Indian Pines hyperspectral image datasets, and it shows clear superiority in defending against both known and various unknown attacks. Against six unknown attacks, compared with the supervised adversarial training methods AT (adversarial training) and TRADES (trade-off between robustness and accuracy), the classification accuracy of our method improves by 13.3% and 16% on average over the two datasets; compared with the semi-supervised adversarial training methods SRT (semi-supervised robust training), RST (robust self-training), and MART (misclassification aware adversarial risk training), it improves by 5.6% and 4.4% on average. The experimental results demonstrate the effectiveness of the proposed model. Conclusion The proposed method can improve the defense performance of hyperspectral image classification networks with a small number of labeled samples.
Keywords
Contrastive semi-supervised adversarial training method for hyperspectral image classification networks

Shi Cheng1, Liu Ying1, Zhao Minghua1, Miao Qiguang2, Pun Chi-Man3(1.School of Computer Science and Engineering, Xi'an University of Technology, Xi'an 710048, China;2.School of Computer Science and Technology, Xidian University, Xi'an 710071, China;3.Department of Computer and Information Science, Faculty of Science and Technology, University of Macau, Macau 999078, China)

Abstract
Objective Deep neural networks have demonstrated significant superiority in hyperspectral image classification tasks. However, the emergence of adversarial examples poses a serious threat to their robustness. Research on adversarial training methods provides an effective defense strategy for protecting deep neural networks. However, existing adversarial training methods often require a large number of labeled examples to enhance the robustness of deep neural networks, which increases the difficulty of labeling hyperspectral image examples. In addition, a critical limitation of current adversarial training approaches is that they usually do not capture intermediate layer features in the target network and pay less attention to challenging adversarial samples. This oversight can reduce the generalization ability of the defense model. To further enhance the adversarial robustness of hyperspectral image classification networks with limited labeled examples, this paper proposes a contrastive semi-supervised adversarial training method. Method First, the target model is pre-trained using a small number of labeled examples. Second, for a large number of unlabeled examples, the corresponding adversarial examples are generated by maximizing the feature difference between clean unlabeled examples and adversarial examples on the target model. Adversarial samples generated using intermediate layer features of the network exhibit higher transferability than those generated using only output layer features. Moreover, feature-based adversarial sample generation methods do not rely on example labels. Therefore, we generate adversarial examples based on the intermediate layer features of the network. Third, the generated adversarial examples are used to enhance the robustness of the target model.
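The label-free, feature-based attack described above can be illustrated with a minimal sketch. The abstract does not specify the network or attack hyperparameters, so the feature extractor below (a single ReLU layer standing in for the target network's intermediate layers), the L-infinity budget `eps`, the step size `alpha`, and the step count are all hypothetical choices; the sketch only shows the core idea of maximizing the intermediate-feature distance between a clean sample and its adversarial counterpart.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the target network up to the chosen intermediate
# layer: f(x) = ReLU(W x). The real model in the paper is a deep CNN.
W = rng.standard_normal((16, 8))

def features(x):
    """Intermediate-layer features of the (toy) target model."""
    return np.maximum(W @ x, 0.0)

def feature_attack(x_clean, eps=0.05, alpha=0.01, steps=20):
    """Label-free attack: maximize ||f(x_adv) - f(x_clean)||^2
    subject to ||x_adv - x_clean||_inf <= eps (PGD-style ascent)."""
    f_clean = features(x_clean)
    x_adv = x_clean + rng.uniform(-eps, eps, x_clean.shape)  # random start
    for _ in range(steps):
        pre = W @ x_adv
        diff = np.maximum(pre, 0.0) - f_clean
        # Analytic gradient of the squared feature distance w.r.t. x_adv
        # (ReLU mask times the backpropagated difference).
        grad = W.T @ (2.0 * diff * (pre > 0))
        x_adv = x_adv + alpha * np.sign(grad)                 # ascent step
        x_adv = np.clip(x_adv, x_clean - eps, x_clean + eps)  # project to ball
    return x_adv

x = rng.standard_normal(8)
x_adv = feature_attack(x)
```

Because the objective depends only on features, the same loop applies to the large pool of unlabeled hyperspectral pixels; in practice the gradient would come from automatic differentiation rather than the closed form used here.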
The defense capabilities of the target model against challenging adversarial samples are enhanced by defining the robust upper bound and robust lower bound of the target network based on the pre-trained target model, and a contrastive adversarial loss is designed on both the intermediate feature layer and the output layer to optimize the model with respect to these bounds. The contrastive loss function consists of three terms: a classification loss, an output contrastive loss, and a feature contrastive loss. The classification loss maintains the classification accuracy of the target model on clean examples. The output contrastive loss encourages the output of an adversarial example to move closer to the pre-defined output-layer robust upper bound and away from the pre-defined output-layer robust lower bound. The feature contrastive loss pushes the intermediate-layer features of an adversarial example closer to the pre-defined intermediate robust upper bound and away from the pre-defined intermediate robust lower bound. Together, the output and feature contrastive losses improve the classification accuracy and generalization ability of the target network against challenging adversarial examples. Adversarial example generation and target network optimization are performed iteratively, and example labels are not required during training. By incorporating a limited number of labeled examples in model training, both the output layer and the intermediate feature layer are used to enhance the defense ability of the target model against known and unknown attack methods. Result We compared the proposed method with five mainstream adversarial training methods (two supervised and three semi-supervised) on the PaviaU and Indian Pines hyperspectral image datasets.
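The three-term loss described above can be sketched as follows. The abstract does not give the exact definitions of the robust upper and lower bounds or the term weights, so this sketch assumes squared-Euclidean distances, a hinge margin, and equal weights, all of which are illustrative choices rather than the paper's actual formulation.

```python
import numpy as np

def l2(a, b):
    """Squared Euclidean distance (illustrative distance choice)."""
    return float(np.sum((a - b) ** 2))

def contrastive_adversarial_loss(out_clean, out_adv, feat_adv,
                                 out_upper, out_lower,
                                 feat_upper, feat_lower,
                                 y_onehot=None, margin=1.0,
                                 lam_out=1.0, lam_feat=1.0):
    """Sketch of the three-term contrastive adversarial loss:
    classification + output contrastive + feature contrastive."""
    # (1) Classification loss (cross-entropy) on clean examples,
    #     applied only when a label is available (semi-supervised).
    cls = 0.0
    if y_onehot is not None:
        p = np.exp(out_clean - out_clean.max())
        p /= p.sum()
        cls = -float(np.sum(y_onehot * np.log(p + 1e-12)))
    # (2) Output contrastive term: pull the adversarial output toward
    #     the robust upper bound, push it away from the lower bound.
    out_con = max(0.0, margin + l2(out_adv, out_upper) - l2(out_adv, out_lower))
    # (3) Feature contrastive term: same idea on intermediate features.
    feat_con = max(0.0, margin + l2(feat_adv, feat_upper) - l2(feat_adv, feat_lower))
    return cls + lam_out * out_con + lam_feat * feat_con
```

An adversarial example whose output and features sit near the upper bounds and far from the lower bounds incurs zero contrastive penalty, while one near the lower bounds is penalized; minimizing this loss therefore drives the target model's responses toward the pre-trained model's robust behavior.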
Compared with the mainstream adversarial training methods, the proposed method demonstrates significant superiority in defending against both known and various unknown attacks. Faced with six unknown attacks, compared with the supervised adversarial training methods AT and TRADES, our method showed an average improvement in classification accuracy of 13.3% and 16%, respectively. Compared with the semi-supervised adversarial training methods SRT, RST, and MART, our method achieved an average improvement in classification accuracy of 5.6% and 4.4%, respectively. Compared with the target model without any defense, for example on Inception_V3, the defense performance of the proposed method against different attacks improved by 34.63%–92.78%. Conclusion The proposed contrastive semi-supervised adversarial training method can improve the defense performance of hyperspectral image classification networks with limited labeled examples. By maximizing the feature distance between clean examples and adversarial examples on the target model, we can generate highly transferable adversarial examples. To address the limitation that the number of labeled examples imposes on defense generalization, we define the robust upper bound and robust lower bound based on the pre-trained target model and design an optimization model with a contrastive semi-supervised loss function. By fully leveraging the feature information provided by a few labeled examples and incorporating a large number of unlabeled examples, we can further enhance the generalization ability of the target model. The defense performance of the proposed method is superior to that of the supervised adversarial training methods.
Keywords
