
Lund University Publications


CNN adversarial attack mitigation using perturbed samples training

Hashemi, Atiye Sadat (LU) and Mozaffari, Saeed (2021) In Multimedia Tools and Applications 80, pp. 22077–22095
Abstract
Susceptibility to adversarial examples is one of the major concerns in convolutional neural network (CNN) applications. Training the model with adversarial examples, known as adversarial training, is a common countermeasure against such attacks. In reality, however, defenders are uninformed about how the attacker generates adversarial examples. It is therefore pivotal to use more general alternatives that intrinsically improve model robustness. For this purpose, we train CNNs with perturbed samples, manipulated by various transformations and contaminated by different noises, to foster the robustness of networks against adversarial attacks. This idea derives from the fact that both adversarial and noisy samples undermine classifier accuracy. We propose the combination of a convolutional denoising autoencoder with a classifier (CDAEC) as a defensive structure. The proposed method does not add to the computational cost. Experimental results on the MNIST database demonstrate that the accuracy of a CDAEC trained on perturbed samples remained above 71.29% under adversarial attacks.
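The abstract describes two ideas: training on generically perturbed inputs (transformations and noise) rather than attack-specific adversarial examples, and a defensive structure that places a convolutional denoising autoencoder in front of the classifier (CDAEC). The sketch below is only an illustration of that general structure in PyTorch under stated assumptions; the layer sizes, the Gaussian-noise perturbation, the loss weighting, and the names CDAEC, perturb, and train_step are assumptions for illustration and are not taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CDAEC(nn.Module):
    """Illustrative convolutional denoising autoencoder followed by a classifier."""
    def __init__(self, num_classes=10):
        super().__init__()
        # Encoder: 1x28x28 -> 32x7x7
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: 32x7x7 -> 1x28x28 (reconstructs a denoised digit)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )
        # Classifier operating on the reconstruction
        self.classifier = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 14 * 14, num_classes),
        )

    def forward(self, x):
        denoised = self.decoder(self.encoder(x))
        return denoised, self.classifier(denoised)

def perturb(x):
    """Hypothetical perturbation: additive Gaussian noise clipped to [0, 1]."""
    return torch.clamp(x + 0.3 * torch.randn_like(x), 0.0, 1.0)

def train_step(model, optimizer, x_clean, y):
    """One joint step: reconstruct the clean input and classify the denoised output."""
    denoised, logits = model(perturb(x_clean))
    loss = F.mse_loss(denoised, x_clean) + F.cross_entropy(logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Training the autoencoder and the classifier jointly on noisy inputs, with a reconstruction term plus a classification term, is one plausible way to realise the perturbed-sample training the abstract describes; at inference the defence is a single forward pass through the combined network.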
author
Hashemi, Atiye Sadat (LU) and Mozaffari, Saeed
publishing date
2021
type
Contribution to journal
publication status
published
in
Multimedia Tools and Applications
volume
80
pages
22077–22095
external identifiers
  • scopus:85103180039
DOI
10.1007/s11042-020-10379-6
language
English
LU publication?
no
id
eb34f17b-cc07-4ced-bb12-5a9dbb0f301e
date added to LUP
2025-01-31 14:26:14
date last changed
2025-02-03 08:27:16
@article{eb34f17b-cc07-4ced-bb12-5a9dbb0f301e,
  abstract     = {{Susceptibility to adversarial examples is one of the major concerns in convolutional neural network (CNN) applications. Training the model with adversarial examples, known as adversarial training, is a common countermeasure against such attacks. In reality, however, defenders are uninformed about how the attacker generates adversarial examples. It is therefore pivotal to use more general alternatives that intrinsically improve model robustness. For this purpose, we train CNNs with perturbed samples, manipulated by various transformations and contaminated by different noises, to foster the robustness of networks against adversarial attacks. This idea derives from the fact that both adversarial and noisy samples undermine classifier accuracy. We propose the combination of a convolutional denoising autoencoder with a classifier (CDAEC) as a defensive structure. The proposed method does not add to the computational cost. Experimental results on the MNIST database demonstrate that the accuracy of a CDAEC trained on perturbed samples remained above 71.29% under adversarial attacks.}},
  author       = {{Hashemi, Atiye Sadat and Mozaffari, Saeed}},
  language     = {{eng}},
  pages        = {{22077--22095}},
  journal      = {{Multimedia Tools and Applications}},
  title        = {{CNN adversarial attack mitigation using perturbed samples training}},
  url          = {{http://dx.doi.org/10.1007/s11042-020-10379-6}},
  doi          = {{10.1007/s11042-020-10379-6}},
  volume       = {{80}},
  year         = {{2021}},
}