Lund University Publications

LUND UNIVERSITY LIBRARIES

Catastrophic child's play : Easy to perform, hard to defend adversarial attacks

Ho, Chih Hui ; Leung, Brandon ; Sandstrom, Erik ; Chang, Yen and Vasconcelos, Nuno (2019) 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2019. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition 2019-June. pp. 9221-9229
Abstract

The problem of adversarial CNN attacks is considered, with an emphasis on attacks that are trivial to perform but difficult to defend. A framework for the study of such attacks is proposed, using real-world object manipulations. Unlike most prior work, this framework supports the design of attacks based on both small and large image perturbations, implemented by camera shake and pose variation. A setup is proposed for the collection of such perturbations and the determination of their perceptibility. It is argued that perceptibility depends on context, and a distinction is made between imperceptible and semantically imperceptible perturbations. While the former survive image comparisons, the latter are perceptible but have no impact on human object recognition. A procedure is proposed to determine the perceptibility of perturbations using Turk experiments, and a dataset covering both perturbation classes, which enables replicable studies of object manipulation attacks, is assembled. Experiments using defenses based on many datasets, CNN models, and algorithms from the literature elucidate the difficulty of defending against these attacks; in fact, none of the existing defenses is found effective against them. Better results are achieved with real-world data augmentation, but even this is not foolproof. These results confirm the hypothesis that current CNNs are vulnerable to attacks implementable even by a child, and that such attacks may prove difficult to defend.
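The abstract's core claim, that cheap input perturbations can flip a classifier's prediction, can be sketched with a minimal toy example. This is not the paper's actual pipeline (which uses real object manipulations against trained CNNs); it is an illustrative stand-in in which a random linear model plays the role of the CNN and additive jitter plays the role of camera shake.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "classifier": scores = W @ x. A stand-in for a CNN;
# the paper's point is that real CNNs can flip predictions under
# trivially produced perturbations such as camera shake or pose change.
W = rng.normal(size=(3, 64))   # 3 classes, 64-dimensional "image"
x = rng.normal(size=64)        # a clean input

def predict(v):
    return int(np.argmax(W @ v))

def shake(v, sigma, rng):
    # Camera-shake stand-in: small additive jitter on the input.
    return v + rng.normal(scale=sigma, size=v.shape)

base = predict(x)
# "Child's play" attack: sample cheap random perturbations and count
# how often the predicted label flips away from the clean prediction.
flips = sum(predict(shake(x, 0.5, rng)) != base for _ in range(1000))
rate = flips / 1000
print(f"base class: {base}, flip rate under jitter: {rate:.2%}")
```

A nonzero flip rate in this toy setting mirrors the qualitative finding of the abstract: no optimization or gradient access is needed, only repeated cheap perturbations of the input.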
author
Ho, Chih Hui ; Leung, Brandon ; Sandstrom, Erik ; Chang, Yen and Vasconcelos, Nuno
publishing date
2019-06
type
Chapter in Book/Report/Conference proceeding
publication status
published
subject
keywords
Deep Learning
host publication
Proceedings - 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2019
series title
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
volume
2019-June
article number
8954169
pages
9 pages
publisher
IEEE Computer Society
conference name
32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2019
conference location
Long Beach, United States
conference dates
2019-06-16 - 2019-06-20
external identifiers
  • scopus:85078726352
ISSN
1063-6919
ISBN
9781728132938
DOI
10.1109/CVPR.2019.00945
language
English
LU publication?
no
id
b12f6a08-6702-4872-a7f7-f86c43841091
date added to LUP
2020-02-10 14:57:00
date last changed
2022-04-18 20:31:33
@inproceedings{b12f6a08-6702-4872-a7f7-f86c43841091,
  abstract     = {{<p>The problem of adversarial CNN attacks is considered, with an emphasis on attacks that are trivial to perform but difficult to defend. A framework for the study of such attacks is proposed, using real-world object manipulations. Unlike most prior work, this framework supports the design of attacks based on both small and large image perturbations, implemented by camera shake and pose variation. A setup is proposed for the collection of such perturbations and the determination of their perceptibility. It is argued that perceptibility depends on context, and a distinction is made between imperceptible and semantically imperceptible perturbations. While the former survive image comparisons, the latter are perceptible but have no impact on human object recognition. A procedure is proposed to determine the perceptibility of perturbations using Turk experiments, and a dataset covering both perturbation classes, which enables replicable studies of object manipulation attacks, is assembled. Experiments using defenses based on many datasets, CNN models, and algorithms from the literature elucidate the difficulty of defending against these attacks; in fact, none of the existing defenses is found effective against them. Better results are achieved with real-world data augmentation, but even this is not foolproof. These results confirm the hypothesis that current CNNs are vulnerable to attacks implementable even by a child, and that such attacks may prove difficult to defend.</p>}},
  author       = {{Ho, Chih Hui and Leung, Brandon and Sandstrom, Erik and Chang, Yen and Vasconcelos, Nuno}},
  booktitle    = {{Proceedings - 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2019}},
  isbn         = {{9781728132938}},
  issn         = {{1063-6919}},
  keywords     = {{Deep Learning}},
  language     = {{eng}},
  month        = {{06}},
  pages        = {{9221--9229}},
  publisher    = {{IEEE Computer Society}},
  series       = {{Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition}},
  title        = {{Catastrophic child's play : Easy to perform, hard to defend adversarial attacks}},
  url          = {{http://dx.doi.org/10.1109/CVPR.2019.00945}},
  doi          = {{10.1109/CVPR.2019.00945}},
  volume       = {{2019-June}},
  year         = {{2019}},
}