State-of-the-art optical-based physical adversarial attacks for deep
learning computer vision systems
- URL: http://arxiv.org/abs/2303.12249v1
- Date: Wed, 22 Mar 2023 01:14:52 GMT
- Title: State-of-the-art optical-based physical adversarial attacks for deep
learning computer vision systems
- Authors: Junbin Fang, You Jiang, Canjian Jiang, Zoe L. Jiang, Siu-Ming Yiu,
Chuanyi Liu
- Abstract summary: Adversarial attacks can mislead deep learning models into making false predictions by implanting small perturbations into the original input that are imperceptible to the human eye.
Physical adversarial attacks are more realistic, as the perturbation is introduced to the input before it is captured and converted to a digital image.
This paper focuses on optical-based physical adversarial attack techniques for computer vision systems.
- Score: 3.3470481105928216
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial attacks can mislead deep learning models into making
false predictions by implanting small perturbations into the original input
that are imperceptible to the human eye, which poses a serious security threat
to computer vision systems based on deep learning. Physical adversarial
attacks are more realistic than digital adversarial attacks, because the
perturbation is introduced to the input before it is captured and converted
to a digital image inside the vision system. In this paper, we focus on
physical adversarial attacks and further classify them into invasive and
non-invasive. Optical-based physical adversarial attack techniques (e.g.,
using light irradiation) belong to the non-invasive category. Because such
perturbations closely resemble effects produced by the natural environment in
the real world, they are easily overlooked by humans; they are therefore
highly stealthy and practical to execute, and can pose significant, even
lethal, threats to real systems. This paper focuses on optical-based physical
adversarial attack techniques for computer vision systems, introducing and
discussing them in detail.
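To make the "small, imperceptible perturbation" idea concrete, here is a minimal digital-domain sketch in the FGSM style. It is illustrative only and not taken from the paper; the choice of ResNet-18 and the epsilon budget are assumptions.
```python
# Illustrative FGSM-style sketch (not from the paper): a per-pixel perturbation
# bounded by epsilon shifts the prediction while staying visually negligible.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm_perturb(image, label, epsilon=4 / 255):
    """Return image + epsilon * sign(d loss / d image)."""
    image = image.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Usage (hypothetical tensors): x is a (1, 3, 224, 224) image in [0, 1],
# y = torch.tensor([true_class]); model(fgsm_perturb(x, y)).argmax() may flip.
```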
Related papers
- Principles of Designing Robust Remote Face Anti-Spoofing Systems [60.05766968805833]
This paper sheds light on the vulnerabilities of state-of-the-art face anti-spoofing methods against digital attacks.
It presents a comprehensive taxonomy of common threats encountered in face anti-spoofing systems.
arXiv Detail & Related papers (2024-06-06T02:05:35Z) - Why Don't You Clean Your Glasses? Perception Attacks with Dynamic
Optical Perturbations [17.761200546223442]
Adapting adversarial attacks to the physical world is desirable for the attacker, as this removes the need to compromise digital systems.
We present EvilEye, a man-in-the-middle perception attack that leverages transparent displays to generate dynamic physical adversarial examples.
arXiv Detail & Related papers (2023-07-24T21:16:38Z) - Visually Adversarial Attacks and Defenses in the Physical World: A
Survey [27.40548512511512]
The current adversarial attacks in computer vision can be divided into digital attacks and physical attacks according to their different attack forms.
In this paper, we present a survey of current physical adversarial attacks and physical adversarial defenses in computer vision.
arXiv Detail & Related papers (2022-11-03T09:28:45Z) - Physical Adversarial Attack meets Computer Vision: A Decade Survey [55.38113802311365]
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
arXiv Detail & Related papers (2022-09-30T01:59:53Z) - A Survey on Physical Adversarial Attack in Computer Vision [7.053905447737444]
Deep neural networks (DNNs) have been demonstrated to be vulnerable to adversarial examples crafted by malicious tiny noise.
With the increasing deployment of DNN-based systems in the real world, strengthening the robustness of these systems is urgent.
arXiv Detail & Related papers (2022-09-28T17:23:52Z) - Adversarially trained neural representations may already be as robust as
corresponding biological neural representations [66.73634912993006]
We develop a method for performing adversarial visual attacks directly on primate brain activity.
We report that the biological neurons that make up the visual systems of primates exhibit susceptibility to adversarial perturbations that is comparable in magnitude to existing (robustly trained) artificial neural networks.
arXiv Detail & Related papers (2022-06-19T04:15:29Z) - Shadows can be Dangerous: Stealthy and Effective Physical-world
Adversarial Attack by Natural Phenomenon [79.33449311057088]
We study a new type of optical adversarial examples, in which the perturbations are generated by a very common natural phenomenon, shadow.
We extensively evaluate the effectiveness of this new attack on both simulated and real-world environments.
arXiv Detail & Related papers (2022-03-08T02:40:18Z) - Real-World Adversarial Examples involving Makeup Application [58.731070632586594]
We propose a physical adversarial attack with the use of full-face makeup.
Our attack can effectively overcome manual errors in makeup application, such as color and position-related errors.
arXiv Detail & Related papers (2021-09-04T05:29:28Z) - Adversarial Machine Learning for Cybersecurity and Computer Vision:
Current Developments and Challenges [2.132096006921048]
Research in adversarial machine learning addresses a significant threat to the wide application of machine learning techniques.
We first discuss three main categories of attacks against machine learning techniques -- poisoning attacks, evasion attacks, and privacy attacks.
We notice adversarial samples in cybersecurity and computer vision are fundamentally different.
arXiv Detail & Related papers (2021-06-30T03:05:58Z) - Online Alternate Generator against Adversarial Attacks [144.45529828523408]
Deep learning models are notoriously sensitive to adversarial examples which are synthesized by adding quasi-imperceptible noises on real images.
We propose a portable defense method, online alternate generator, which does not need to access or modify the parameters of the target networks.
The proposed method works by online synthesizing another image from scratch for an input image, instead of removing or destroying adversarial noises.
arXiv Detail & Related papers (2020-09-17T07:11:16Z) - Adversarial Light Projection Attacks on Face Recognition Systems: A
Feasibility Study [21.42041262836322]
We investigate the feasibility of conducting real-time physical attacks on face recognition systems using adversarial light projections.
The adversary generates a digital adversarial pattern using one or more images of the target available to the adversary.
The digital adversarial pattern is then projected onto the adversary's face in the physical domain to either impersonate a target (impersonation) or evade recognition (obfuscation); a toy digital-domain sketch of this idea follows the list.
arXiv Detail & Related papers (2020-03-24T23:06:25Z)
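The light-projection entry above describes optimizing a digital pattern and then projecting it with light. As a rough, assumption-laden toy (the additive light model, the classifier, and the optimization loop are illustrative, not the cited paper's actual pipeline), the digital-domain simulation could look like this:
```python
# Toy sketch (assumptions throughout, not the cited papers' method): model an
# optical perturbation as additive light and optimize it against a classifier.
import torch

def simulate_light_projection(image, pattern, intensity=0.2):
    """Projected light adds to the captured image instead of replacing pixels."""
    return (image + intensity * pattern).clamp(0, 1)

def optimize_pattern(model, image, target_class, steps=100, lr=0.05):
    """Gradient-descend a low-intensity light pattern so that the lit image is
    classified as target_class (an impersonation-style objective)."""
    logits_pattern = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([logits_pattern], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        pattern = logits_pattern.sigmoid()  # keep the pattern in [0, 1]
        lit = simulate_light_projection(image, pattern)
        loss = torch.nn.functional.cross_entropy(model(lit), target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return logits_pattern.sigmoid().detach()

# image is assumed to be a batched (1, 3, H, W) tensor in [0, 1]; in a real
# attack the optimized pattern would be rendered by a projector, not added digitally.
```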
This list is automatically generated from the titles and abstracts of the papers on this site.