Fool the Hydra: Adversarial Attacks against Multi-view Object Detection Systems
- URL: http://arxiv.org/abs/2312.00173v1
- Date: Thu, 30 Nov 2023 20:11:44 GMT
- Title: Fool the Hydra: Adversarial Attacks against Multi-view Object Detection Systems
- Authors: Bilel Tarchoun, Quazi Mishkatul Alam, Nael Abu-Ghazaleh, Ihsen Alouani
- Abstract summary: Adversarial patches exemplify the tangible manifestation of the threat posed by adversarial attacks on Machine Learning (ML) models in real-world scenarios.
Multi-view object detection systems combine data from multiple views and reach reliable detection results even in difficult environments.
Despite their importance in real-world vision applications, the vulnerability of multi-view systems to adversarial patches has not been sufficiently investigated.
- Score: 3.4673556247932225
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Adversarial patches exemplify the tangible manifestation of the threat posed
by adversarial attacks on Machine Learning (ML) models in real-world scenarios.
Robustness against these attacks is of the utmost importance when designing
computer vision applications, especially for safety-critical domains such as
CCTV systems. In most practical situations, monitoring open spaces requires
multi-view systems to overcome acquisition challenges such as occlusion
handling. Multi-view object detection systems combine data from multiple views
and reach reliable detection results even in difficult environments. Despite
their importance in real-world vision applications, the vulnerability of
multi-view systems to adversarial patches has not been sufficiently
investigated. In this paper, we raise the following question: does the
increased performance and information sharing across views offer, as a
by-product, robustness to adversarial patches? We first conduct a preliminary
analysis showing promising robustness against off-the-shelf adversarial
patches, even in an extreme setting where patches are applied to all views by
all persons in the Wildtrack benchmark. However, we challenge this observation
by proposing two new attacks: (i) the first attack targets a multi-view CNN
and maximizes the global detection loss by projecting the patch gradient onto
the different views and aggregating the obtained local gradients; (ii) the
second attack targets a Transformer-based multi-view framework and, in
addition to the focal loss, maximizes a Transformer-specific loss that
dissipates its attention blocks. Our results show a large degradation in the
detection performance of the victim multi-view systems: our first patch attack
reaches an attack success rate of 73%, while our second attack reduces the
performance of its target detector by 62%.
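To make the gradient-aggregation idea behind attack (i) concrete, the sketch below shows one plausible way to optimize a single patch shared across cameras: the patch is rendered into every view, a per-view detection loss is back-propagated to the shared patch, and the resulting local gradients are averaged before each ascent step. This is a minimal illustration under assumed interfaces, not the authors' code: `detection_loss`, `place_fns`, and `attn_maps_fn` (an optional hook used to mimic the attention-dissipation term of attack (ii)) are hypothetical stand-ins for the victim detector's API.

```python
import torch

def multiview_patch_attack(detection_loss, views, place_fns, patch,
                           steps=200, lr=0.01, attn_maps_fn=None, beta=1.0):
    """Sketch of a shared-patch attack on a multi-view detector (assumed APIs).

    detection_loss : callable(view_tensor) -> scalar detection (e.g. focal) loss
    views          : list of per-camera image tensors
    place_fns      : differentiable per-view functions that warp the shared
                     patch into the corresponding camera geometry (hypothetical)
    patch          : initial patch tensor with values in [0, 1]
    attn_maps_fn   : optional callable(view_tensor) -> list of attention maps,
                     used to add an attention-dissipation term (attack ii)
    """
    patch = patch.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([patch], lr=lr)

    for _ in range(steps):
        local_grads = []
        for view, place in zip(views, place_fns):
            adv_view = place(view, patch)            # render patch into this view
            loss = detection_loss(adv_view)          # per-view detection loss
            if attn_maps_fn is not None:
                # "dissipate" attention: push attention maps toward uniformity
                # by maximizing their entropy (one possible reading of attack ii)
                for attn in attn_maps_fn(adv_view):
                    p = attn.clamp_min(1e-12)
                    loss = loss + beta * (-(p * p.log()).sum(dim=-1).mean())
            # project the view-level gradient back onto the shared patch
            (g,) = torch.autograd.grad(loss, patch)
            local_grads.append(g)

        # aggregate the local gradients and take a gradient-ascent step
        opt.zero_grad()
        patch.grad = -torch.stack(local_grads).mean(dim=0)  # negate: Adam minimizes
        opt.step()
        patch.data.clamp_(0.0, 1.0)                 # keep the patch a valid image
    return patch.detach()
```

In practice, each `place_fn` would use the corresponding camera's geometry (e.g. a homography) so that the same physical patch is rendered consistently in every view before the per-view gradients are aggregated.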
Related papers
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, but adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- Multi-granular Adversarial Attacks against Black-box Neural Ranking Models [111.58315434849047]
We create high-quality adversarial examples by incorporating multi-granular perturbations.
We transform the multi-granular attack into a sequential decision-making process.
Our attack method surpasses prevailing baselines in both attack effectiveness and imperceptibility.
arXiv Detail & Related papers (2024-04-02T02:08:29Z)
- Towards Robust Semantic Segmentation against Patch-based Attack via Attention Refinement [68.31147013783387]
We observe that the attention mechanism is vulnerable to patch-based adversarial attacks.
In this paper, we propose a Robust Attention Mechanism (RAM) to improve the robustness of the semantic segmentation model.
arXiv Detail & Related papers (2024-01-03T13:58:35Z)
- Visual Adversarial Examples Jailbreak Aligned Large Language Models [66.53468356460365]
We show that the continuous and high-dimensional nature of the visual input makes it a weak link against adversarial attacks.
We exploit visual adversarial examples to circumvent the safety guardrail of aligned LLMs with integrated vision.
Our study underscores the escalating adversarial risks associated with the pursuit of multimodality.
arXiv Detail & Related papers (2023-06-22T22:13:03Z)
- To Make Yourself Invisible with Adversarial Semantic Contours [47.755808439588094]
Adversarial Semantic Contour (ASC) is an estimate of a Bayesian formulation of sparse attack with a deceived prior of object contour.
We show that ASC can corrupt the prediction of 9 modern detectors with different architectures.
We conclude with cautions about contours being a common weakness of object detectors with various architectures.
arXiv Detail & Related papers (2023-03-01T07:22:39Z)
- ExploreADV: Towards exploratory attack for Neural Networks [0.33302293148249124]
ExploreADV is a general and flexible adversarial attack system that is capable of modeling regional and imperceptible attacks.
We show that our system offers users good flexibility to focus on sub-regions of inputs, explore imperceptible perturbations and understand the vulnerability of pixels/regions to adversarial attacks.
arXiv Detail & Related papers (2023-01-01T07:17:03Z)
- Physical Passive Patch Adversarial Attacks on Visual Odometry Systems [6.391337032993737]
We study patch adversarial attacks on visual odometry-based autonomous navigation systems.
We show for the first time that the error margin of a visual odometry model can be significantly increased by deploying patch adversarial attacks in the scene.
arXiv Detail & Related papers (2022-07-11T14:41:06Z)
- Adversarial Attacks in a Multi-view Setting: An Empirical Study of the Adversarial Patches Inter-view Transferability [3.1542695050861544]
Adversarial attacks add carefully crafted noise to an input, which can fool a detector.
Recent successful real-world printable adversarial patches were proven efficient against state-of-the-art neural networks.
We study the effect of view angle on the effectiveness of an adversarial patch.
arXiv Detail & Related papers (2021-10-10T19:59:28Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)