X-Detect: Explainable Adversarial Patch Detection for Object Detectors
in Retail
- URL: http://arxiv.org/abs/2306.08422v2
- Date: Sun, 2 Jul 2023 06:39:59 GMT
- Authors: Omer Hofman, Amit Giloni, Yarin Hayun, Ikuya Morikawa, Toshiya
Shimizu, Yuval Elovici and Asaf Shabtai
- Abstract summary: Existing methods for detecting adversarial attacks on object detectors have had difficulty detecting new real-life attacks.
We present X-Detect, a novel adversarial patch detector that can detect adversarial samples in real time.
X-Detect uses an ensemble of explainable-by-design detectors that utilize object extraction, scene manipulation, and feature transformation techniques.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Object detection models, which are widely used in various domains (such as
retail), have been shown to be vulnerable to adversarial attacks. Existing
methods for detecting adversarial attacks on object detectors have had
difficulty detecting new real-life attacks. We present X-Detect, a novel
adversarial patch detector that can: i) detect adversarial samples in real
time, allowing the defender to take preventive action; ii) provide explanations
for the alerts raised to support the defender's decision-making process; and
iii) handle unfamiliar threats in the form of new attacks. Given a new scene,
X-Detect uses an ensemble of explainable-by-design detectors that utilize
object extraction, scene manipulation, and feature transformation techniques to
determine whether an alert needs to be raised. X-Detect was evaluated in both
the physical and digital space using five different attack scenarios (including
adaptive attacks) and the COCO dataset and our new Superstore dataset. The
physical evaluation was performed using a smart shopping cart setup in
real-world settings and included 17 adversarial patch attacks recorded in 1,700
adversarial videos. The results showed that X-Detect outperforms the
state-of-the-art methods in distinguishing between benign and adversarial
scenes for all attack scenarios while maintaining a 0% FPR (no false alarms)
and providing actionable explanations for the alerts raised. A demo is
available.
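The abstract describes X-Detect as an ensemble of explainable-by-design detectors that jointly decide whether to raise an alert and attach explanations to it. The sketch below illustrates that decision logic only; the detector names, the `Verdict` type, and the any-detector-flags alert policy are illustrative assumptions based on the abstract, not the paper's actual interfaces.

```python
# Hypothetical sketch of X-Detect's ensemble decision logic, based only on
# the abstract: several explainable-by-design detectors each examine a scene,
# and the ensemble raises an alert with human-readable explanations.
# All names and interfaces here are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Verdict:
    name: str          # which detector produced this verdict
    adversarial: bool  # did it flag the scene?
    explanation: str   # human-readable reason, supporting the defender

Detector = Callable[[object], Verdict]

def object_extraction_detector(scene) -> Verdict:
    # Placeholder: would compare detections before/after object extraction.
    return Verdict("object-extraction", False, "detections consistent")

def scene_manipulation_detector(scene) -> Verdict:
    # Placeholder: would re-run detection on a manipulated (e.g. blurred) scene.
    return Verdict("scene-manipulation", False, "predictions stable")

def ensemble_detect(scene, detectors: List[Detector]) -> Tuple[bool, List[str]]:
    """Raise an alert if any detector flags the scene; return explanations."""
    verdicts = [d(scene) for d in detectors]
    alert = any(v.adversarial for v in verdicts)
    explanations = [f"{v.name}: {v.explanation}" for v in verdicts]
    return alert, explanations

alert, why = ensemble_detect(None, [object_extraction_detector,
                                    scene_manipulation_detector])
```

An alert-on-any policy is one plausible reading of the abstract's 0% FPR claim (each detector must be individually conservative); a voting threshold would be an equally valid variant.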
Related papers
- New Adversarial Image Detection Based on Sentiment Analysis [37.139957973240264]
Adversarial attack models, e.g., DeepFool, are on the rise and outrunning adversarial example detection techniques.
This paper presents a new adversarial example detector that outperforms state-of-the-art detectors in identifying the latest adversarial attacks on image datasets.
arXiv Detail & Related papers (2023-05-03T14:32:21Z)
- Object-fabrication Targeted Attack for Object Detection [54.10697546734503]
Adversarial attacks on object detection comprise targeted and untargeted attacks.
A new object-fabrication targeted attack mode can mislead detectors to fabricate extra false objects with specific target labels.
arXiv Detail & Related papers (2022-12-13T08:42:39Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- BadDet: Backdoor Attacks on Object Detection [42.40418007499009]
We propose four kinds of backdoor attacks for the object detection task.
A trigger can falsely generate an object of the target class.
A single trigger can change the predictions of all objects in an image to the target class.
arXiv Detail & Related papers (2022-05-28T18:02:11Z)
- Zero-Query Transfer Attacks on Context-Aware Object Detectors [95.18656036716972]
Adversarial attacks perturb images such that a deep neural network produces incorrect classification results.
A promising approach to defend against adversarial attacks on natural multi-object scenes is to impose a context-consistency check.
We present the first approach for generating context-consistent adversarial attacks that can evade the context-consistency check.
arXiv Detail & Related papers (2022-03-29T04:33:06Z)
- ObjectSeeker: Certifiably Robust Object Detection against Patch Hiding Attacks via Patch-agnostic Masking [95.6347501381882]
Object detectors are found to be vulnerable to physical-world patch hiding attacks.
We propose ObjectSeeker as a framework for building certifiably robust object detectors.
arXiv Detail & Related papers (2022-02-03T19:34:25Z)
- Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection [142.24869736769432]
Adversarial patch attacks pose a serious threat to state-of-the-art object detectors.
We propose Segment and Complete defense (SAC), a framework for defending object detectors against patch attacks.
We show SAC can significantly reduce the targeted attack success rate of physical patch attacks.
arXiv Detail & Related papers (2021-12-08T19:18:48Z)
- We Can Always Catch You: Detecting Adversarial Patched Objects WITH or WITHOUT Signature [3.5272597442284104]
In this paper, we explore the problem of detecting adversarial patch attacks on object detection.
A fast signature-based defense method is proposed and demonstrated to be effective.
The newly generated adversarial patches can successfully evade the proposed signature-based defense.
We present a novel signature-independent detection method based on internal content semantic consistency.
arXiv Detail & Related papers (2021-06-09T17:58:08Z)
- Adversarial Detection and Correction by Matching Prediction Distributions [0.0]
The detector almost completely neutralises powerful attacks like Carlini-Wagner or SLIDE on MNIST and Fashion-MNIST.
We show that our method is still able to detect the adversarial examples in the case of a white-box attack where the attacker has full knowledge of both the model and the defence.
arXiv Detail & Related papers (2020-02-21T15:45:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.