DetectorGuard: Provably Securing Object Detectors against Localized
Patch Hiding Attacks
- URL: http://arxiv.org/abs/2102.02956v1
- Date: Fri, 5 Feb 2021 02:02:21 GMT
- Authors: Chong Xiang, Prateek Mittal
- Abstract summary: State-of-the-art object detectors are vulnerable to localized patch hiding attacks.
We propose DetectorGuard, the first general framework for building provably robust detectors against localized patch hiding attacks.
- Score: 28.94435153159868
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: State-of-the-art object detectors are vulnerable to localized patch hiding
attacks where an adversary introduces a small adversarial patch to make
detectors miss salient objects. In this paper, we propose DetectorGuard, the
first general framework for building provably robust detectors against
localized patch hiding attacks. To start, we propose a
general approach for transferring the robustness from image classifiers to
object detectors, which builds a bridge between robust image classification and
robust object detection. We apply a provably robust image classifier to a
sliding window over the image and aggregate the robust window classifications
at different locations into a robust object detection. Second, to mitigate
the notorious trade-off between clean performance and provable robustness, we
use a prediction pipeline in which we compare the outputs of a conventional
detector and a robust detector for catching an ongoing attack. When no attack
is detected, DetectorGuard outputs the precise bounding boxes predicted by the
conventional detector to achieve a high clean performance; otherwise,
DetectorGuard triggers an attack alert for security. Notably, our prediction
strategy ensures that objects incorrectly missed by the robust detector do not
hurt the clean performance of DetectorGuard. Moreover, our approach allows us
to formally prove the robustness of DetectorGuard on certified objects, i.e.,
it either detects the object or triggers an alert, against any patch hiding
attacker. Our evaluation on the PASCAL VOC and MS COCO datasets shows that
DetectorGuard has almost the same clean performance as conventional detectors,
and more importantly, that DetectorGuard achieves the first provable robustness
against localized patch hiding attacks.
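The two ideas in the abstract, sliding-window robust classification and the conventional-vs-robust comparison pipeline, can be sketched roughly as follows. This is a simplified illustration, not the paper's actual algorithm: the classifier, detector, window geometry, and the rule for "explaining" a window with a predicted box are all hypothetical stand-ins.

```python
import numpy as np

WINDOW, STRIDE = 32, 16  # hypothetical sliding-window geometry

def robust_window_objectness(image, classifier):
    """Slide a (stand-in) provably robust image classifier over the image
    and record, per window location, whether it predicts "object"."""
    h, w = image.shape[:2]
    rows = (h - WINDOW) // STRIDE + 1
    cols = (w - WINDOW) // STRIDE + 1
    obj_map = np.zeros((rows, cols), dtype=bool)
    for i in range(rows):
        for j in range(cols):
            y, x = i * STRIDE, j * STRIDE
            obj_map[i, j] = classifier(image[y:y + WINDOW, x:x + WINDOW])
    return obj_map

def detector_guard(image, detector, robust_classifier):
    """Simplified DetectorGuard-style pipeline: emit the conventional
    detector's boxes unless the robust objectness map flags a location
    that no predicted box explains, in which case raise an alert."""
    boxes = detector(image)  # list of (x0, y0, x1, y1)
    obj_map = robust_window_objectness(image, robust_classifier)
    explained = np.zeros_like(obj_map)
    for (x0, y0, x1, y1) in boxes:
        for i in range(obj_map.shape[0]):
            for j in range(obj_map.shape[1]):
                wy, wx = i * STRIDE, j * STRIDE
                # A predicted box "explains" a window cell if they overlap.
                if wx < x1 and x0 < wx + WINDOW and wy < y1 and y0 < wy + WINDOW:
                    explained[i, j] = True
    if np.any(obj_map & ~explained):
        # The robust map saw an object the conventional detector missed.
        return {"alert": True, "boxes": []}
    # Clean case: output the precise conventional boxes.
    return {"alert": False, "boxes": boxes}
```

This mirrors the key property claimed in the abstract: the robust branch is only used to decide between "output conventional boxes" and "alert", so its own misses cannot degrade clean-case predictions.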
Related papers
- Detector Collapse: Physical-World Backdooring Object Detection to Catastrophic Overload or Blindness in Autonomous Driving [17.637155085620634]
Detector Collapse (DC) is a brand-new backdoor attack paradigm tailored for object detection.
DC is designed to instantly incapacitate detectors (i.e., severely impairing the detector's performance and culminating in a denial of service).
We introduce a novel poisoning strategy exploiting natural objects, enabling DC to act as a practical backdoor in real-world environments.
arXiv Detail & Related papers (2024-04-17T13:12:14Z)
- Towards Building Self-Aware Object Detectors via Reliable Uncertainty Quantification and Calibration [17.461451218469062]
In this work, we introduce the Self-Aware Object Detection (SAOD) task.
The SAOD task respects and adheres to the challenges that object detectors face in safety-critical environments such as autonomous driving.
We extensively use our framework, which introduces novel metrics and large scale test datasets, to test numerous object detectors.
arXiv Detail & Related papers (2023-07-03T11:16:39Z)
- On the Importance of Backbone to the Adversarial Robustness of Object Detectors [26.712934402914854]
We argue that using adversarially pre-trained backbone networks is essential for enhancing the adversarial robustness of object detectors.
We propose a simple yet effective recipe for fast adversarial fine-tuning on object detectors with adversarially pre-trained backbones.
Our empirical results set a new milestone and deepen the understanding of adversarially robust object detection.
arXiv Detail & Related papers (2023-05-27T10:26:23Z)
- Adversarially-Aware Robust Object Detector [85.10894272034135]
We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances detection robustness while maintaining detection ability on clean images.
arXiv Detail & Related papers (2022-07-13T13:59:59Z)
- ObjectSeeker: Certifiably Robust Object Detection against Patch Hiding Attacks via Patch-agnostic Masking [95.6347501381882]
Object detectors are found to be vulnerable to physical-world patch hiding attacks.
We propose ObjectSeeker as a framework for building certifiably robust object detectors.
arXiv Detail & Related papers (2022-02-03T19:34:25Z)
- Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection [142.24869736769432]
Adversarial patch attacks pose a serious threat to state-of-the-art object detectors.
We propose Segment and Complete defense (SAC), a framework for defending object detectors against patch attacks.
We show SAC can significantly reduce the targeted attack success rate of physical patch attacks.
arXiv Detail & Related papers (2021-12-08T19:18:48Z)
- Transferable Adversarial Examples for Anchor Free Object Detection [44.7397139463144]
We present the first adversarial attack on anchor-free object detectors.
We leverage high-level semantic information to efficiently generate transferable adversarial examples.
Our proposed method achieves state-of-the-art performance and transferability.
arXiv Detail & Related papers (2021-06-03T06:38:15Z)
- Robust and Accurate Object Detection via Adversarial Learning [111.36192453882195]
This work augments the fine-tuning stage for object detectors by exploring adversarial examples.
Our approach boosts the performance of state-of-the-art EfficientDets by +1.1 mAP on the object detection benchmark.
arXiv Detail & Related papers (2021-03-23T19:45:26Z)
- The Translucent Patch: A Physical and Universal Attack on Object Detectors [48.31712758860241]
We propose a contactless physical patch to fool state-of-the-art object detectors.
The primary goal of our patch is to hide all instances of a selected target class.
We show that our patch was able to prevent the detection of 42.27% of all stop sign instances.
arXiv Detail & Related papers (2020-12-23T07:47:13Z)
- Detection as Regression: Certified Object Detection by Median Smoothing [50.89591634725045]
This work is motivated by recent progress on certified classification by randomized smoothing.
We obtain the first model-agnostic, training-free, and certified defense for object detection against $\ell$-bounded attacks.
arXiv Detail & Related papers (2020-07-07T18:40:19Z)
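The median-smoothing idea summarized in the last entry can be illustrated with a minimal sketch: evaluate a base regressor (e.g., one bounding-box coordinate) under Gaussian input noise and return the median output. The base regressor and parameters here are hypothetical, and the percentile-based certification bounds from that paper are omitted.

```python
import numpy as np

def median_smooth(regressor, x, sigma=0.25, n=1000, seed=0):
    """Median smoothing (sketch): run the base regressor on n Gaussian
    perturbations of the input x and return the median of its outputs.
    In the cited paper, percentiles of these same samples bound how much
    the smoothed output can move under l2-bounded input perturbations."""
    rng = np.random.default_rng(seed)
    noisy = x + sigma * rng.standard_normal((n,) + np.shape(x))
    outputs = np.array([regressor(z) for z in noisy])
    return float(np.median(outputs))
```

The median (rather than the mean) is what makes the smoothed output stable: a bounded fraction of extreme responses among the noisy samples cannot move it far.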
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.