Self-supervised Adversarial Training of Monocular Depth Estimation against Physical-World Attacks
- URL: http://arxiv.org/abs/2406.05857v1
- Date: Sun, 9 Jun 2024 17:02:28 GMT
- Title: Self-supervised Adversarial Training of Monocular Depth Estimation against Physical-World Attacks
- Authors: Zhiyuan Cheng, Cheng Han, James Liang, Qifan Wang, Xiangyu Zhang, Dongfang Liu
- Abstract summary: Monocular Depth Estimation plays a vital role in applications such as autonomous driving.
Traditional adversarial training methods, which require ground-truth labels, are not directly applicable to MDE models that lack ground-truth depth.
We introduce a novel self-supervised adversarial training approach for MDE models, leveraging view synthesis without the need for ground-truth depth.
- Score: 36.16206095819624
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Monocular Depth Estimation (MDE) plays a vital role in applications such as autonomous driving. However, various attacks target MDE models, with physical attacks posing significant threats to system security. Traditional adversarial training methods, which require ground-truth labels, are not directly applicable to MDE models that lack ground-truth depth. Some self-supervised model hardening techniques (e.g., contrastive learning) overlook the domain knowledge of MDE, resulting in suboptimal performance. In this work, we introduce a novel self-supervised adversarial training approach for MDE models, leveraging view synthesis without the need for ground-truth depth. We enhance adversarial robustness against real-world attacks by incorporating L_0-norm-bounded perturbation during training. We evaluate our method against supervised learning-based and contrastive learning-based approaches specifically designed for MDE. Our experiments with two representative MDE networks demonstrate improved robustness against various adversarial attacks, with minimal impact on benign performance.
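As a rough sketch of the approach described in the abstract (our reading, not the paper's code): the adversary crafts an L_0-bounded perturbation that maximizes the self-supervised view-synthesis (photometric) loss, so no ground-truth depth is needed. The helpers `warp_to_reference` and `photometric_loss`, and all hyperparameters, are hypothetical stand-ins.
```python
import torch

def l0_perturbation(depth_net, img, ref_img, pose, K,
                    k_pixels=1000, step=0.05, n_steps=10):
    """Craft a sparse (L_0-bounded) perturbation that maximizes photometric error."""
    delta = torch.zeros_like(img, requires_grad=True)
    n = img.shape[-2] * img.shape[-1]                  # pixels per image
    for _ in range(n_steps):
        depth = depth_net(img + delta)
        # Re-synthesize the current view from the reference frame using the
        # predicted depth: accurate depth implies low photometric error.
        synth = warp_to_reference(ref_img, depth, pose, K)
        loss = photometric_loss(synth, img)            # adversary maximizes this
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()
            # L_0 projection: zero out all but the k_pixels largest-magnitude pixels.
            mag = delta.abs().sum(dim=1, keepdim=True)               # (B,1,H,W)
            thresh = mag.flatten(1).kthvalue(n - k_pixels, dim=1).values
            delta *= (mag > thresh.view(-1, 1, 1, 1)).float()
            # Keep the perturbed image in a valid range (assumes inputs in [0,1]).
            delta.add_(img).clamp_(0.0, 1.0).sub_(img)
        delta.grad = None
    return delta.detach()

# The training step would then minimize the same photometric loss on
# img + delta, optionally mixed with the benign loss to preserve clean accuracy.
```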
Related papers
- Improving Domain Generalization in Self-supervised Monocular Depth Estimation via Stabilized Adversarial Training [61.35809887986553]
We propose a general adversarial training framework named Stabilized Conflict-optimization Adversarial Training (SCAT).
SCAT integrates adversarial data augmentation into self-supervised MDE methods to achieve a balance between stability and generalization.
Experiments on five benchmarks demonstrate that SCAT achieves state-of-the-art performance and significantly improves the generalization capability of existing self-supervised MDE methods (see the sketch below).
arXiv Detail & Related papers (2024-11-04T15:06:57Z)
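As a generic illustration of the stability/generalization balance SCAT builds on, here is one training step mixing benign and adversarial self-supervised losses; SCAT's specific conflict-optimization stabilizer is not reproduced, and `self_sup_loss`, `make_adv` (any attack, e.g. PGD), and `alpha` are hypothetical.
```python
import torch

def adv_augmented_step(model, optimizer, batch, alpha=0.5):
    adv_batch = make_adv(model, batch)               # adversarial data augmentation
    loss_clean = self_sup_loss(model, batch)         # stability: benign objective
    loss_adv = self_sup_loss(model, adv_batch)       # generalization: robust objective
    loss = (1 - alpha) * loss_clean + alpha * loss_adv
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```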
- Black-box Adversarial Attacks against Dense Retrieval Models: A Multi-view Contrastive Learning Method [115.29382166356478]
We introduce the adversarial retrieval attack (AREA) task.
It is meant to trick DR models into retrieving a target document that is outside the initial set of candidate documents retrieved by the DR model.
We find that the promising results previously reported for attacking NRMs do not generalize to DR models.
We propose to formalize attacks on DR models as a contrastive learning problem in a multi-view representation space.
arXiv Detail & Related papers (2023-08-19T00:24:59Z)
- MIXPGD: Hybrid Adversarial Training for Speech Recognition Systems [18.01556863687433]
We propose the mixPGD adversarial training method to improve the robustness of models for ASR systems.
In standard adversarial training, adversarial samples are generated by leveraging supervised or unsupervised methods.
Our method merges the capabilities of supervised and unsupervised approaches to generate new adversarial samples that improve model robustness (see the sketch below).
arXiv Detail & Related papers (2023-03-10T07:52:28Z)
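As a rough sketch of the hybrid generation step described above (our reading, not the paper's code): the attack gradient blends a supervised loss against transcripts with an unsupervised feature-space loss. `asr_model.encode`, `ctc_loss_fn`, `feat_dist`, and all hyperparameters are hypothetical, and the exact losses in the paper may differ.
```python
import torch

def mixpgd_example(asr_model, audio, transcript, eps=2e-3, step=5e-4,
                   n_steps=10, beta=0.5):
    """Blend supervised and unsupervised gradients to craft one adversarial sample."""
    delta = torch.zeros_like(audio, requires_grad=True)
    clean_feats = asr_model.encode(audio).detach()     # frozen reference features
    for _ in range(n_steps):
        adv = audio + delta
        loss = (beta * ctc_loss_fn(asr_model(adv), transcript)                 # supervised
                + (1 - beta) * feat_dist(asr_model.encode(adv), clean_feats))  # unsupervised
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()
            delta.clamp_(-eps, eps)                    # PGD-style L_inf ball
        delta.grad = None
    return (audio + delta).detach()
```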
- Adversarial Training of Self-supervised Monocular Depth Estimation against Physical-World Attacks [17.28712660119884]
We propose a novel adversarial training method for self-supervised MDE models based on view synthesis, without using ground-truth depth.
We improve adversarial robustness against physical-world attacks by using L_0-norm-bounded perturbations in training.
Results on two representative MDE networks show that we achieve better robustness against various adversarial attacks with nearly no benign performance degradation.
arXiv Detail & Related papers (2023-01-31T09:12:16Z)
- RobustPdM: Designing Robust Predictive Maintenance against Adversarial Attacks [0.0]
We show that adversarial attacks can cause severe degradation (up to 11X) in RUL prediction, exceeding the effectiveness of state-of-the-art PdM attacks by 3X.
We also present a novel approximate adversarial training method to defend against adversarial attacks.
arXiv Detail & Related papers (2023-01-25T20:49:12Z)
- Self-Ensemble Adversarial Training for Improved Robustness [14.244311026737666]
Among all defense methods, adversarial training is the strongest strategy against various adversarial attacks.
Recent works mainly focus on developing new loss functions or regularizers, attempting to find the unique optimal point in the weight space.
We devise a simple but powerful Self-Ensemble Adversarial Training (SEAT) method that yields a robust classifier by averaging the weights of historical models (see the sketch below).
arXiv Detail & Related papers (2022-03-18T01:12:18Z)
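A rough sketch of the SEAT idea summarized above: maintain a running average of the weights visited during adversarial training and use the averaged model at test time. Using an exponential moving average with this decay is an assumption; the paper's exact averaging scheme may differ, and buffers such as BatchNorm statistics are not handled here.
```python
import copy
import torch

@torch.no_grad()
def update_history_average(avg_model, model, decay=0.999):
    # Exponential moving average over the weights of history models.
    for p_avg, p in zip(avg_model.parameters(), model.parameters()):
        p_avg.mul_(decay).add_(p, alpha=1 - decay)

# Usage inside a standard adversarial training loop:
#   avg_model = copy.deepcopy(model)        # the "self-ensemble"
#   for batch in loader:
#       ...one adversarial training step on `model`...
#       update_history_average(avg_model, model)
#   evaluate(avg_model)                     # the robust classifier
```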
- Exploring Adversarially Robust Training for Unsupervised Domain Adaptation [71.94264837503135]
Unsupervised Domain Adaptation (UDA) methods aim to transfer knowledge from a labeled source domain to an unlabeled target domain.
This paper explores how to enhance robustness on the unlabeled target data via adversarial training (AT) while learning domain-invariant features for UDA.
We propose a novel Adversarially Robust Training method for UDA accordingly, referred to as ARTUDA.
arXiv Detail & Related papers (2022-02-18T17:05:19Z)
- Boosting Adversarial Training with Hypersphere Embedding [53.75693100495097]
Adversarial training (AT) is one of the most effective defenses against adversarial attacks for deep learning models.
In this work, we advocate incorporating the hypersphere embedding mechanism into the AT procedure.
We validate our methods under a wide range of adversarial attacks on the CIFAR-10 and ImageNet datasets (see the sketch below).
arXiv Detail & Related papers (2020-02-20T08:42:29Z)
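A minimal sketch of the hypersphere-embedding mechanism summarized above: features and classifier weights are projected onto the unit sphere, so adversarial training operates on angular (cosine) logits. The scale factor `s` and the use of plain cross-entropy on the scaled logits are assumptions about the details.
```python
import torch
import torch.nn.functional as F

class HypersphereHead(torch.nn.Module):
    def __init__(self, feat_dim, n_classes, s=15.0):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(n_classes, feat_dim))
        self.s = s

    def forward(self, feats):
        # Cosine similarity between normalized features and class weights.
        logits = F.linear(F.normalize(feats, dim=1),
                          F.normalize(self.weight, dim=1))
        return self.s * logits  # fed to cross-entropy during AT
```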
- Adversarial Distributional Training for Robust Deep Learning [53.300984501078126]
Adversarial training (AT) is among the most effective techniques to improve model robustness by augmenting training data with adversarial examples.
Most existing AT methods adopt a specific attack to craft adversarial examples, leading to the unreliable robustness against other unseen attacks.
In this paper, we introduce adversarial distributional training (ADT), a novel framework for learning robust models (see the objective sketched below).
arXiv Detail & Related papers (2020-02-14T12:36:59Z)
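As a compact statement of the distributional view (our paraphrase; the exact regularizer in the paper may differ): instead of a single worst-case perturbation delta, ADT learns a distribution p over perturbations,
  min_theta E_{(x,y)} [ max_p E_{delta ~ p} L(f_theta(x + delta), y) + lambda * H(p) ]
where the entropy term H(p) keeps the learned perturbation distribution diverse, countering the single-attack overfitting noted above.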
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.