Adversarial Training of Self-supervised Monocular Depth Estimation
against Physical-World Attacks
- URL: http://arxiv.org/abs/2301.13487v3
- Date: Sun, 2 Apr 2023 09:49:17 GMT
- Title: Adversarial Training of Self-supervised Monocular Depth Estimation
against Physical-World Attacks
- Authors: Zhiyuan Cheng, James Liang, Guanhong Tao, Dongfang Liu, Xiangyu Zhang
- Abstract summary: We propose a novel adversarial training method for self-supervised MDE models based on view synthesis, without using ground-truth depth.
We improve adversarial robustness against physical-world attacks by using L0-norm-bounded perturbations in training.
Results on two representative MDE networks show that we achieve better robustness against various adversarial attacks with nearly no benign performance degradation.
- Score: 17.28712660119884
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Monocular Depth Estimation (MDE) is a critical component in applications such
as autonomous driving. There are various attacks against MDE networks. These
attacks, especially the physical ones, pose a great threat to the security of
such systems. Traditional adversarial training methods require ground-truth
labels and hence cannot be directly applied to self-supervised MDE, which has
no ground-truth depth. Some self-supervised model hardening techniques (e.g.,
contrastive learning) ignore the domain knowledge of MDE and can hardly achieve
optimal performance. In this work, we propose a novel adversarial training
method for self-supervised MDE models based on view synthesis without using
ground-truth depth. We improve adversarial robustness against physical-world
attacks using L0-norm-bounded perturbations in training. We compare our method
with supervised-learning-based and contrastive-learning-based methods tailored
for MDE. Results on two representative MDE networks show that we
achieve better robustness against various adversarial attacks with nearly no
benign performance degradation.
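The abstract describes the approach only at a high level. As a loose illustration (not the authors' code), the sketch below shows how adversarial training can reuse the self-supervised view-synthesis loss: an inner loop crafts an L0-sparse perturbation that maximizes the reconstruction error, and the outer step trains on it, all without ground-truth depth. The depth network, `warp_to_target`, the simple top-k L0 projection, and every hyperparameter here are illustrative assumptions.

```python
import torch

def photometric_loss(target, recon):
    # Simple L1 photometric error; the paper's self-supervised loss
    # (typically SSIM + L1 in MDE pipelines) may differ.
    return (target - recon).abs().mean()

def l0_project(delta, k):
    # Keep the k pixels with the largest perturbation magnitude and zero
    # the rest: one straightforward way to enforce an L0 (sparsity) budget.
    b, c, h, w = delta.shape
    mag = delta.abs().sum(dim=1).view(b, -1)      # per-pixel magnitude
    kth = mag.topk(k, dim=1).values[:, -1:]       # k-th largest magnitude
    mask = (mag >= kth).float().view(b, 1, h, w)
    return delta * mask

def adv_train_step(depth_net, warp_to_target, target, source,
                   k=1000, steps=5, alpha=0.05):
    # Inner maximization: craft an L0-sparse perturbation of the target
    # view that maximizes the view-synthesis (reconstruction) error.
    delta = torch.zeros_like(target, requires_grad=True)
    for _ in range(steps):
        depth = depth_net(target + delta)
        recon = warp_to_target(source, depth)     # re-synthesize target view
        loss = photometric_loss(target, recon)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()
            delta.copy_(l0_project(delta, k))
    # Outer minimization: train on the perturbed input with the same
    # self-supervised objective -- no ground-truth depth is needed.
    depth = depth_net(target + delta.detach())
    recon = warp_to_target(source, depth)
    return photometric_loss(target, recon)
```

In a real pipeline, `warp_to_target` would reproject the source frame using the predicted depth together with camera intrinsics and relative pose, as in standard self-supervised MDE training, and the sparse perturbation could be confined to a patch region to mimic physical-world attacks.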
Related papers
- Improving Domain Generalization in Self-supervised Monocular Depth Estimation via Stabilized Adversarial Training [61.35809887986553]
We propose a general adversarial training framework, named Stabilized Conflict-optimization Adversarial Training (SCAT)
SCAT integrates adversarial data augmentation into self-supervised MDE methods to achieve a balance between stability and generalization.
Experiments on five benchmarks demonstrate that SCAT can achieve state-of-the-art performance and significantly improve the generalization capability of existing self-supervised MDE methods.
arXiv Detail & Related papers (2024-11-04T15:06:57Z) - Self-supervised Adversarial Training of Monocular Depth Estimation against Physical-World Attacks [36.16206095819624]
Monocular Depth Estimation plays a vital role in applications such as autonomous driving.
Traditional adversarial training methods, which require ground-truth labels, are not directly applicable to MDE models that lack ground-truth depth.
We introduce a novel self-supervised adversarial training approach for MDE models, leveraging view synthesis without the need for ground-truth depth.
arXiv Detail & Related papers (2024-06-09T17:02:28Z) - To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Unsafe Images ... For Now [22.75295925610285]
Diffusion models (DMs) have revolutionized the generation of realistic and complex images.
DMs also introduce potential safety hazards, such as producing harmful content and infringing data copyrights.
Despite the development of safety-driven unlearning techniques, doubts about their efficacy persist.
arXiv Detail & Related papers (2023-10-18T10:36:34Z) - SAAM: Stealthy Adversarial Attack on Monocular Depth Estimation [5.476763798688862]
We propose a novel Stealthy Adversarial Attack on MDE (SAAM).
It compromises MDE by either corrupting the estimated distance or causing an object to seamlessly blend into its surroundings.
We believe that this work sheds light on the threat of adversarial attacks in the context of MDE on edge devices.
arXiv Detail & Related papers (2023-08-06T13:29:42Z) - RobustPdM: Designing Robust Predictive Maintenance against Adversarial
Attacks [0.0]
We show that adversarial attacks can cause a severe defect (up to 11X) in Remaining Useful Life (RUL) prediction, exceeding the effectiveness of state-of-the-art PdM attacks by 3X.
We also present a novel approximate adversarial training method to defend against adversarial attacks.
arXiv Detail & Related papers (2023-01-25T20:49:12Z) - Exploring Adversarially Robust Training for Unsupervised Domain
Adaptation [71.94264837503135]
Unsupervised Domain Adaptation (UDA) methods aim to transfer knowledge from a labeled source domain to an unlabeled target domain.
This paper explores how to enhance robustness on unlabeled data via adversarial training (AT) while learning domain-invariant features for UDA.
We propose a novel Adversarially Robust Training method for UDA accordingly, referred to as ARTUDA.
arXiv Detail & Related papers (2022-02-18T17:05:19Z) - Self-Progressing Robust Training [146.8337017922058]
Current robust training methods such as adversarial training explicitly use an "attack" to generate adversarial examples.
We propose a new framework called SPROUT, self-progressing robust training.
Our results shed new light on scalable, effective and attack-independent robust training methods.
arXiv Detail & Related papers (2020-12-22T00:45:24Z) - Stochastic Security: Adversarial Defense Using Long-Run Dynamics of
Energy-Based Models [82.03536496686763]
The vulnerability of deep networks to adversarial attacks is a central problem for deep learning from the perspective of both cognition and security.
We focus on defending naturally-trained classifiers using Markov Chain Monte Carlo (MCMC) sampling with an Energy-Based Model (EBM) for adversarial purification.
Our contributions are 1) an improved method for training EBMs with realistic long-run MCMC samples, 2) an Expectation-Over-Transformation (EOT) defense that resolves theoretical ambiguities for such defenses, and 3) state-of-the-art adversarial defense for naturally-trained classifiers and a competitive defense overall (a hedged sketch of the purification loop appears after this list).
arXiv Detail & Related papers (2020-05-27T17:53:36Z) - Adversarial Distributional Training for Robust Deep Learning [53.300984501078126]
Adversarial training (AT) is among the most effective techniques to improve model robustness by augmenting training data with adversarial examples.
Most existing AT methods adopt a specific attack to craft adversarial examples, leading to unreliable robustness against other unseen attacks.
In this paper, we introduce adversarial distributional training (ADT), a novel framework for learning robust models.
arXiv Detail & Related papers (2020-02-14T12:36:59Z)
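For the "Stochastic Security" entry above, a minimal sketch may help make the purification idea concrete. This is a hedged illustration, not the paper's implementation: `energy_net`, `classifier`, the step counts, and the step sizes are all placeholder assumptions.

```python
import torch

def langevin_purify(energy_net, x, steps=200, step_size=0.01,
                    noise_scale=0.005):
    # Overdamped Langevin dynamics: x <- x - (step/2) * dE/dx + noise.
    # Long-run MCMC pulls an (adversarially perturbed) input back toward
    # the EBM's learned data manifold before classification.
    x = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        energy = energy_net(x).sum()
        grad, = torch.autograd.grad(energy, x)
        with torch.no_grad():
            x -= 0.5 * step_size * grad
            x += noise_scale * torch.randn_like(x)
    return x.detach()

def eot_predict(classifier, energy_net, x, n_samples=8):
    # Expectation-Over-Transformation: average class probabilities over
    # several stochastic purification runs to get a stable prediction.
    probs = torch.stack([
        classifier(langevin_purify(energy_net, x)).softmax(dim=-1)
        for _ in range(n_samples)
    ])
    return probs.mean(dim=0)
```

Averaging over several stochastic purification runs is what makes the EOT prediction well-defined despite the randomness of the Markov chain.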