Targeted Adversarial Perturbations for Monocular Depth Prediction
- URL: http://arxiv.org/abs/2006.08602v2
- Date: Tue, 8 Dec 2020 22:28:46 GMT
- Title: Targeted Adversarial Perturbations for Monocular Depth Prediction
- Authors: Alex Wong, Safa Cicek, Stefano Soatto
- Abstract summary: We study the effect of adversarial perturbations on the task of monocular depth prediction.
Specifically, we explore the ability of small, imperceptible additive perturbations to selectively alter the perceived geometry of the scene.
We show that such perturbations can not only globally re-scale the predicted distances from the camera, but also alter the prediction to match a different target scene.
- Score: 74.61708733460927
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the effect of adversarial perturbations on the task of monocular
depth prediction. Specifically, we explore the ability of small, imperceptible
additive perturbations to selectively alter the perceived geometry of the
scene. We show that such perturbations can not only globally re-scale the
predicted distances from the camera, but also alter the prediction to match a
different target scene. We also show that, when given semantic or instance
information, perturbations can fool the network to alter the depth of specific
categories or instances in the scene, and even remove them while preserving the
rest of the scene. To understand the effect of targeted perturbations, we
conduct experiments on state-of-the-art monocular depth prediction methods. Our
experiments reveal vulnerabilities in monocular depth prediction networks, and
shed light on the biases and context learned by them.
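A minimal sketch of such a targeted attack, assuming a differentiable PyTorch depth predictor `depth_net` and a target depth map `target_depth` (both hypothetical placeholders; the loss, step size, and iteration budget are illustrative, not the paper's exact recipe):
```python
import torch

def targeted_depth_perturbation(depth_net, image, target_depth,
                                eps=2.0 / 255.0, step=0.5 / 255.0, iters=40):
    """Projected-gradient sketch: find a small additive perturbation that
    pushes the predicted depth map toward a chosen target depth map."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        pred = depth_net(image + delta)                        # predicted depth
        loss = torch.nn.functional.l1_loss(pred, target_depth)
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()                  # descend toward the target
            delta.clamp_(-eps, eps)                            # keep the perturbation small
            delta.grad.zero_()
    return delta.detach()
```
The same loop covers the other targets discussed above (global re-scaling, altering or removing a category or instance) by swapping in the corresponding target depth map.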
Related papers
- Adversarial Attacks on Monocular Pose Estimation [13.7258515433446]
We study the relation between adversarial perturbations targeting monocular depth and pose estimation networks.
Our experiments show how the generated perturbations lead to notable errors in relative rotation and translation predictions.
arXiv Detail & Related papers (2022-07-14T16:12:31Z)
- Dimensions of Motion: Learning to Predict a Subspace of Optical Flow from a Single Image [50.9686256513627]
We introduce the problem of predicting, from a single video frame, a low-dimensional subspace of optical flow which includes the actual instantaneous optical flow.
We show how several natural scene assumptions allow us to identify an appropriate flow subspace via a set of basis flow fields parameterized by disparity.
This provides a new approach to learning these tasks in an unsupervised fashion using monocular input video without requiring camera intrinsics or poses.
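As a rough illustration of a disparity-parameterized flow subspace, restricted to the purely translational part (the paper's construction also handles other motion components), a sketch assuming pinhole intrinsics `fx, fy, cx, cy`; sign conventions may differ:
```python
import numpy as np

def translational_flow_basis(disparity, fx, fy, cx, cy):
    """For a purely translating camera, the instantaneous flow at each pixel
    lies in a 3-D subspace spanned by disparity-scaled basis fields, one per
    translation axis. Returns an array of shape (3, H, W, 2)."""
    H, W = disparity.shape
    x, y = np.meshgrid(np.arange(W) - cx, np.arange(H) - cy)  # pixel offsets
    basis = np.stack([
        np.stack([-fx * np.ones_like(x), np.zeros_like(x)], axis=-1),  # t_x
        np.stack([np.zeros_like(y), -fy * np.ones_like(y)], axis=-1),  # t_y
        np.stack([x, y], axis=-1),                                     # t_z
    ])
    # flow for camera translation t is sum_i t[i] * basis[i]
    return disparity[None, :, :, None] * basis
```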
arXiv Detail & Related papers (2021-12-02T18:52:54Z)
- Unsupervised Monocular Depth Perception: Focusing on Moving Objects [5.489557739480878]
In this paper, we show that deliberately manipulating photometric errors can handle these difficulties more effectively.
We first propose an outlier masking technique that considers the occluded or dynamic pixels as statistical outliers in the photometric error map.
With the outlier masking, the network learns more accurate depths for objects that move in the opposite direction to the camera.
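A minimal sketch of such a mask, assuming a per-pixel photometric error map from a PyTorch pipeline and a simple percentile rule standing in for the paper's statistical criterion:
```python
import torch

def outlier_mask(photometric_error, quantile=0.95):
    """Treat pixels whose photometric error falls in the top tail as outliers
    (e.g. occluded or dynamic pixels) and exclude them from the loss."""
    threshold = torch.quantile(photometric_error.flatten(), quantile)
    return (photometric_error < threshold).float()

# usage sketch: loss = (outlier_mask(error) * error).mean()
```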
arXiv Detail & Related papers (2021-08-30T08:45:02Z)
- Attack to Fool and Explain Deep Networks [59.97135687719244]
We counter-argue by providing evidence of human-meaningful patterns in adversarial perturbations.
Our major contribution is a novel pragmatic adversarial attack that is subsequently transformed into a tool to interpret the visual models.
arXiv Detail & Related papers (2021-06-20T03:07:36Z)
- Adversarial Examples Detection beyond Image Space [88.7651422751216]
We find a consistent relationship between perturbations and prediction confidence, which guides us to detect few-perturbation attacks through prediction confidence.
We propose a method beyond image space by a two-stream architecture, in which the image stream focuses on the pixel artifacts and the gradient stream copes with the confidence artifacts.
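A generic skeleton of such a two-stream detector, assuming the gradient stream takes the input gradient of the classifier's confidence (the layer choices here are illustrative, not the paper's architecture):
```python
import torch
import torch.nn as nn

class TwoStreamDetector(nn.Module):
    """One branch inspects the raw image (pixel artifacts), the other a
    gradient/confidence map (confidence artifacts); their features are fused
    for a clean-vs-adversarial decision."""
    def __init__(self):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.image_stream = branch()
        self.gradient_stream = branch()
        self.head = nn.Linear(32, 2)  # clean vs. adversarial

    def forward(self, image, input_gradient):
        features = torch.cat([self.image_stream(image),
                              self.gradient_stream(input_gradient)], dim=1)
        return self.head(features)
```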
arXiv Detail & Related papers (2021-02-23T09:55:03Z)
- Adversarial Patch Attacks on Monocular Depth Estimation Networks [7.089737454146505]
We propose a method of adversarial patch attack on monocular depth estimation.
We generate artificial patterns that can fool the target methods into estimating an incorrect depth for the regions where the patterns are placed.
Our method can be implemented in the real world by physically placing the printed patterns in real scenes.
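A gradient-based sketch of such a patch attack, assuming a differentiable depth network `depth_net` and a target of inflating the region's depth by a fixed factor (real-world deployment would also need printability and viewpoint augmentation, which are omitted here):
```python
import torch
import torch.nn.functional as F

def train_depth_patch(depth_net, images, region, target_scale=2.0,
                      iters=200, lr=1e-2):
    """Optimize a rectangular patch so that, when pasted into `region`
    (y0, y1, x0, x1), the predicted depth inside that region is pushed
    toward `target_scale` times the clean prediction."""
    y0, y1, x0, x1 = region
    _, _, H, W = images[0].shape
    patch = torch.rand(1, 3, y1 - y0, x1 - x0, requires_grad=True)
    mask = torch.zeros(1, 1, H, W)
    mask[:, :, y0:y1, x0:x1] = 1.0
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(iters):
        for image in images:
            # paste the patch into the scene with a differentiable composite
            canvas = F.pad(patch.clamp(0, 1), (x0, W - x1, y0, H - y1))
            attacked = image * (1 - mask) + canvas * mask
            with torch.no_grad():
                clean = depth_net(image)
            loss = F.l1_loss(depth_net(attacked) * mask,
                             target_scale * clean * mask)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return patch.detach().clamp(0, 1)
```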
arXiv Detail & Related papers (2020-10-06T22:56:22Z)
- SAFENet: Self-Supervised Monocular Depth Estimation with Semantic-Aware Feature Extraction [27.750031877854717]
We propose SAFENet, which is designed to leverage semantic information to overcome the limitations of the photometric loss.
Our key idea is to exploit semantic-aware depth features that integrate the semantic and geometric knowledge.
Experiments on the KITTI dataset demonstrate that our method competes with or even outperforms state-of-the-art methods.
arXiv Detail & Related papers (2020-10-06T17:22:25Z)
- Adaptive confidence thresholding for monocular depth estimation [83.06265443599521]
We propose a new approach to leverage pseudo ground truth depth maps of stereo images generated from self-supervised stereo matching methods.
A confidence map for the pseudo ground truth depth is estimated to mitigate the performance degradation caused by inaccurate pseudo depth maps.
Experimental results demonstrate superior performance to state-of-the-art monocular depth estimation methods.
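A minimal, fixed-threshold version of this idea (the paper adapts the threshold rather than fixing it; all tensor names are placeholders):
```python
import torch

def confident_pseudo_depth_loss(pred_depth, pseudo_depth, confidence, tau=0.8):
    """Supervise depth only where the stereo-derived pseudo ground truth is
    deemed reliable; low-confidence pixels are dropped from the loss."""
    mask = (confidence > tau).float()
    per_pixel = torch.abs(pred_depth - pseudo_depth) * mask
    return per_pixel.sum() / mask.sum().clamp(min=1.0)
```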
arXiv Detail & Related papers (2020-09-27T13:26:16Z)
- Calibrating Self-supervised Monocular Depth Estimation [77.77696851397539]
In recent years, many methods have demonstrated the ability of neural networks to learn depth and pose changes from a sequence of images, using only self-supervision as the training signal.
We show that, by incorporating prior information about the camera configuration and the environment, we can remove the scale ambiguity and predict depth directly, still using the self-supervised formulation and without relying on any additional sensors.
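One common way to exploit such a prior, sketched here with NumPy as a post-hoc rescaling (the paper instead builds the priors into the self-supervised formulation), is to match the camera's estimated height above detected ground pixels to its known mounting height; `ground_mask` and the intrinsics are assumed inputs:
```python
import numpy as np

def rescale_depth_by_camera_height(depth, ground_mask, cam_height_m, fy, cy):
    """Recover metric scale for a scale-ambiguous depth map using the known
    camera mounting height above the ground plane."""
    v, u = np.nonzero(ground_mask)            # pixel rows/cols on the ground
    y_cam = (v - cy) * depth[v, u] / fy       # back-projected height (y-down camera)
    est_height = np.median(y_cam)             # camera height in the map's arbitrary scale
    return depth * (cam_height_m / est_height)
```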
arXiv Detail & Related papers (2020-09-16T14:35:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.