DepthVanish: Optimizing Adversarial Interval Structures for Stereo-Depth-Invisible Patches
- URL: http://arxiv.org/abs/2506.16690v1
- Date: Fri, 20 Jun 2025 02:22:21 GMT
- Title: DepthVanish: Optimizing Adversarial Interval Structures for Stereo-Depth-Invisible Patches
- Authors: Yun Xing, Yue Cao, Nhat Chung, Jie Zhang, Ivor Tsang, Ming-Ming Cheng, Yang Liu, Lei Ma, Qing Guo
- Abstract summary: Adversarial attacks against stereo depth estimation can help reveal vulnerabilities before deployment. We develop a novel stereo depth attack that jointly optimizes both the striped structure and texture elements. Our patch can also attack commercial RGB-D cameras (Intel RealSense) in real-world conditions.
- Score: 52.324773418994575
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Stereo depth estimation is a critical task in autonomous driving and robotics, where inaccuracies (such as misidentifying nearby objects as distant) can lead to dangerous situations. Adversarial attacks against stereo depth estimation can help reveal vulnerabilities before deployment. Previous work has shown that repeating optimized textures can effectively mislead stereo depth estimation in digital settings. However, our research reveals that these naively repeated texture structures perform poorly in physical-world implementations, i.e., when deployed as patches, limiting their practical utility for testing stereo depth estimation systems. In this work, for the first time, we discover that introducing regular intervals between repeated textures, creating a striped structure, significantly enhances patch attack effectiveness. Through extensive experimentation, we analyze how variations of this structure influence attack performance. Based on these insights, we develop a novel stereo depth attack that jointly optimizes both the striped structure and the texture elements. Our generated adversarial patches can be inserted into any scene and successfully attack state-of-the-art stereo depth estimation methods, i.e., RAFT-Stereo and STTR. Most critically, our patches can also attack commercial RGB-D cameras (Intel RealSense) in real-world conditions, demonstrating their practical relevance for security assessment of stereo systems.
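The abstract's two key ingredients, a striped layout with regular blank intervals between repeated texture tiles and joint optimization of structure and texture, can be illustrated with a short sketch. The code below is a minimal, hypothetical reconstruction, not the authors' released implementation: `stereo_model` (assumed to map a rectified left/right image pair to a disparity map), the paste location, the number of tile repeats, and the zero-disparity objective are all assumptions. Because depth in a rectified stereo pair is Z = f·B/d, driving the predicted disparity d toward zero in the patched region makes it appear far away.

```python
# Illustrative sketch only (not the authors' released code). It shows:
# (1) a striped patch built by repeating an optimizable texture tile with a
#     blank interval between repetitions, and
# (2) gradient-based optimization of the texture, with the discrete stripe
#     interval handled by a simple outer grid search.
# `stereo_model`, image shapes, and the paste location are assumptions.
import torch

def make_striped_patch(tile: torch.Tensor, n_repeats: int, interval_px: int,
                       gap_value: float = 0.5) -> torch.Tensor:
    """Repeat a (C, H, W) texture tile vertically, inserting `interval_px`
    rows of a constant colour between copies -- the striped structure."""
    c, _, w = tile.shape
    gap = torch.full((c, interval_px, w), gap_value, device=tile.device)
    parts = []
    for i in range(n_repeats):
        parts.append(tile)
        if i < n_repeats - 1:
            parts.append(gap)
    return torch.cat(parts, dim=1)

def texture_step(raw_tile, interval_px, left, right, xy, stereo_model, lr=1e-2):
    """One signed-gradient step on the texture. Since Z = f*B/d in a rectified
    pair, pushing predicted disparity d toward zero makes the patched region
    appear distant ("depth vanishes")."""
    raw_tile = raw_tile.detach().requires_grad_(True)
    patch = make_striped_patch(torch.sigmoid(raw_tile), 4, interval_px)
    x, y = xy
    _, ph, pw = patch.shape
    left_adv, right_adv = left.clone(), right.clone()
    # Paste the identical patch at the same coordinates in both views, so a
    # correct match for the patch would imply zero disparity.
    left_adv[:, y:y + ph, x:x + pw] = patch
    right_adv[:, y:y + ph, x:x + pw] = patch
    disp = stereo_model(left_adv.unsqueeze(0), right_adv.unsqueeze(0))
    loss = disp[..., y:y + ph, x:x + pw].abs().mean()  # drive disparity to 0
    loss.backward()
    return (raw_tile - lr * raw_tile.grad.sign()).detach(), loss.item()

def search_interval(raw_tile, left, right, xy, stereo_model,
                    intervals=(0, 4, 8, 16, 32), steps=200):
    """Outer grid search over the stripe interval; inner texture optimization.
    Returns the (loss, interval, tile) triple with the lowest residual disparity."""
    best = None
    for interval_px in intervals:
        tile, loss = raw_tile.clone(), float("inf")
        for _ in range(steps):
            tile, loss = texture_step(tile, interval_px, left, right, xy, stereo_model)
        if best is None or loss < best[0]:
            best = (loss, interval_px, tile)
    return best
```

The discrete stripe interval is grid-searched here while the texture is optimized by signed gradient descent; the paper's joint optimization is more sophisticated, but the sketch captures the core mechanism the abstract describes: identical, gapped textures pasted at the same coordinates in both views can collapse the predicted disparity.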
Related papers
- HazyDet: Open-Source Benchmark for Drone-View Object Detection with Depth-Cues in Hazy Scenes [54.24350833692194]
HazyDet is the first large-scale benchmark specifically designed for drone-view object detection in hazy conditions. We propose the Depth-Conditioned Detector (DeCoDet) to address the severe visual degradation induced by haze. HazyDet provides a challenging and realistic testbed for advancing detection algorithms.
arXiv Detail & Related papers (2024-09-30T00:11:40Z)
- Stereo-Depth Fusion through Virtual Pattern Projection [37.519762078762575]
This paper presents a novel general-purpose stereo and depth data fusion paradigm.
It mimics the active stereo principle by replacing the unreliable physical pattern projector with a depth sensor.
It works by projecting virtual patterns consistent with the scene geometry onto the left and right images acquired by a conventional stereo camera.
arXiv Detail & Related papers (2024-06-06T17:59:58Z)
- Dusk Till Dawn: Self-supervised Nighttime Stereo Depth Estimation using Visual Foundation Models [16.792458193160407]
Self-supervised depth estimation algorithms rely heavily on frame-warping relationships.
We introduce an algorithm designed to achieve accurate self-supervised stereo depth estimation focusing on nighttime conditions.
arXiv Detail & Related papers (2024-05-18T03:07:23Z)
- Depth-aware Volume Attention for Texture-less Stereo Matching [67.46404479356896]
We propose a lightweight volume refinement scheme to tackle the texture deterioration in practical outdoor scenarios.
We introduce a depth volume supervised by the ground-truth depth map, capturing the relative hierarchy of image texture.
Local fine structure and context are emphasized to mitigate ambiguity and redundancy during volume aggregation.
arXiv Detail & Related papers (2024-02-14T04:07:44Z)
- CarPatch: A Synthetic Benchmark for Radiance Field Evaluation on Vehicle Components [77.33782775860028]
We introduce CarPatch, a novel synthetic benchmark of vehicles.
In addition to a set of images annotated with their intrinsic and extrinsic camera parameters, the corresponding depth maps and semantic segmentation masks have been generated for each view.
Global and part-based metrics have been defined and used to evaluate, compare, and better characterize some state-of-the-art techniques.
arXiv Detail & Related papers (2023-07-24T11:59:07Z)
- DynamicStereo: Consistent Dynamic Depth from Stereo Videos [91.1804971397608]
We propose DynamicStereo to estimate disparity for stereo videos.
The network learns to pool information from neighboring frames to improve the temporal consistency of its predictions.
We also introduce Dynamic Replica, a new benchmark dataset containing synthetic videos of people and animals in scanned environments.
arXiv Detail & Related papers (2023-05-03T17:40:49Z)
- Self-Supervised Depth Completion for Active Stereo [55.79929735390945]
Active stereo systems are widely used in the robotics industry due to their low cost and high quality depth maps.
These depth sensors suffer from stereo artefacts and do not provide dense depth estimates.
We present the first self-supervised depth completion method for active stereo systems that predicts accurate dense depth maps.
arXiv Detail & Related papers (2021-10-07T07:33:52Z)
- Monocular Depth Estimators: Vulnerabilities and Attacks [6.821598757786515]
Recent advances in neural networks have led to reliable monocular depth estimation.
Deep neural networks are highly vulnerable to adversarial samples for tasks like classification, detection and segmentation.
In this paper, we investigate the vulnerability of state-of-the-art monocular depth estimation networks to adversarial attacks.
arXiv Detail & Related papers (2020-05-28T21:25:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.