OBJVanish: Physically Realizable Text-to-3D Adv. Generation of LiDAR-Invisible Objects
- URL: http://arxiv.org/abs/2510.06952v1
- Date: Wed, 08 Oct 2025 12:35:35 GMT
- Title: OBJVanish: Physically Realizable Text-to-3D Adv. Generation of LiDAR-Invisible Objects
- Authors: Bing Li, Wuqi Wang, Yanan Zhang, Jingzheng Li, Haigen Min, Wei Feng, Xingyu Zhao, Jie Zhang, Qing Guo
- Abstract summary: We introduce a text-to-3D adversarial generation method to generate 3D models of objects truly invisible to LiDAR detectors. We show that our approach can generate 3D pedestrians that evade six state-of-the-art (SOTA) LiDAR 3D detectors in both CARLA simulation and physical environments.
- Score: 27.16863296560931
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: LiDAR-based 3D object detectors are fundamental to autonomous driving, where failing to detect objects poses severe safety risks. Developing effective 3D adversarial attacks is essential for thoroughly testing these detection systems and exposing their vulnerabilities before real-world deployment. However, existing adversarial attacks that add optimized perturbations to 3D points have two critical limitations: they rarely cause complete object disappearance and prove difficult to implement in physical environments. We introduce a text-to-3D adversarial generation method, a novel approach enabling physically realizable attacks that can generate 3D models of objects truly invisible to LiDAR detectors and be easily realized in the real world. Specifically, we present the first empirical study that systematically investigates the factors influencing detection vulnerability by manipulating the topology, connectivity, and intensity of individual pedestrian 3D models and combining pedestrians with multiple objects within the CARLA simulation environment. Building on the insights, we propose the physically-informed text-to-3D adversarial generation (Phy3DAdvGen) that systematically optimizes text prompts by iteratively refining verbs, objects, and poses to produce LiDAR-invisible pedestrians. To ensure physical realizability, we construct a comprehensive object pool containing 13 3D models of real objects and constrain Phy3DAdvGen to generate 3D objects based on combinations of objects in this set. Extensive experiments demonstrate that our approach can generate 3D pedestrians that evade six state-of-the-art (SOTA) LiDAR 3D detectors in both CARLA simulation and physical environments, thereby highlighting vulnerabilities in safety-critical applications.
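The abstract describes Phy3DAdvGen only at a high level: iteratively refine (verb, object, pose) prompt components, constrained to a fixed object pool, and keep the prompt whose generated 3D pedestrian the LiDAR detector fails to see. The sketch below illustrates that search loop shape only; the object pool, prompt template, and `detection_score` stub are illustrative assumptions, not the authors' actual pipeline (which generates 3D models and simulates LiDAR in CARLA).

```python
import hashlib
import itertools
import random

# Illustrative stand-ins; the paper uses a pool of 13 real-object 3D models.
OBJECT_POOL = ["umbrella", "cardboard box", "traffic cone"]
VERBS = ["holding", "carrying", "pushing"]
POSES = ["standing", "crouching", "walking"]

def detection_score(prompt: str) -> float:
    # Stub for "generate a 3D model from the prompt, simulate LiDAR in
    # CARLA, run the detector". Returns a pseudo-confidence in [0, 1],
    # deterministic per prompt so the demo is reproducible.
    digest = hashlib.md5(prompt.encode()).hexdigest()
    return (int(digest, 16) % 10**6) / 10**6

def search_invisible_prompt(iterations: int = 50) -> tuple[str, float]:
    # Random search over (verb, object, pose) combinations drawn from the
    # fixed pool, keeping the prompt with the lowest detector confidence
    # (lower confidence = closer to "invisible").
    candidates = list(itertools.product(VERBS, OBJECT_POOL, POSES))
    rng = random.Random(0)
    best_prompt, best_score = "", float("inf")
    for _ in range(iterations):
        verb, obj, pose = rng.choice(candidates)
        prompt = f"a pedestrian {pose}, {verb} a {obj}"
        score = detection_score(prompt)
        if score < best_score:
            best_prompt, best_score = prompt, score
    return best_prompt, best_score

if __name__ == "__main__":
    prompt, score = search_invisible_prompt()
    print(f"best prompt: {prompt!r}  detector score: {score:.3f}")
```

Constraining candidates to products over a small pool is what makes the attack physically realizable: every optimized prompt corresponds to a scene that can be assembled from real objects.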
Related papers
- PLOT: Pseudo-Labeling via Video Object Tracking for Scalable Monocular 3D Object Detection [35.524943073010675]
Monocular 3D object detection (M3OD) has long faced challenges due to data scarcity caused by high annotation costs and inherent 2D-to-3D ambiguity. We propose a novel pseudo-labeling framework that uses only video data and is more robust to occlusion, without requiring a multi-view setup, additional sensors, camera poses, or domain-specific training.
arXiv Detail & Related papers (2025-07-03T07:46:39Z)
- The Meeseeks Mesh: Spatially Consistent 3D Adversarial Objects for BEV Detector [37.74333887056029]
3D object detection is a critical component in autonomous driving systems. In this paper, we investigate the vulnerability of 3D object detection models to 3D adversarial attacks. We generate non-invasive 3D adversarial objects tailored for real-world attack scenarios.
arXiv Detail & Related papers (2025-05-28T15:49:54Z)
- DriveGEN: Generalized and Robust 3D Detection in Driving via Controllable Text-to-Image Diffusion Generation [49.32104127246474]
DriveGEN is a training-free, controllable text-to-image diffusion generation method. It consistently preserves objects with precise 3D geometry across diverse out-of-distribution generations.
arXiv Detail & Related papers (2025-03-14T06:35:38Z)
- Street Gaussians without 3D Object Tracker [84.89933388445185]
Most existing methods rely on labor-intensive manual labeling of object poses. We propose a stable object tracking module by leveraging associations from 2D deep trackers within a 3D object fusion strategy. We address inevitable tracking errors by further introducing a motion learning strategy in an implicit feature space that autonomously corrects trajectory errors and recovers missed detections.
arXiv Detail & Related papers (2024-12-07T05:49:42Z)
- HD-OOD3D: Supervised and Unsupervised Out-of-Distribution object detection in LiDAR data [6.372261626436676]
HD-OOD3D is a novel two-stage method for detecting unknown objects. To address the challenge of scaling the learning of unknown objects, we explore unsupervised training strategies to generate pseudo-labels for unknowns.
arXiv Detail & Related papers (2024-10-31T09:29:55Z)
- SHIFT3D: Synthesizing Hard Inputs For Tricking 3D Detectors [37.80095745939221]
We present SHIFT3D, a differentiable pipeline for generating 3D shapes that are structurally plausible yet challenging to 3D object detectors.
In safety-critical applications like autonomous driving, discovering such novel challenging objects can offer insight into unknown vulnerabilities of 3D detectors.
arXiv Detail & Related papers (2023-09-11T20:28:18Z)
- AdvMono3D: Advanced Monocular 3D Object Detection with Depth-Aware Robust Adversarial Training [64.14759275211115]
We propose a depth-aware robust adversarial training method for monocular 3D object detection, dubbed DART3D.
Our adversarial training approach capitalizes on the inherent uncertainty, enabling the model to significantly improve its robustness against adversarial attacks.
arXiv Detail & Related papers (2023-09-03T07:05:32Z)
- A Comprehensive Study of the Robustness for LiDAR-based 3D Object Detectors against Adversarial Attacks [84.10546708708554]
3D object detectors are increasingly crucial for security-critical tasks.
It is imperative to understand their robustness against adversarial attacks.
This paper presents the first comprehensive evaluation and analysis of the robustness of LiDAR-based 3D detectors under adversarial attacks.
arXiv Detail & Related papers (2022-12-20T13:09:58Z)
- Aerial Monocular 3D Object Detection [67.20369963664314]
DVDET is proposed to achieve aerial monocular 3D object detection in both the 2D image space and the 3D physical space. To address the severe view deformation issue, we propose a novel trainable geo-deformable transformation module. To encourage more researchers to investigate this area, we will release the dataset and related code.
arXiv Detail & Related papers (2022-08-08T08:32:56Z) - Exploring Adversarial Robustness of Multi-Sensor Perception Systems in
Self Driving [87.3492357041748]
In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle.
Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features.
Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can boost robustness to such attacks significantly.
arXiv Detail & Related papers (2021-01-17T21:15:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.