3D Gaussian Splat Vulnerabilities
- URL: http://arxiv.org/abs/2506.00280v1
- Date: Fri, 30 May 2025 22:21:22 GMT
- Title: 3D Gaussian Splat Vulnerabilities
- Authors: Matthew Hull, Haoyang Yang, Pratham Mehta, Mansi Phute, Aeree Cho, Haoran Wang, Matthew Lau, Wenke Lee, Willian T. Lunardi, Martin Andreoni, Polo Chau
- Abstract summary: We introduce view-dependent Gaussian appearances to embed adversarial content visible only from specific viewpoints. We demonstrate DAGGER, a targeted adversarial attack directly perturbing 3D Gaussians without access to underlying training data. These attacks highlight underexplored vulnerabilities in 3DGS, introducing a new potential threat to robotic learning for autonomous navigation and other safety-critical 3DGS applications.
- Score: 20.065766098524698
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With 3D Gaussian Splatting (3DGS) being increasingly used in safety-critical applications, how can an adversary manipulate the scene to cause harm? We introduce CLOAK, the first attack that leverages view-dependent Gaussian appearances (colors and textures that change with viewing angle) to embed adversarial content visible only from specific viewpoints. We further demonstrate DAGGER, a targeted adversarial attack that directly perturbs 3D Gaussians without access to the underlying training data, deceiving multi-stage object detectors (e.g., Faster R-CNN) through established methods such as projected gradient descent. These attacks highlight underexplored vulnerabilities in 3DGS, introducing a new potential threat to robotic learning for autonomous navigation and other safety-critical 3DGS applications.
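The abstract states that DAGGER applies projected gradient descent (PGD) directly to 3D Gaussian parameters. The following is a minimal, hypothetical sketch of an L-infinity PGD update on per-Gaussian colors: the real attack back-propagates a detector loss (e.g., from Faster R-CNN) through a differentiable 3DGS rasterizer, whereas here a simple linear surrogate score stands in for the detector so the gradient is analytic. All names (`detector_score`, `pgd_attack`) and parameter values are illustrative, not from the paper.

```python
import numpy as np

def detector_score(colors, w):
    """Surrogate 'target class' confidence; higher means the detector
    is pushed further toward the attacker's target. Stands in for a
    differentiable rendering + detection pipeline."""
    return float(colors.ravel() @ w.ravel())

def pgd_attack(colors, w, eps=0.05, alpha=0.01, steps=40):
    """L-inf projected gradient ascent on per-Gaussian RGB colors."""
    orig = colors.copy()
    adv = colors.copy()
    for _ in range(steps):
        grad = w                                     # analytic gradient of the linear surrogate
        adv = adv + alpha * np.sign(grad)            # signed gradient step (PGD)
        adv = np.clip(adv, orig - eps, orig + eps)   # project back into the eps-ball
        adv = np.clip(adv, 0.0, 1.0)                 # keep colors in a valid range
    return adv

rng = np.random.default_rng(0)
colors = rng.uniform(0.3, 0.7, size=(100, 3))  # RGB of 100 Gaussians
w = rng.normal(size=(100, 3))                  # surrogate detector weights
adv = pgd_attack(colors, w)
```

The same update form applies to any differentiable Gaussian attribute (position, opacity, spherical-harmonic coefficients); the eps-ball projection is what keeps the perturbation visually small.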
Related papers
- GaussTrap: Stealthy Poisoning Attacks on 3D Gaussian Splatting for Targeted Scene Confusion [10.426604064131872]
This paper presents the first systematic study of backdoor threats in 3DGS pipelines. We propose GaussTrap, a novel poisoning attack method targeting 3DGS models. Experiments on both synthetic and real-world datasets demonstrate that GaussTrap can effectively embed imperceptible yet harmful backdoor views.
arXiv Detail & Related papers (2025-04-29T14:52:14Z) - SecureGS: Boosting the Security and Fidelity of 3D Gaussian Splatting Steganography [25.798754061997254]
3D Gaussian Splatting (3DGS) has emerged as a premier method for 3D representation due to its real-time rendering and high-quality outputs. Traditional NeRF steganography methods fail to address the explicit nature of 3DGS since its point cloud files are publicly accessible. We propose a SecureGS framework inspired by Scaffold-GS's anchor point design and neural decoding.
arXiv Detail & Related papers (2025-03-08T08:11:00Z) - Splats in Splats: Embedding Invisible 3D Watermark within Gaussian Splatting [28.790625685438677]
WaterGS is the first 3DGS watermarking framework that embeds 3D content in 3DGS itself without modifying any attributes of the vanilla 3DGS. Tests indicate that WaterGS significantly outperforms existing 3D steganography techniques, with 5.31% higher scene fidelity and 3X faster rendering speed.
arXiv Detail & Related papers (2024-12-04T08:40:11Z) - Hard-Label Black-Box Attacks on 3D Point Clouds [66.52447238776482]
We introduce a novel 3D attack method based on a new spectrum-aware decision boundary algorithm to generate high-quality adversarial samples. Experiments demonstrate that our attack competitively outperforms existing white/black-box attackers in terms of attack performance and adversary quality.
arXiv Detail & Related papers (2024-11-30T09:05:02Z) - Distractor-free Generalizable 3D Gaussian Splatting [26.762275313390194]
We present DGGS, a novel framework that addresses the previously unexplored challenge of Distractor-free Generalizable 3D Gaussian Splatting (3DGS). It mitigates 3D inconsistency and training instability caused by distractor data in the cross-scene generalizable training setting. Our generalizable mask prediction even achieves an accuracy superior to existing scene-specific training methods.
arXiv Detail & Related papers (2024-11-26T17:17:41Z) - Poison-splat: Computation Cost Attack on 3D Gaussian Splatting [90.88713193520917]
We reveal a significant security vulnerability that has been largely overlooked in 3DGS. The adversary can poison the input images to drastically increase the memory and time needed for 3DGS training. Such a computation cost attack is achieved by addressing a bi-level optimization problem through three tailored strategies.
arXiv Detail & Related papers (2024-10-10T17:57:29Z) - WildGaussians: 3D Gaussian Splatting in the Wild [80.5209105383932]
We introduce WildGaussians, a novel approach to handle occlusions and appearance changes with 3DGS.
We demonstrate that WildGaussians matches the real-time rendering speed of 3DGS while surpassing both 3DGS and NeRF baselines in handling in-the-wild data.
arXiv Detail & Related papers (2024-07-11T12:41:32Z) - Hide in Thicket: Generating Imperceptible and Rational Adversarial Perturbations on 3D Point Clouds [62.94859179323329]
Adversarial attack methods based on point manipulation for 3D point cloud classification have revealed the fragility of 3D models.
We propose a novel shape-based adversarial attack method, HiT-ADV, which conducts a two-stage search for attack regions based on saliency and imperceptibility perturbation scores.
We propose that by employing benign resampling and benign rigid transformations, we can further enhance physical adversarial strength with little sacrifice to imperceptibility.
arXiv Detail & Related papers (2024-03-08T12:08:06Z) - 3DHacker: Spectrum-based Decision Boundary Generation for Hard-label 3D
Point Cloud Attack [64.83391236611409]
We propose a novel 3D attack method to generate adversarial samples solely with the knowledge of class labels.
Even in the challenging hard-label setting, 3DHacker still competitively outperforms existing 3D attacks regarding the attack performance as well as adversary quality.
arXiv Detail & Related papers (2023-08-15T03:29:31Z) - A Comprehensive Study of the Robustness for LiDAR-based 3D Object
Detectors against Adversarial Attacks [84.10546708708554]
3D object detectors are increasingly crucial for security-critical tasks.
It is imperative to understand their robustness against adversarial attacks.
This paper presents the first comprehensive evaluation and analysis of the robustness of LiDAR-based 3D detectors under adversarial attacks.
arXiv Detail & Related papers (2022-12-20T13:09:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.