RemedyGS: Defend 3D Gaussian Splatting against Computation Cost Attacks
- URL: http://arxiv.org/abs/2511.22147v1
- Date: Thu, 27 Nov 2025 06:35:08 GMT
- Title: RemedyGS: Defend 3D Gaussian Splatting against Computation Cost Attacks
- Authors: Yanping Li, Zhening Liu, Zijian Li, Zehong Lin, Jun Zhang
- Abstract summary: We propose the first effective and comprehensive black-box defense framework, named RemedyGS, against computation cost attacks. Our framework effectively defends against white-box, black-box, and adaptive attacks in 3DGS systems, achieving state-of-the-art performance in both safety and utility.
- Score: 15.039068315115372
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a mainstream technique for 3D reconstruction, 3D Gaussian splatting (3DGS) has been applied in a wide range of applications and services. Recent studies have revealed critical vulnerabilities in this pipeline and introduced computation cost attacks that lead to malicious resource occupancies and even denial-of-service (DoS) conditions, thereby hindering the reliable deployment of 3DGS. In this paper, we propose the first effective and comprehensive black-box defense framework, named RemedyGS, against such computation cost attacks, safeguarding 3DGS reconstruction systems and services. Our pipeline comprises two key components: a detector to identify the attacked input images with poisoned textures and a purifier to recover the benign images from their attacked counterparts, mitigating the adverse effects of these attacks. Moreover, we incorporate adversarial training into the purifier to enforce distributional alignment between the recovered and original natural images, thereby enhancing the defense efficacy. Experimental results demonstrate that our framework effectively defends against white-box, black-box, and adaptive attacks in 3DGS systems, achieving state-of-the-art performance in both safety and utility.
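The abstract describes a two-stage detect-then-purify pipeline. Below is a minimal PyTorch sketch of that shape; the `Detector`/`Purifier` architectures, names, and the 0.5 decision threshold are illustrative assumptions, not the paper's released code.

```python
import torch
import torch.nn as nn

class Detector(nn.Module):
    """Binary classifier: does an input view carry a poisoned texture?"""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, x):                                # x: (B, 3, H, W)
        return self.head(self.features(x).flatten(1))    # attack logits

class Purifier(nn.Module):
    """Image-to-image network mapping an attacked view back to a benign one."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, x):
        return torch.clamp(x + self.net(x), 0.0, 1.0)    # residual correction

def defend(views, detector, purifier, threshold=0.5):
    """Purify only the flagged views; pass benign views through unchanged."""
    with torch.no_grad():
        flagged = torch.sigmoid(detector(views)).squeeze(1) > threshold
        cleaned = views.clone()
        if flagged.any():
            cleaned[flagged] = purifier(views[flagged])
    return cleaned
```

Only flagged views are rewritten, so clean captures reach the 3DGS trainer untouched and reconstruction quality on benign scenes is preserved.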
Related papers
- AdLift: Lifting Adversarial Perturbations to Safeguard 3D Gaussian Splatting Assets Against Instruction-Driven Editing [109.07334219188222]
We propose the first editing safeguard for 3DGS, termed AdLift, which prevents instruction-driven editing across arbitrary views and dimensions. We alternate between gradient truncation and image-to-Gaussian fitting, yielding consistent adversarial-based protection performance across different viewpoints.
arXiv Detail & Related papers (2025-12-08T07:41:23Z)
- Adversarial Patch Attacks on Vision-Based Cargo Occupancy Estimation via Differentiable 3D Simulation [0.0]
We study the feasibility of such attacks on a convolutional cargo-occupancy classifier using fully simulated 3D environments. Our experiments demonstrate that 3D-optimized patches achieve high attack success rates, especially in a denial-of-service scenario. This is the first study to investigate adversarial patch attacks for cargo-occupancy estimation in physically realistic, fully simulated 3D scenes.
arXiv Detail & Related papers (2025-11-24T16:05:40Z)
- GaussTrap: Stealthy Poisoning Attacks on 3D Gaussian Splatting for Targeted Scene Confusion [10.426604064131872]
This paper presents the first systematic study of backdoor threats in 3DGS pipelines. We propose GaussTrap, a novel poisoning attack method targeting 3DGS models. Experiments on both synthetic and real-world datasets demonstrate that GaussTrap can effectively embed imperceptible yet harmful backdoor views.
arXiv Detail & Related papers (2025-04-29T14:52:14Z)
- Stealthy and Robust Backdoor Attack against 3D Point Clouds through Additional Point Features [7.066252856912398]
3D backdoor attacks have posed a substantial threat to 3D Deep Neural Networks (3D DNNs) designed for 3D point clouds. This paper introduces the Stealthy and Robust Backdoor Attack (SRBA), which ensures robustness and stealthiness through intentional design considerations.
arXiv Detail & Related papers (2024-12-10T13:48:11Z)
- Poison-splat: Computation Cost Attack on 3D Gaussian Splatting [90.88713193520917]
We reveal a significant security vulnerability that has been largely overlooked in 3DGS: the adversary can poison the input images to drastically increase the memory and time needed for 3DGS training. Such a computation cost attack is achieved by addressing a bi-level optimization problem through three tailored strategies.
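Since 3DGS densifies Gaussians around high-frequency texture, one way to realize such poisoning is constrained ascent on an image-complexity proxy. The sketch below uses total variation as that proxy under an L-infinity budget; the proxy choice, budget, and step sizes are assumptions for illustration, not necessarily the paper's three strategies.

```python
import torch

def total_variation(img):
    """Mean absolute difference between neighboring pixels; img: (3, H, W) in [0, 1]."""
    dh = (img[:, 1:, :] - img[:, :-1, :]).abs().mean()
    dw = (img[:, :, 1:] - img[:, :, :-1]).abs().mean()
    return dh + dw

def poison(img, eps=8 / 255, steps=100, alpha=1 / 255):
    """PGD-style ascent on the proxy objective inside an eps-ball around img."""
    delta = torch.zeros_like(img, requires_grad=True)
    for _ in range(steps):
        loss = total_variation((img + delta).clamp(0, 1))
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # ascend: sharper textures
            delta.clamp_(-eps, eps)              # project back into the budget
            delta.grad.zero_()
    return (img + delta).detach().clamp(0, 1)
```

Sharper textures push 3DGS's densification heuristics to allocate more Gaussians, which is what inflates the victim's memory footprint and training time.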
arXiv Detail & Related papers (2024-10-10T17:57:29Z)
- Securing Recommender System via Cooperative Training [78.97620275467733]
We propose a general framework, Triple Cooperative Defense (TCD), which employs three cooperative models that mutually enhance the data.
Considering that existing attacks struggle to balance bi-level optimization and efficiency, we revisit poisoning attacks in recommender systems.
We put forth a Game-based Co-training Attack (GCoAttack), which frames the proposed CoAttack and TCD as a game-theoretic process.
arXiv Detail & Related papers (2024-01-23T12:07:20Z)
- A Comprehensive Study of the Robustness for LiDAR-based 3D Object Detectors against Adversarial Attacks [84.10546708708554]
3D object detectors are increasingly crucial for security-critical tasks.
It is imperative to understand their robustness against adversarial attacks.
This paper presents the first comprehensive evaluation and analysis of the robustness of LiDAR-based 3D detectors under adversarial attacks.
arXiv Detail & Related papers (2022-12-20T13:09:58Z)
- Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses [59.58128343334556]
We introduce a relaxation term to the standard loss that finds more suitable gradient directions, increases attack efficacy, and leads to more efficient adversarial training.
We propose Guided Adversarial Margin Attack (GAMA), which utilizes function mapping of the clean image to guide the generation of adversaries.
We also propose Guided Adversarial Training (GAT), which achieves state-of-the-art performance amongst single-step defenses.
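As a hedged sketch of what a guided margin attack step can look like in PyTorch: the objective combines a probability margin with an L2 term measuring how far the adversarial softmax output has moved from the clean image's output, with that term's weight decayed over iterations. The exact loss, schedule, and hyper-parameters below are reconstructions for illustration, not the paper's published settings.

```python
import torch
import torch.nn.functional as F

def guided_attack(model, x, y, eps=8 / 255, steps=50, alpha=2 / 255, lam0=10.0):
    """Single-restart sketch: ascend a margin-plus-relaxation objective."""
    model.eval().requires_grad_(False)             # only delta is optimized
    with torch.no_grad():
        p_clean = F.softmax(model(x), dim=1)       # clean-image guide
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for t in range(steps):
        p_adv = F.softmax(model((x + delta).clamp(0, 1)), dim=1)
        onehot = F.one_hot(y, p_adv.size(1)).bool()
        p_true = p_adv[onehot]                     # true-class probability
        p_other = p_adv.masked_fill(onehot, -1.0).max(1).values
        lam = lam0 * (1 - t / steps)               # relaxation weight decays to 0
        obj = (p_other - p_true) + lam * ((p_adv - p_clean) ** 2).sum(1)
        obj.mean().backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()     # gradient ascent step
            delta.clamp_(-eps, eps)
            delta.grad.zero_()
    return (x + delta).detach().clamp(0, 1)
```

The relaxation term supplies informative gradients early, when the margin is saturated; as its weight decays, later iterations optimize the pure margin.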
arXiv Detail & Related papers (2020-11-30T16:39:39Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
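One way to read "self-supervised adversarial training in the input space": craft perturbations that maximize feature distortion (no class labels needed), then train an input-space purifier to undo them. The sketch below follows that reading; the feature extractor, loss weights, and loop structure are illustrative assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def feature_distortion_attack(feat, x, eps=8 / 255, steps=10, alpha=2 / 255):
    """Label-free attack: maximize the distance between clean and perturbed
    features. (Assumes feat's parameters are frozen by the caller.)"""
    with torch.no_grad():
        f_clean = feat(x)
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.mse_loss(feat((x + delta).clamp(0, 1)), f_clean)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # ascend feature distortion
            delta.clamp_(-eps, eps)
            delta.grad.zero_()
    return (x + delta).detach().clamp(0, 1)

def purifier_step(purifier, feat, x, optimizer):
    """One update: the purifier learns to map attacked inputs back toward
    clean ones. feat is treated as a fixed, pretrained feature extractor."""
    x_adv = feature_distortion_attack(feat, x)
    with torch.no_grad():
        f_target = feat(x)
    x_pur = purifier(x_adv)
    loss = F.mse_loss(x_pur, x) + F.mse_loss(feat(x_pur), f_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because neither loop uses labels, the training signal is task-agnostic, which is what lets such a defense target attacks it has never seen.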
arXiv Detail & Related papers (2020-06-08T20:42:39Z)