DefenseSplat: Enhancing the Robustness of 3D Gaussian Splatting via Frequency-Aware Filtering
- URL: http://arxiv.org/abs/2602.19323v1
- Date: Sun, 22 Feb 2026 20:00:02 GMT
- Title: DefenseSplat: Enhancing the Robustness of 3D Gaussian Splatting via Frequency-Aware Filtering
- Authors: Yiran Qiao, Yiren Lu, Yunlai Zhou, Rui Yang, Linlin Hou, Yu Yin, Jing Ma
- Abstract summary: 3D Gaussian Splatting (3DGS) has emerged as a powerful paradigm for real-time and high-fidelity 3D reconstruction from posed images. Recent studies reveal its vulnerability to adversarial corruptions in input views, where imperceptible yet consistent perturbations can drastically degrade rendering quality. We propose a frequency-aware defense strategy that reconstructs training views by filtering high-frequency noise while preserving low-frequency content.
- Score: 16.115612700186023
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: 3D Gaussian Splatting (3DGS) has emerged as a powerful paradigm for real-time and high-fidelity 3D reconstruction from posed images. However, recent studies reveal its vulnerability to adversarial corruptions in input views, where imperceptible yet consistent perturbations can drastically degrade rendering quality, increase training and rendering time, and inflate memory usage, even leading to server denial-of-service. In our work, to mitigate this issue, we begin by analyzing the distinct behaviors of adversarial perturbations in the low- and high-frequency components of input images using wavelet transforms. Based on this observation, we design a simple yet effective frequency-aware defense strategy that reconstructs training views by filtering high-frequency noise while preserving low-frequency content. This approach effectively suppresses adversarial artifacts while maintaining the authenticity of the original scene. Notably, it does not significantly impair training on clean data, achieving a desirable trade-off between robustness and performance on clean inputs. Through extensive experiments under a wide range of attack intensities on multiple benchmarks, we demonstrate that our method substantially enhances the robustness of 3DGS without access to clean ground-truth supervision. By highlighting and addressing the overlooked vulnerabilities of 3D Gaussian Splatting, our work paves the way for more robust and secure 3D reconstructions.
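The core idea of the defense, separating low-frequency scene content from high-frequency perturbations with a wavelet transform, can be sketched in a few lines. The following is a minimal single-level 2D Haar illustration in NumPy, not the authors' implementation (which operates on real training views); the pixel-rate checkerboard perturbation is only a stand-in for adversarial noise:

```python
import numpy as np

def haar2d_lowpass(img):
    """One-level 2D Haar analysis: average 2x2 blocks to obtain the
    low-frequency (LL) subband, discard the three high-frequency detail
    subbands, and reconstruct by nearest-neighbor upsampling."""
    row_lo = (img[:, 0::2] + img[:, 1::2]) / 2.0      # low-pass along rows
    ll = (row_lo[0::2, :] + row_lo[1::2, :]) / 2.0    # low-pass along columns
    return np.repeat(np.repeat(ll, 2, axis=0), 2, axis=1)

# Smooth low-frequency "scene" plus a pixel-rate checkerboard perturbation
# standing in for high-frequency adversarial noise.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
clean = np.sin(xx / 32.0) + np.cos(yy / 32.0)
noise = 0.3 * (((xx + yy) % 2) * 2 - 1)               # alternating +/- 0.3
noisy = clean + noise

filtered = haar2d_lowpass(noisy)
err_noisy = np.abs(noisy - clean).mean()              # 0.3 by construction
err_filtered = np.abs(filtered - clean).mean()        # much smaller: the 2x2
                                                      # averaging cancels the
                                                      # checkerboard exactly
print(err_noisy, err_filtered)
```

Because the checkerboard sums to zero over every 2x2 block, the LL subband removes it entirely, while the slowly varying scene is only mildly blurred; this mirrors the trade-off the paper targets between suppressing adversarial artifacts and preserving clean content.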
Related papers
- Sparse View Distractor-Free Gaussian Splatting [31.812029183156245]
3D Gaussian Splatting (3DGS) enables efficient training and fast novel view synthesis in static environments. We propose a framework to enhance distractor-free 3DGS under sparse-view conditions by incorporating rich prior information.
arXiv Detail & Related papers (2026-03-02T08:32:32Z)
- 3D Scene Rendering with Multimodal Gaussian Splatting [7.646123713331197]
3D scene reconstruction and rendering are core tasks in computer vision, with applications spanning industrial monitoring, robotics, and autonomous driving. Recent advances in 3D Gaussian Splatting have achieved impressive rendering fidelity while maintaining high computational and memory efficiency. We introduce a multimodal framework that integrates RF sensing, such as automotive radar, with GS-based rendering as a more efficient and robust alternative to vision-only GS rendering.
arXiv Detail & Related papers (2026-02-19T06:49:53Z)
- CSGaussian: Progressive Rate-Distortion Compression and Segmentation for 3D Gaussian Splatting [57.73006852239138]
We present the first unified framework for rate-distortion-optimized compression and segmentation of 3D Gaussian Splatting (3DGS). Inspired by recent advances in rate-distortion-optimized 3DGS compression, this work integrates semantic learning into the compression pipeline to support decoder-side applications. Our scheme features a lightweight implicit neural representation-based hyperprior, enabling efficient entropy coding of both color and semantic attributes.
arXiv Detail & Related papers (2026-01-19T08:21:45Z)
- RobustSplat++: Decoupling Densification, Dynamics, and Illumination for In-the-Wild 3DGS [85.90134051583368]
3D Gaussian Splatting (3DGS) has gained significant attention for its real-time, photo-realistic rendering in novel-view synthesis and 3D modeling. Existing methods struggle with accurately modeling in-the-wild scenes affected by transient objects and changing illumination. We propose RobustSplat++, a robust solution based on several critical designs.
arXiv Detail & Related papers (2025-12-04T14:05:09Z)
- From Coarse to Fine: Learnable Discrete Wavelet Transforms for Efficient 3D Gaussian Splatting [5.026688852582894]
AutoOpti3DGS is a training-time framework that automatically restrains Gaussian proliferation without sacrificing visual fidelity. Its wavelet-driven, coarse-to-fine process delays the formation of redundant fine Gaussians.
arXiv Detail & Related papers (2025-06-29T00:27:17Z)
- RobustSplat: Decoupling Densification and Dynamics for Transient-Free 3DGS [79.15416002879239]
3D Gaussian Splatting has gained significant attention for its real-time, photo-realistic rendering in novel-view synthesis and 3D modeling. Existing methods struggle with accurately modeling scenes affected by transient objects, leading to artifacts in the rendered images. We propose RobustSplat, a robust solution based on two critical designs.
arXiv Detail & Related papers (2025-06-03T11:13:48Z)
- T-3DGS: Removing Transient Objects for 3D Scene Reconstruction [83.05271859398779]
Transient objects in video sequences can significantly degrade the quality of 3D scene reconstructions. We propose T-3DGS, a novel framework that robustly filters out transient distractors during 3D reconstruction using Gaussian Splatting.
arXiv Detail & Related papers (2024-11-29T07:45:24Z)
- Beyond Gaussians: Fast and High-Fidelity 3D Splatting with Linear Kernels [51.08794269211701]
We introduce 3D Linear Splatting (3DLS), which replaces Gaussian kernels with linear kernels to achieve sharper and more precise results. 3DLS demonstrates state-of-the-art fidelity and accuracy, along with a 30% FPS improvement over baseline 3DGS.
arXiv Detail & Related papers (2024-11-19T11:59:54Z)
- 3D-GSW: 3D Gaussian Splatting for Robust Watermarking [5.52538716292462]
We introduce a robust watermarking method for 3D-GS that secures copyright of both the model and its rendered images. We present Frequency-Guided Densification (FGD), which removes 3D Gaussians based on their contribution to rendering quality. Our method achieves superior performance in both rendering quality and watermark robustness while improving real-time rendering efficiency.
arXiv Detail & Related papers (2024-09-20T05:16:06Z)
- Robust 3D Gaussian Splatting for Novel View Synthesis in Presence of Distractors [44.55317154371679]
3D Gaussian Splatting has shown impressive novel view synthesis results.
It is vulnerable to dynamic objects polluting the input data of an otherwise static scene, so-called distractors.
We show that our approach is robust to various distractors and strongly improves rendering quality on distractor-polluted scenes.
arXiv Detail & Related papers (2024-08-21T15:21:27Z)
- SpotlessSplats: Ignoring Distractors in 3D Gaussian Splatting [44.42317312908314]
3D Gaussian Splatting (3DGS) is a promising technique for 3D reconstruction, offering efficient training and rendering speeds.
Current methods require highly controlled environments to meet the inter-view consistency assumption of 3DGS.
We present SpotLessSplats, an approach that leverages pre-trained and general-purpose features coupled with robust optimization to effectively ignore transient distractors.
arXiv Detail & Related papers (2024-06-28T17:07:11Z)
- CVT-xRF: Contrastive In-Voxel Transformer for 3D Consistent Radiance Fields from Sparse Inputs [65.80187860906115]
We propose a novel approach to improve NeRF's performance with sparse inputs.
We first adopt a voxel-based ray sampling strategy to ensure that the sampled rays intersect with a certain voxel in 3D space.
We then randomly sample additional points within the voxel and apply a Transformer to infer the properties of other points on each ray, which are then incorporated into the volume rendering.
arXiv Detail & Related papers (2024-03-25T15:56:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.