GaussianSeal: Rooting Adaptive Watermarks for 3D Gaussian Generation Model
- URL: http://arxiv.org/abs/2503.00531v2
- Date: Wed, 24 Sep 2025 11:02:27 GMT
- Title: GaussianSeal: Rooting Adaptive Watermarks for 3D Gaussian Generation Model
- Authors: Runyi Li, Xuanyu Zhang, Chuhan Tong, Zhipei Xu, Jian Zhang
- Abstract summary: We propose the first bit watermarking framework for 3DGS generative models, named GaussianSeal. We achieve high-precision bit decoding with minimal training overhead while maintaining the fidelity of the model's outputs.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the advancement of AIGC technologies, the modalities generated by models have expanded from images and videos to 3D objects, leading to an increasing number of works focused on 3D Gaussian Splatting (3DGS) generative models. Existing research on copyright protection for generative models has primarily concentrated on watermarking in image and text modalities, with little exploration into the copyright protection of 3D object generative models. In this paper, we propose the first bit watermarking framework for 3DGS generative models, named GaussianSeal, to enable the decoding of bits as copyright identifiers from the rendered outputs of generated 3DGS. By incorporating adaptive bit modulation modules into the generative model and embedding them into the network blocks in an adaptive way, we achieve high-precision bit decoding with minimal training overhead while maintaining the fidelity of the model's outputs. Experiments demonstrate that our method outperforms post-processing watermarking approaches for 3DGS objects, achieving superior watermark decoding accuracy while preserving the quality of the generated results.
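To make the abstract's core idea concrete, here is a minimal NumPy sketch of additive bit modulation and decoding. Everything below is an illustrative assumption: the names, dimensions, and the fixed orthonormal modulation directions are hypothetical stand-ins, whereas GaussianSeal uses learned modulation modules embedded adaptively into the generator's network blocks and decodes bits from rendered outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

N_BITS = 16    # length of the copyright bit message (hypothetical)
FEAT_DIM = 64  # width of the generator block being modulated (hypothetical)

# Hypothetical bit-modulation directions: one orthonormal feature-space
# direction per bit (obtained here via QR decomposition; in the paper
# these would be learned modules inside the generator's network blocks).
Q, _ = np.linalg.qr(rng.standard_normal((FEAT_DIM, N_BITS)))
W_mod = Q.T  # shape (N_BITS, FEAT_DIM), rows are orthonormal

def modulate(features, bits, strength=0.05):
    """Additively inject a {0,1} bit message into intermediate features."""
    signed = 2.0 * np.asarray(bits, dtype=float) - 1.0  # {0,1} -> {-1,+1}
    return features + strength * signed @ W_mod

def decode(features):
    """Recover the bits by projecting back onto the modulation directions."""
    return (features @ W_mod.T > 0).astype(int)

bits = rng.integers(0, 2, size=N_BITS)
features = 0.01 * rng.standard_normal(FEAT_DIM)  # toy generator activation
marked = modulate(features, bits)
recovered = decode(marked)
print("bits recovered:", bool((recovered == bits).all()))
```

Because the modulation directions are orthonormal, projecting the marked features back onto them isolates each bit's contribution; the real framework must instead learn a decoder robust to the rendering step, which this toy example omits.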
Related papers
- Off The Grid: Detection of Primitives for Feed-Forward 3D Gaussian Splatting [33.7339252839354]
We introduce a new feed-forward architecture that detects 3D Gaussian primitives at a sub-pixel level. Inspired by keypoint detection, our decoder learns to distribute primitives across image patches. Our resulting pose-free model generates scenes in seconds, achieving state-of-the-art novel view synthesis for feed-forward models.
arXiv Detail & Related papers (2025-12-17T14:59:21Z) - MarkSplatter: Generalizable Watermarking for 3D Gaussian Splatting Model via Splatter Image Structure [27.59237608604465]
Current 3DGS watermarking methods rely on computationally expensive fine-tuning procedures for each predefined message. We propose the first generalizable watermarking framework that enables efficient protection of Splatter Image-based 3DGS models through a single forward pass.
arXiv Detail & Related papers (2025-08-31T09:12:06Z) - NovelGS: Consistent Novel-view Denoising via Large Gaussian Reconstruction Model [57.92709692193132]
NovelGS is a diffusion model for Gaussian Splatting given sparse-view images.
We leverage the novel view denoising through a transformer-based network to generate 3D Gaussians.
arXiv Detail & Related papers (2024-11-25T07:57:17Z) - GaussianMarker: Uncertainty-Aware Copyright Protection of 3D Gaussian Splatting [41.90891053671943]
Digital watermarking techniques can be applied to embed ownership information discreetly within 3DGS models.
Naively embedding a watermark in a pre-trained 3DGS model can cause obvious distortion in rendered images.
We propose an uncertainty-based method that constrains the perturbation of model parameters to achieve invisible watermarking for 3DGS.
arXiv Detail & Related papers (2024-10-31T08:08:54Z) - 3D-GSW: 3D Gaussian Splatting for Robust Watermarking [5.52538716292462]
We introduce a robust watermarking method for 3D-GS that secures ownership of both the model and its rendered images. Our proposed method remains robust against distortions in rendered images and model attacks while maintaining high rendering quality.
arXiv Detail & Related papers (2024-09-20T05:16:06Z) - Large Point-to-Gaussian Model for Image-to-3D Generation [48.95861051703273]
We propose a large Point-to-Gaussian model that takes as input an initial point cloud produced by a large 3D diffusion model conditioned on a 2D image.
The point cloud provides an initial 3D geometry prior for Gaussian generation, significantly facilitating image-to-3D generation.
arXiv Detail & Related papers (2024-08-20T15:17:53Z) - AGG: Amortized Generative 3D Gaussians for Single Image to 3D [108.38567665695027]
We introduce an Amortized Generative 3D Gaussian framework (AGG) that instantly produces 3D Gaussians from a single image.
AGG decomposes the generation of 3D Gaussian locations and other appearance attributes for joint optimization.
We propose a cascaded pipeline that first generates a coarse representation of the 3D data and later upsamples it with a 3D Gaussian super-resolution module.
arXiv Detail & Related papers (2024-01-08T18:56:33Z) - DreamGaussian: Generative Gaussian Splatting for Efficient 3D Content Creation [55.661467968178066]
We propose DreamGaussian, a novel 3D content generation framework that achieves both efficiency and quality simultaneously.
Our key insight is to design a generative 3D Gaussian Splatting model with companioned mesh extraction and texture refinement in UV space.
In contrast to the occupancy pruning used in Neural Radiance Fields, we demonstrate that the progressive densification of 3D Gaussians converges significantly faster for 3D generative tasks.
arXiv Detail & Related papers (2023-09-28T17:55:05Z) - NeRF-GAN Distillation for Efficient 3D-Aware Generation with Convolutions [97.27105725738016]
The integration of Neural Radiance Fields (NeRFs) and generative models, such as Generative Adversarial Networks (GANs), has transformed 3D-aware generation from single-view images.
We propose a simple and effective method that reuses the well-disentangled latent space of a pre-trained NeRF-GAN in a pose-conditioned convolutional network to directly generate 3D-consistent images corresponding to the underlying 3D representations.
arXiv Detail & Related papers (2023-03-22T18:59:48Z) - Training and Tuning Generative Neural Radiance Fields for Attribute-Conditional 3D-Aware Face Generation [66.21121745446345]
We propose a conditional GNeRF model that integrates specific attribute labels as input, thus amplifying the controllability and disentanglement capabilities of 3D-aware generative models.
Our approach builds upon a pre-trained 3D-aware face model, and we introduce a Training as Init and fidelity for Tuning (TRIOT) method to train a conditional normalized flow module.
Our experiments substantiate the efficacy of our model, showcasing its ability to generate high-quality edits with enhanced view consistency.
arXiv Detail & Related papers (2022-08-26T10:05:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.