DreaMark: Rooting Watermark in Score Distillation Sampling Generated Neural Radiance Fields
- URL: http://arxiv.org/abs/2412.15278v1
- Date: Wed, 18 Dec 2024 03:27:13 GMT
- Title: DreaMark: Rooting Watermark in Score Distillation Sampling Generated Neural Radiance Fields
- Authors: Xingyu Zhu, Xiapu Luo, Xuetao Wei
- Abstract summary: We propose DreaMark to embed a secret message by backdooring the NeRF during NeRF generation.
We evaluate the generation quality and watermark robustness against image- and model-level attacks.
- Score: 25.545098217655564
- Abstract: Recent advancements in text-to-3D generation can generate neural radiance fields (NeRFs) with score distillation sampling, enabling 3D asset creation without real-world data capture. With the rapid advancement in NeRF generation quality, protecting the copyright of the generated NeRF has become increasingly important. While prior works can watermark NeRFs in a post-generation way, they suffer from two vulnerabilities. First, a delay lies between NeRF generation and watermarking because the secret message is embedded into the NeRF model post-generation through fine-tuning. Second, generating a non-watermarked NeRF as an intermediate creates a potential vulnerability for theft. To address both issues, we propose DreaMark to embed a secret message by backdooring the NeRF during NeRF generation. Specifically, we first pre-train a watermark decoder. Then, DreaMark generates backdoored NeRFs in a way that the target secret message can be verified by the pre-trained watermark decoder on an arbitrary trigger viewport. We evaluate the generation quality and watermark robustness against image- and model-level attacks. Extensive experiments show that the watermarking process does not degrade the generation quality, and the watermark achieves over 90% accuracy against both image-level attacks (e.g., Gaussian noise) and model-level attacks (e.g., pruning attacks).
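The abstract's pipeline — a frozen, pre-trained decoder plus an optimization that forces the message to be recoverable from every trigger viewport — can be illustrated with a deliberately minimal toy sketch. All components here are hypothetical stand-ins, not DreaMark's actual architecture: the NeRF is a parameter vector, rendering a viewport is a fixed linear map, and the decoder is a random projection followed by a sign.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (all hypothetical): the generated "NeRF" is a parameter
# vector theta, rendering from a viewport is a fixed linear map, and the
# pre-trained watermark decoder is a frozen random projection + sign.
D_PARAMS, D_PIXELS, N_BITS, N_VIEWS = 64, 32, 4, 2
render = [rng.normal(size=(D_PIXELS, D_PARAMS)) / np.sqrt(D_PARAMS)
          for _ in range(N_VIEWS)]                                   # trigger viewports
decoder_W = rng.normal(size=(N_BITS, D_PIXELS)) / np.sqrt(D_PIXELS)  # frozen decoder
message = rng.integers(0, 2, N_BITS) * 2.0 - 1.0                     # bits in {-1, +1}

def decode(image):
    """Frozen watermark decoder: project the rendered image, threshold to bits."""
    return np.sign(decoder_W @ image)

# "Generation": gradient descent drives the decoder's output toward the
# message from every trigger viewport. In DreaMark this watermark loss would
# be optimized jointly with the score-distillation loss; the SDS term is
# omitted in this sketch.
theta = rng.normal(size=D_PARAMS)
for _ in range(500):
    grad = np.zeros_like(theta)
    for R in render:
        err = decoder_W @ (R @ theta) - message   # squared-error residual
        grad += R.T @ (decoder_W.T @ err)         # d/d_theta of 0.5*||err||^2
    theta -= 0.5 * grad / N_VIEWS

ok = all(np.array_equal(decode(R @ theta), message) for R in render)
print(ok)
```

Because the toy system is heavily overparameterized (64 parameters, 8 bit constraints), the optimization drives the decoder's logits to the target bits exactly, so the message is recovered from every trigger viewport without any post-generation fine-tuning step — the property the abstract emphasizes.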
Related papers
- RoboSignature: Robust Signature and Watermarking on Network Attacks [0.5461938536945723]
We present a novel adversarial fine-tuning attack that disrupts the model's ability to embed the intended watermark.
Our findings emphasize the importance of anticipating and defending against potential vulnerabilities in generative systems.
arXiv Detail & Related papers (2024-12-22T04:36:27Z) - Protecting NeRFs' Copyright via Plug-And-Play Watermarking Base Model [29.545874014535297]
Neural Radiance Fields (NeRFs) have become a key method for 3D scene representation.
We propose NeRFProtector, which adopts a plug-and-play strategy to protect NeRF's copyright during its creation.
arXiv Detail & Related papers (2024-07-10T15:06:52Z) - WateRF: Robust Watermarks in Radiance Fields for Protection of Copyrights [10.136998438185882]
We introduce an innovative watermarking method that can be employed in both representations of NeRF.
This is achieved by fine-tuning NeRF to embed binary messages in the rendering process.
We evaluate our method in three different aspects: capacity, invisibility, and robustness of the embedded watermarks in the 2D-rendered images.
arXiv Detail & Related papers (2024-05-03T12:56:34Z) - Shielding the Unseen: Privacy Protection through Poisoning NeRF with Spatial Deformation [59.302770084115814]
We introduce an innovative method of safeguarding user privacy against the generative capabilities of Neural Radiance Fields (NeRF) models.
Our novel poisoning attack method induces changes to observed views that are imperceptible to the human eye, yet potent enough to disrupt NeRF's ability to accurately reconstruct a 3D scene.
We extensively test our approach on two common NeRF benchmark datasets consisting of 29 real-world scenes with high-quality images.
arXiv Detail & Related papers (2023-10-04T19:35:56Z) - MarkNerf: Watermarking for Neural Radiance Field [6.29495604869364]
A watermarking algorithm is proposed to address the copyright protection issue of implicit 3D models.
Experimental results demonstrate that the proposed algorithm effectively safeguards the copyright of 3D models.
arXiv Detail & Related papers (2023-09-21T03:00:09Z) - Towards Robust Model Watermark via Reducing Parametric Vulnerability [57.66709830576457]
Backdoor-based ownership verification has recently become popular, allowing the model owner to watermark the model.
We propose a mini-max formulation to find these watermark-removed models and recover their watermark behavior.
Our method improves the robustness of the model watermarking against parametric changes and numerous watermark-removal attacks.
arXiv Detail & Related papers (2023-09-09T12:46:08Z) - DReg-NeRF: Deep Registration for Neural Radiance Fields [66.69049158826677]
We propose DReg-NeRF to solve the NeRF registration problem on object-centric annotated scenes without human intervention.
Our proposed method beats the SOTA point cloud registration methods by a large margin.
arXiv Detail & Related papers (2023-08-18T08:37:49Z) - An Unforgeable Publicly Verifiable Watermark for Large Language Models [84.2805275589553]
Current watermark detection algorithms require the secret key used in the watermark generation process, making them susceptible to security breaches and counterfeiting during public detection.
We propose an unforgeable publicly verifiable watermark algorithm named UPV that uses two different neural networks for watermark generation and detection, instead of using the same key at both stages.
arXiv Detail & Related papers (2023-07-30T13:43:27Z) - PTW: Pivotal Tuning Watermarking for Pre-Trained Image Generators [42.0915430715226]
We propose Pivotal Tuning Watermarking (PTW), a method for watermarking pre-trained generators.
PTW can embed longer codes than existing methods while better preserving the generator's image quality.
We propose rigorous, game-based definitions for robustness and undetectability, and our study reveals that watermarking is not robust against an adaptive white-box attacker.
arXiv Detail & Related papers (2023-04-14T19:44:37Z) - Latent-NeRF for Shape-Guided Generation of 3D Shapes and Textures [72.44361273600207]
We adapt the score distillation to the publicly available, and computationally efficient, Latent Diffusion Models.
Latent Diffusion Models apply the entire diffusion process in a compact latent space of a pretrained autoencoder.
We show that latent score distillation can be successfully applied directly on 3D meshes.
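For context, score distillation — the technique Latent-NeRF adapts and DreaMark builds on — is commonly written in the following form, following DreamFusion's notation (a standard statement, not taken from these abstracts): $w(t)$ is a timestep weighting, $\hat{\epsilon}_\phi$ is the diffusion model's noise prediction given text condition $y$, $z_t$ is the noised rendering (a latent, in Latent-NeRF's case), and $\theta$ are the NeRF parameters.

```latex
\nabla_\theta \mathcal{L}_{\mathrm{SDS}}
  = \mathbb{E}_{t,\epsilon}\!\left[
      w(t)\,\bigl(\hat{\epsilon}_\phi(z_t;\, y,\, t) - \epsilon\bigr)\,
      \frac{\partial z}{\partial \theta}
    \right]
```

Running the diffusion process in a pretrained autoencoder's compact latent space, as Latent-NeRF does, keeps this gradient computation inexpensive compared to pixel-space diffusion.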
arXiv Detail & Related papers (2022-11-14T18:25:24Z) - Sem2NeRF: Converting Single-View Semantic Masks to Neural Radiance Fields [49.41982694533966]
We introduce a new task, Semantic-to-NeRF translation, conditioned on one single-view semantic mask as input.
In particular, Sem2NeRF addresses the highly challenging task by encoding the semantic mask into the latent code that controls the 3D scene representation of a pretrained decoder.
We verify the efficacy of the proposed Sem2NeRF and demonstrate it outperforms several strong baselines on two benchmark datasets.
arXiv Detail & Related papers (2022-03-21T09:15:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.