Watermarking for Neural Radiation Fields by Invertible Neural Network
- URL: http://arxiv.org/abs/2312.02456v1
- Date: Tue, 5 Dec 2023 03:14:44 GMT
- Title: Watermarking for Neural Radiation Fields by Invertible Neural Network
- Authors: Wenquan Sun, Jia Liu, Weina Dong, Lifeng Chen, Ke Niu
- Abstract summary: A scheme based on invertible neural network watermarking is proposed to protect the copyright of a neural radiance field.
The scheme embeds a watermark in each training image used to train the neural radiance field and enables extraction of the watermark from images rendered at multiple viewpoints.
- Score: 4.460129592580675
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To protect the copyright of a 3D scene represented by a neural radiance field, the embedding and extraction of the watermark are treated as a pair of inverse image transformations. A copyright-protection scheme based on invertible neural network watermarking is proposed, which applies 2D image watermarking techniques to protect the 3D scene. The scheme embeds the watermark in the training images of the neural radiance field through the forward pass of the invertible network and extracts it from images rendered by the radiance field through the inverse pass, thereby protecting the copyright of both the neural radiance field and the 3D scene it represents. Because the rendering process can degrade the watermark information, the scheme incorporates an image quality enhancement module: a neural network first restores the rendered image, and the watermark is then extracted from the restored image. A watermark is embedded in each training image, so watermark information can be extracted from renderings at multiple viewpoints. Simulation results demonstrate the effectiveness of the method.
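The forward/inverse pairing described in the abstract can be illustrated with an additive coupling layer, the basic invertible building block of invertible neural networks. The sketch below is a minimal numpy toy, not the paper's architecture: the subnetworks `phi` and `eta` stand in for learned CNNs, and in practical image-hiding INNs the auxiliary output is typically not stored and is replaced by a sample at extraction time, which is one reason rendering degradation hurts recovery.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the learned subnetworks of an additive coupling layer.
# Coupling subnetworks need not be invertible themselves.
W1 = rng.standard_normal((4, 4)) * 0.1
W2 = rng.standard_normal((4, 4)) * 0.1

def phi(x):
    return np.tanh(x @ W1)

def eta(x):
    return np.tanh(x @ W2)

def embed(cover, watermark):
    """Forward pass: (cover, watermark) -> (stego, aux)."""
    stego = cover + phi(watermark)
    aux = watermark + eta(stego)
    return stego, aux

def extract(stego, aux):
    """Inverse pass: undoes the forward pass exactly."""
    watermark = aux - eta(stego)
    cover = stego - phi(watermark)
    return cover, watermark

cover = rng.standard_normal((4, 4))       # stand-in for a training image
watermark = rng.standard_normal((4, 4))   # stand-in for the watermark signal
stego, aux = embed(cover, watermark)
rec_cover, rec_wm = extract(stego, aux)
assert np.allclose(rec_cover, cover) and np.allclose(rec_wm, watermark)
```

Because the inverse pass exactly undoes the forward pass, extraction is lossless as long as the stego image survives unchanged; NeRF rendering breaks this exactness, which is what the quality enhancement module described above compensates for.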
Related papers
- Neural radiance fields-based holography [Invited] [7.6563606969349856]
This study presents a novel approach for generating holograms based on the neural radiance fields (NeRF) technique.
NeRF is a state-of-the-art technique for 3D light-field reconstruction from 2D images based on volume rendering.
We constructed a hologram-generation rendering pipeline that uses deep neural networks and operates directly on the 3D light field reconstructed from 2D images by NeRF.
arXiv Detail & Related papers (2024-03-02T08:49:02Z)
- Leveraging Neural Radiance Fields for Uncertainty-Aware Visual Localization [56.95046107046027]
We propose to leverage Neural Radiance Fields (NeRF) to generate training samples for scene coordinate regression.
Despite NeRF's efficiency in rendering, many of the rendered images are polluted by artifacts or contain only minimal information gain.
arXiv Detail & Related papers (2023-10-10T20:11:13Z)
- MarkNerf: Watermarking for Neural Radiance Field [6.29495604869364]
A watermarking algorithm is proposed to address the copyright protection issue of implicit 3D models.
Experimental results demonstrate that the proposed algorithm effectively safeguards the copyright of 3D models.
arXiv Detail & Related papers (2023-09-21T03:00:09Z)
- Neural Point-based Volumetric Avatar: Surface-guided Neural Points for Efficient and Photorealistic Volumetric Head Avatar [62.87222308616711]
We propose Neural Point-based Volumetric Avatar, a method that adopts the neural point representation and the neural volume rendering process.
Specifically, the neural points are strategically constrained around the surface of the target expression via a high-resolution UV displacement map.
By design, our method is better equipped to handle topologically changing regions and thin structures while also ensuring accurate expression control when animating avatars.
arXiv Detail & Related papers (2023-07-11T03:40:10Z)
- Semantic 3D-aware Portrait Synthesis and Manipulation Based on Compositional Neural Radiance Field [55.431697263581626]
We propose a Compositional Neural Radiance Field (CNeRF) for semantic 3D-aware portrait synthesis and manipulation.
CNeRF divides the image into semantic regions, learns an independent neural radiance field for each region, and finally fuses them to render the complete image.
Compared to state-of-the-art 3D-aware GAN methods, our approach enables fine-grained semantic region manipulation, while maintaining high-quality 3D-consistent synthesis.
arXiv Detail & Related papers (2023-02-03T07:17:46Z)
- GazeNeRF: 3D-Aware Gaze Redirection with Neural Radiance Fields [100.53114092627577]
Existing gaze redirection methods operate on 2D images and struggle to generate 3D consistent results.
We build on the intuition that the face region and eyeballs are separate 3D structures that move in a coordinated yet independent fashion.
arXiv Detail & Related papers (2022-12-08T13:19:11Z)
- Knowledge-Free Black-Box Watermark and Ownership Proof for Image Classification Neural Networks [9.117248639119529]
We propose a knowledge-free black-box watermarking scheme for image classification neural networks.
A delicate encoding and verification protocol is designed to ensure the scheme's security against knowledgeable adversaries.
Experimental results demonstrate the functionality-preserving capability and security of the proposed watermarking scheme.
arXiv Detail & Related papers (2022-04-09T18:09:02Z)
- A Screen-Shooting Resilient Document Image Watermarking Scheme using Deep Neural Network [6.662095297079911]
This paper proposes a novel screen-shooting resilient watermarking scheme for document images using a deep neural network.
Specifically, our scheme is an end-to-end neural network with an encoder to embed the watermark and a decoder to extract it.
An embedding strength adjustment strategy is designed to improve the visual quality of the watermarked image with little loss of extraction accuracy.
arXiv Detail & Related papers (2022-03-10T07:19:44Z)
- Adversarial Domain Feature Adaptation for Bronchoscopic Depth Estimation [111.89519571205778]
In this work, we propose an alternative domain-adaptive approach to depth estimation.
Our novel two-step structure first trains a depth estimation network with labeled synthetic images in a supervised manner.
The results of our experiments show that the proposed method improves the network's performance on real images by a considerable margin.
arXiv Detail & Related papers (2021-09-24T08:11:34Z)
- Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering [60.02806355570514]
Inferring representations of 3D scenes from 2D observations is a fundamental problem of computer graphics, computer vision, and artificial intelligence.
We propose a novel neural scene representation, Light Field Networks or LFNs, which represent both geometry and appearance of the underlying 3D scene in a 360-degree, four-dimensional light field.
Rendering a ray from an LFN requires only a *single* network evaluation, as opposed to the hundreds of evaluations per ray required by ray-marching or volumetric rendering.
arXiv Detail & Related papers (2021-06-04T17:54:49Z)
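Several entries above contrast NeRF-style volumetric rendering, which alpha-composites many field-network queries along each ray, with Light Field Networks, which map a ray parameterization directly to radiance in a single evaluation. A minimal numpy sketch of both, using a toy random matrix in place of a trained network (the Plücker ray parameterization and the compositing quadrature are standard, but nothing here reproduces any paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)
W_lfn = rng.standard_normal((6, 3)) * 0.1  # stand-in for a trained LFN MLP

def plucker(origin, direction):
    """6D Plücker ray coordinates: (unit direction, moment)."""
    d = direction / np.linalg.norm(direction)
    return np.concatenate([d, np.cross(origin, d)])

def lfn_render(origin, direction):
    """Light Field Network: ONE network evaluation per ray -> RGB."""
    return np.tanh(plucker(origin, direction) @ W_lfn)

def volumetric_render(sigmas, colors, deltas):
    """NeRF-style compositing over N ray samples; each sample normally
    costs one field-network query (hence hundreds of evaluations/ray)."""
    alphas = 1.0 - np.exp(-sigmas * deltas)                 # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas                                # transmittance-weighted
    return (weights[:, None] * colors).sum(axis=0)

origin = np.zeros(3)
direction = np.array([0.0, 0.0, 1.0])
rgb_lfn = lfn_render(origin, direction)       # 1 evaluation total

n = 128                                       # 128 evaluations for the ray marcher
sigmas = np.full(n, 0.5)                      # uniform red fog along the ray
colors = np.tile([1.0, 0.0, 0.0], (n, 1))
deltas = np.full(n, 4.0 / n)
rgb_vol = volumetric_render(sigmas, colors, deltas)
```

For the uniform medium above the composited red channel equals 1 - exp(-sigma * length) = 1 - exp(-2), matching the continuous volume rendering integral; the LFN path reaches an RGB value from a single matrix product, which is the speedup the entry claims.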
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.