StegaNeRF: Embedding Invisible Information within Neural Radiance Fields
- URL: http://arxiv.org/abs/2212.01602v1
- Date: Sat, 3 Dec 2022 12:14:19 GMT
- Title: StegaNeRF: Embedding Invisible Information within Neural Radiance Fields
- Authors: Chenxin Li, Brandon Y. Feng, Zhiwen Fan, Panwang Pan, Zhangyang Wang
- Abstract summary: We present StegaNeRF, a method for steganographic information embedding in NeRF renderings.
We design an optimization framework allowing accurate extraction of hidden information from images rendered by NeRF.
StegaNeRF signifies an initial exploration into the novel problem of instilling customizable, imperceptible, and recoverable information into NeRF renderings.
- Score: 61.653702733061785
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in neural rendering suggest a future in which visual data
is widely distributed by sharing NeRF model weights. However, while common visual
data formats (images and videos) have standard approaches for embedding ownership or
copyright information, either explicitly or subtly, the problem remains unexplored for
the emerging NeRF format. We present StegaNeRF, a method for steganographic
information embedding in NeRF renderings. We design an optimization framework
that allows accurate extraction of hidden information from images rendered by NeRF
while preserving the original visual quality. We evaluate our method under
several potential deployment scenarios, and we further discuss the insights
discovered through our analysis. StegaNeRF is an initial exploration of the
novel problem of instilling customizable, imperceptible, and recoverable
information into NeRF renderings, with minimal impact on the rendered images.
Project page: https://xggnet.github.io/StegaNeRF/.
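The optimization framework described in the abstract balances two objectives: the stego NeRF's renderings should stay close to the original renderings (imperceptibility), while a decoder should recover the hidden message from those renderings (recoverability). The sketch below illustrates this trade-off as a combined loss; the function name, the binary-message formulation, and the weighting factor `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def steg_loss(rendered, target, decoded_bits, hidden_bits, lam=0.1):
    """Toy combined objective in the spirit of StegaNeRF:
    rendering fidelity + hidden-message recovery.
    `lam` (hypothetical) trades imperceptibility against recoverability."""
    # Rendering loss: keep the stego NeRF's output close to the original render.
    l_render = np.mean((rendered - target) ** 2)
    # Recovery loss: binary cross-entropy between the decoder's predicted
    # bit probabilities and the embedded hidden bits.
    p = np.clip(decoded_bits, 1e-7, 1.0 - 1e-7)
    l_recover = -np.mean(hidden_bits * np.log(p) + (1 - hidden_bits) * np.log(1 - p))
    return l_render + lam * l_recover
```

In a full pipeline, both `rendered` and `decoded_bits` would be differentiable functions of the NeRF weights, so minimizing this loss jointly fine-tunes the scene representation and the decoder.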
Related papers
- NeRF-VPT: Learning Novel View Representations with Neural Radiance Fields via View Prompt Tuning [63.39461847093663]
We propose NeRF-VPT, an innovative method for novel view synthesis.
Our proposed NeRF-VPT employs a cascading view prompt tuning paradigm, wherein RGB information gained from preceding rendering outcomes serves as instructive visual prompts for subsequent rendering stages.
NeRF-VPT only requires sampling RGB data from previous stage renderings as priors at each training stage, without relying on extra guidance or complex techniques.
arXiv Detail & Related papers (2024-03-02T22:08:10Z)
- SimpleNeRF: Regularizing Sparse Input Neural Radiance Fields with Simpler Solutions [6.9980855647933655]
Supervising the depth estimated by the NeRF helps train it effectively with fewer views.
We design augmented models that encourage simpler solutions by exploring the role of positional encoding and view-dependent radiance.
We achieve state-of-the-art view-synthesis performance on two popular datasets by employing the above regularizations.
arXiv Detail & Related papers (2023-09-07T18:02:57Z)
- NeRFuser: Large-Scale Scene Representation by NeRF Fusion [35.749208740102546]
A practical benefit of implicit visual representations like Neural Radiance Fields (NeRFs) is their memory efficiency.
We propose NeRFuser, a novel architecture for NeRF registration and blending that assumes only access to pre-generated NeRFs.
arXiv Detail & Related papers (2023-05-22T17:59:05Z)
- NeRFVS: Neural Radiance Fields for Free View Synthesis via Geometry Scaffolds [60.1382112938132]
We present NeRFVS, a novel neural radiance fields (NeRF) based method to enable free navigation in a room.
NeRF achieves impressive performance when rendering novel views similar to the input views, but struggles on novel views that differ significantly from the training views.
arXiv Detail & Related papers (2023-04-13T06:40:08Z)
- PANeRF: Pseudo-view Augmentation for Improved Neural Radiance Fields Based on Few-shot Inputs [3.818285175392197]
Neural radiance fields (NeRF) have promising applications for rendering novel views of complex scenes.
NeRF requires dense input views, typically numbering in the hundreds, for generating high-quality images.
We propose pseudo-view augmentation of NeRF, a scheme that generates sufficient additional data by considering the geometry of few-shot inputs.
arXiv Detail & Related papers (2022-11-23T08:01:10Z)
- NeRF-RPN: A general framework for object detection in NeRFs [54.54613914831599]
NeRF-RPN aims to detect all bounding boxes of objects in a scene.
NeRF-RPN is a general framework and can be applied to detect objects without class labels.
arXiv Detail & Related papers (2022-11-21T17:02:01Z)
- ActiveNeRF: Learning where to See with Uncertainty Estimation [36.209200774203005]
Recently, Neural Radiance Fields (NeRF) have shown promising performance in reconstructing 3D scenes and synthesizing novel views from a sparse set of 2D images.
We present a novel learning framework, ActiveNeRF, aiming to model a 3D scene with a constrained input budget.
arXiv Detail & Related papers (2022-09-18T12:09:15Z)
- InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering [55.70938412352287]
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation.
The proposed approach minimizes potential reconstruction inconsistency that happens due to insufficient viewpoints.
We achieve consistently improved performance compared to existing neural view synthesis methods by large margins on multiple standard benchmarks.
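As the title indicates, InfoNeRF's regularizer penalizes the entropy of each ray's density distribution, encouraging density to concentrate on the observed surface rather than spreading along the ray. The sketch below shows this idea on a ray's volume-rendering weights; the function is an illustrative stand-in, not the paper's exact loss.

```python
import numpy as np

def ray_entropy(weights, eps=1e-10):
    """Shannon entropy of a ray's normalized volume-rendering weights.
    An InfoNeRF-style regularizer penalizes high entropy so that density
    concentrates at the surface (illustrative sketch, not the paper's exact loss)."""
    # Normalize the per-sample weights along the ray into a distribution.
    p = weights / (weights.sum() + eps)
    # High entropy = density smeared along the ray; low entropy = a sharp surface.
    return -np.sum(p * np.log(p + eps))
```

Adding this term (summed over sampled rays) to the photometric loss discourages the floating-density artifacts that commonly appear when training views are scarce.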
arXiv Detail & Related papers (2021-12-31T11:56:01Z)
- iNeRF: Inverting Neural Radiance Fields for Pose Estimation [68.91325516370013]
We present iNeRF, a framework that performs mesh-free pose estimation by "inverting" a Neural Radiance Field (NeRF).
NeRFs have been shown to be remarkably effective for the task of view synthesis.
arXiv Detail & Related papers (2020-12-10T18:36:40Z)
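"Inverting" a NeRF as in iNeRF means holding the trained network fixed and optimizing the camera pose so the rendered image matches an observed one. The toy sketch below captures that loop, substituting finite differences for the autodiff gradients an actual implementation would use; `render_fn`, the flat pose vector, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

def photometric_loss(pose, render_fn, observed):
    """Photometric error between the observed image and the render at `pose`."""
    return np.mean((render_fn(pose) - observed) ** 2)

def invert_pose(render_fn, observed, pose0, lr=0.1, steps=200, h=1e-4):
    """Estimate camera pose by gradient descent on photometric error.
    Central finite differences stand in for autodiff (toy stand-in for iNeRF;
    a real implementation backpropagates through the NeRF renderer)."""
    pose = np.array(pose0, dtype=float)
    for _ in range(steps):
        grad = np.zeros_like(pose)
        for i in range(pose.size):
            d = np.zeros_like(pose)
            d[i] = h
            grad[i] = (photometric_loss(pose + d, render_fn, observed)
                       - photometric_loss(pose - d, render_fn, observed)) / (2 * h)
        pose -= lr * grad
    return pose
```

With a differentiable renderer in place of `render_fn`, the same descent loop recovers the 6-DoF camera pose of a query image relative to a trained scene.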
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.