PaletteNeRF: Palette-based Appearance Editing of Neural Radiance Fields
- URL: http://arxiv.org/abs/2212.10699v1
- Date: Wed, 21 Dec 2022 00:20:01 GMT
- Title: PaletteNeRF: Palette-based Appearance Editing of Neural Radiance Fields
- Authors: Zhengfei Kuang, Fujun Luan, Sai Bi, Zhixin Shu, Gordon Wetzstein,
Kalyan Sunkavalli
- Abstract summary: We present PaletteNeRF, a novel method for appearance editing of neural radiance fields (NeRF) based on 3D color decomposition.
Our method decomposes the appearance of each 3D point into a linear combination of palette-based bases.
We extend our framework with compressed semantic features for semantic-aware appearance editing.
- Score: 60.66412075837952
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in neural radiance fields have enabled the high-fidelity 3D
reconstruction of complex scenes for novel view synthesis. However, it remains
underexplored how the appearance of such representations can be efficiently
edited while maintaining photorealism.
In this work, we present PaletteNeRF, a novel method for photorealistic
appearance editing of neural radiance fields (NeRF) based on 3D color
decomposition. Our method decomposes the appearance of each 3D point into a
linear combination of palette-based bases (i.e., 3D segmentations defined by a
group of NeRF-type functions) that are shared across the scene. While our
palette-based bases are view-independent, we also predict a view-dependent
function to capture the color residual (e.g., specular shading). During
training, we jointly optimize the basis functions and the color palettes, and
we also introduce novel regularizers to encourage the spatial coherence of the
decomposition.
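To make the decomposition concrete, here is a minimal, illustrative sketch (not the authors' released code; the palette size, normalization, and regularizer below are assumptions): each point's color is a convex combination of a small shared RGB palette plus a view-dependent residual.

```python
import numpy as np

# Illustrative toy version of a palette-based color decomposition.
# Palette size, normalization, and the regularizer are assumptions,
# not the actual PaletteNeRF implementation.
P = 4                                   # assumed number of palette entries
palette = np.random.rand(P, 3)          # shared, learnable RGB palette

def point_color(weights, residual):
    """Compose a per-point color from palette weights and a view-dependent residual.

    weights  : (N, P) non-negative blending weights predicted per 3D point
    residual : (N, 3) view-dependent offset (e.g., specular shading)
    """
    w = weights / (weights.sum(axis=-1, keepdims=True) + 1e-8)   # normalize weights
    diffuse = w @ palette                                        # view-independent base color
    return np.clip(diffuse + residual, 0.0, 1.0)

def sparsity_term(weights):
    """One possible stand-in for the paper's regularizers: encourage each
    point to rely on only a few palette entries."""
    w = weights / (weights.sum(axis=-1, keepdims=True) + 1e-8)
    return float(np.mean(1.0 - (w ** 2).sum(axis=-1)))
```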
Our method allows users to efficiently edit the appearance of the 3D scene by
modifying the color palettes. We also extend our framework with compressed
semantic features for semantic-aware appearance editing. We demonstrate that
our technique is superior to baseline methods both quantitatively and
qualitatively for appearance editing of complex real-world scenes.
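In the toy setup sketched above, appearance editing reduces to swapping palette entries while keeping the learned weights and the view-dependent residual fixed; a small hedged example with hypothetical values:

```python
import numpy as np

def recolor(weights, residual, new_palette):
    """Re-render the toy decomposition with an edited palette. Weights and the
    view-dependent residual are unchanged, so geometry and shading are kept
    while the base colors follow the edit."""
    w = weights / (weights.sum(axis=-1, keepdims=True) + 1e-8)
    return np.clip(w @ new_palette + residual, 0.0, 1.0)

# Example edit (toy values): push the first palette entry toward red.
palette = np.array([[0.2, 0.4, 0.8],
                    [0.9, 0.9, 0.9],
                    [0.1, 0.1, 0.1],
                    [0.5, 0.3, 0.2]])
edited_palette = palette.copy()
edited_palette[0] = [1.0, 0.0, 0.0]
```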
Related papers
- Learning Naturally Aggregated Appearance for Efficient 3D Editing [94.47518916521065]
We propose to replace the color field with an explicit 2D appearance aggregation, also called canonical image.
To avoid the distortion effect and facilitate convenient editing, we complement the canonical image with a projection field that maps 3D points onto 2D pixels for texture lookup.
Our representation, dubbed AGAP, supports various ways of 3D editing (e.g., stylization, interactive drawing, and content extraction) with no need for re-optimization.
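A minimal sketch of the lookup step described above (shapes, names, and the nearest-neighbor sampling are assumptions, not the AGAP code): a projection field maps 3D points to UV coordinates in the canonical image, and colors are fetched from that image, so editing the 2D image edits the 3D appearance.

```python
import numpy as np

canonical_image = np.random.rand(256, 256, 3)   # stand-in for the editable 2D canonical image

def lookup_color(uv, image=canonical_image):
    """Fetch colors at continuous UV coordinates in [0, 1]^2.

    uv : (N, 2) output of a (learned) projection field mapping 3D points to 2D.
    Nearest-neighbor sampling is used here purely for brevity.
    """
    h, w = image.shape[:2]
    px = np.clip(np.round(uv[:, 0] * (w - 1)).astype(int), 0, w - 1)
    py = np.clip(np.round(uv[:, 1] * (h - 1)).astype(int), 0, h - 1)
    return image[py, px]
```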
arXiv Detail & Related papers (2023-12-11T18:59:31Z)
- Mesh-Guided Neural Implicit Field Editing [42.78979161815414]
We propose a new approach that employs a mesh as a guiding mechanism in editing the neural field.
We first introduce a differentiable method using marching tetrahedra for polygonal mesh extraction from the neural implicit field.
We then design a differentiable color extractor to assign colors obtained from the volume renderings to this extracted mesh.
This differentiable colored mesh allows gradient back-propagation from the explicit mesh to the implicit fields, empowering users to easily manipulate the geometry and color of neural implicit fields.
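As a much-simplified sketch of that gradient path (an assumed setup; the paper's color extractor works from volume renderings rather than direct point queries): querying an implicit color field at the extracted vertex positions makes any loss defined on the colored mesh differentiable with respect to the field.

```python
import torch

# Toy implicit color field: 3D position -> RGB. The real method extracts the
# mesh with marching tetrahedra and colors it from volume renderings; here we
# query the field directly at the vertices just to show the gradient path.
color_field = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 3), torch.nn.Sigmoid(),
)

vertices = torch.rand(1000, 3)            # vertices of the extracted mesh
vertex_colors = color_field(vertices)     # differentiable w.r.t. the field

target = torch.zeros_like(vertex_colors)  # e.g., a user edit painted on the mesh
loss = ((vertex_colors - target) ** 2).mean()
loss.backward()                           # gradients flow back into color_field
```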
arXiv Detail & Related papers (2023-12-04T18:59:58Z)
- UVA: Towards Unified Volumetric Avatar for View Synthesis, Pose Rendering, Geometry and Texture Editing [83.0396740127043]
We propose a new approach named Unified Volumetric Avatar (UVA) that enables local editing of both geometry and texture.
UVA transforms each observation point to a canonical space using a skinning motion field and represents geometry and texture in separate neural fields.
Experiments on multiple human avatars demonstrate that our UVA achieves novel view synthesis and novel pose rendering.
arXiv Detail & Related papers (2023-04-14T07:39:49Z)
- RecolorNeRF: Layer Decomposed Radiance Fields for Efficient Color Editing of 3D Scenes [21.284044381058575]
We present RecolorNeRF, a novel user-friendly color editing approach for neural radiance fields.
Our key idea is to decompose the scene into a set of pure-colored layers, forming a palette.
To support efficient palette-based editing, the color of each layer needs to be as representative as possible.
arXiv Detail & Related papers (2023-01-19T09:18:06Z)
- Decomposing NeRF for Editing via Feature Field Distillation [14.628761232614762]
Editing a scene represented by a NeRF is challenging, as the underlying connectionist representations are not object-centric or compositional.
In this work, we tackle the problem of semantic scene decomposition of NeRFs to enable query-based local editing.
We propose to distill the knowledge of off-the-shelf, self-supervised 2D image feature extractors into a 3D feature field optimized in parallel to the radiance field.
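A minimal sketch of such a distillation objective (shapes and the plain L2 form are assumptions): features volume-rendered from the 3D feature field are regressed toward features produced by the frozen 2D extractor at the same pixels.

```python
import torch

def feature_distillation_loss(rendered_features, teacher_features):
    """L2 distillation between features composited along camera rays from the
    3D feature field and features of the same pixels from a frozen,
    self-supervised 2D extractor (e.g., a ViT backbone).

    rendered_features : (N, D) per-pixel features rendered from the feature field
    teacher_features  : (N, D) per-pixel features from the 2D teacher
    """
    return ((rendered_features - teacher_features.detach()) ** 2).mean()
```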
arXiv Detail & Related papers (2022-05-31T07:56:09Z)
- Control-NeRF: Editable Feature Volumes for Scene Rendering and Manipulation [58.16911861917018]
We present a novel method for performing flexible, 3D-aware image content manipulation while enabling high-quality novel view synthesis.
Our model couples learnt scene-specific feature volumes with a scene-agnostic neural rendering network.
We demonstrate various scene manipulations, including mixing scenes, deforming objects and inserting objects into scenes, while still producing photo-realistic results.
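One way to picture the scene-mixing operation under this design (purely illustrative; the volume layout and blending rule are assumptions): because the rendering network is shared across scenes, feature volumes from two scenes can be spliced with a 3D mask before rendering.

```python
import numpy as np

def splice_feature_volumes(volume_a, volume_b, mask):
    """Combine two scene-specific feature volumes with a boolean 3D region mask;
    the result is fed to the shared, scene-agnostic rendering network.

    volume_a, volume_b : (X, Y, Z, C) learned feature volumes of two scenes
    mask               : (X, Y, Z) region to take from volume_b
    """
    return np.where(mask[..., None], volume_b, volume_a)
```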
arXiv Detail & Related papers (2022-04-22T17:57:00Z)
- Realistic Image Synthesis with Configurable 3D Scene Layouts [59.872657806747576]
We propose a novel approach to realistic-looking image synthesis based on a 3D scene layout.
Our approach takes a 3D scene with semantic class labels as input and trains a 3D scene painting network.
With the trained painting network, realistic-looking images for the input 3D scene can be rendered and manipulated.
arXiv Detail & Related papers (2021-08-23T09:44:56Z)
- 3D Photography using Context-aware Layered Depth Inpainting [50.66235795163143]
We propose a method for converting a single RGB-D input image into a 3D photo.
A learning-based inpainting model synthesizes new local color-and-depth content into the occluded region.
The resulting 3D photos can be efficiently rendered with motion parallax.
arXiv Detail & Related papers (2020-04-09T17:59:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.