LAENeRF: Local Appearance Editing for Neural Radiance Fields
- URL: http://arxiv.org/abs/2312.09913v2
- Date: Mon, 25 Mar 2024 14:09:09 GMT
- Title: LAENeRF: Local Appearance Editing for Neural Radiance Fields
- Authors: Lukas Radl, Michael Steiner, Andreas Kurz, Markus Steinberger
- Abstract summary: LAENeRF is a framework for photorealistic and non-photorealistic appearance editing of NeRFs.
We learn a mapping from expected ray terminations to final output color, which can be supervised by a style loss.
Relying on a single point per ray for our mapping, we limit memory requirements and enable fast optimization.
- Score: 4.681790910494339
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Due to the omnipresence of Neural Radiance Fields (NeRFs), interest in editable implicit 3D representations has surged in recent years. However, editing implicit or hybrid representations as used for NeRFs is difficult due to the entanglement of appearance and geometry encoded in the model parameters. Despite these challenges, recent research has shown first promising steps towards photorealistic and non-photorealistic appearance edits. The main open issues of related work include limited interactivity, a lack of support for local edits, and large memory requirements, rendering them less useful in practice. We address these limitations with LAENeRF, a unified framework for photorealistic and non-photorealistic appearance editing of NeRFs. To tackle local editing, we leverage a voxel grid as a starting point for region selection. We learn a mapping from expected ray terminations to final output color, which can optionally be supervised by a style loss, resulting in a framework which can perform photorealistic and non-photorealistic appearance editing of selected regions. Relying on a single point per ray for our mapping, we limit memory requirements and enable fast optimization. To guarantee interactivity, we compose the output color from a set of learned, modifiable base colors, combined with additive layer mixing. Compared to concurrent work, LAENeRF enables recoloring and stylization while keeping processing time low. Furthermore, we demonstrate that our approach surpasses baseline methods both quantitatively and qualitatively.
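The two mechanics named in the abstract, the expected ray termination used as the mapping input and the output color composed from learned base colors, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions; `expected_termination`, `compose_color`, and the toy palette are illustrative names, not the paper's actual implementation.

```python
import numpy as np

def expected_termination(points, weights):
    """Expected ray termination: the weighted mean of the sample
    points along a ray, using volume-rendering weights."""
    return (weights[:, None] * points).sum(axis=0) / weights.sum()

def compose_color(mixing_weights, base_colors):
    """Additive layer mixing (hypothetical form): the final color is
    a weighted combination of modifiable base colors, so editing a
    base color recolors every pixel that uses it."""
    return mixing_weights @ base_colors

# Toy ray with 4 samples along the z-axis.
pts = np.array([[0.0, 0.0, 1.0],
                [0.0, 0.0, 2.0],
                [0.0, 0.0, 3.0],
                [0.0, 0.0, 4.0]])
w = np.array([0.1, 0.6, 0.2, 0.1])
term = expected_termination(pts, w)        # single point per ray

# Two learned base colors and predicted per-ray mixing weights.
palette = np.array([[1.0, 0.0, 0.0],       # red base
                    [0.0, 0.0, 1.0]])      # blue base
mix = np.array([0.25, 0.75])
rgb = compose_color(mix, palette)          # -> [0.25, 0.0, 0.75]
```

Because only the single termination point per ray feeds the mapping, memory stays low; because the palette is explicit, a user edit to `palette` takes effect without re-optimization.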
Related papers
- IReNe: Instant Recoloring of Neural Radiance Fields [54.94866137102324]
We introduce IReNe, enabling swift, near real-time color editing in NeRF.
We leverage a pre-trained NeRF model and a single training image with user-applied color edits.
This adjustment allows the model to generate new scene views, accurately representing the color changes from the training image.
arXiv Detail & Related papers (2024-05-30T09:30:28Z)
- NeRF-Insert: 3D Local Editing with Multimodal Control Signals [97.91172669905578]
NeRF-Insert is a NeRF editing framework that allows users to make high-quality local edits with a flexible level of control.
We cast scene editing as an in-painting problem, which encourages the global structure of the scene to be preserved.
Our results show better visual quality and also maintain stronger consistency with the original NeRF.
arXiv Detail & Related papers (2024-04-30T02:04:49Z)
- Taming Latent Diffusion Model for Neural Radiance Field Inpainting [63.297262813285265]
Neural Radiance Field (NeRF) is a representation for 3D reconstruction from multi-view images.
We propose tempering the diffusion model's stochasticity with per-scene customization and mitigating the textural shift with masked training.
Our framework yields state-of-the-art NeRF inpainting results on various real-world scenes.
arXiv Detail & Related papers (2024-04-15T17:59:57Z)
- SealD-NeRF: Interactive Pixel-Level Editing for Dynamic Scenes by Neural Radiance Fields [7.678022563694719]
SealD-NeRF is an extension of Seal-3D for pixel-level editing in dynamic settings.
It allows for consistent edits across sequences by mapping editing actions to a specific timeframe.
arXiv Detail & Related papers (2024-02-21T03:45:18Z)
- ZONE: Zero-Shot Instruction-Guided Local Editing [56.56213730578504]
We propose a Zero-shot instructiON-guided local image Editing approach, termed ZONE.
We first convert the editing intent from the user-provided instruction into specific image editing regions through InstructPix2Pix.
We then propose a Region-IoU scheme for precise image layer extraction from an off-the-shelf segment model.
arXiv Detail & Related papers (2023-12-28T02:54:34Z)
- SeamlessNeRF: Stitching Part NeRFs with Gradient Propagation [21.284044381058575]
We propose SeamlessNeRF, a novel approach for seamless appearance blending of multiple NeRFs.
In specific, we aim to optimize the appearance of a target radiance field in order to harmonize its merge with a source field.
Our approach can effectively propagate the source appearance from the boundary area to the entire target field through the gradients.
arXiv Detail & Related papers (2023-10-30T15:52:35Z)
- ProteusNeRF: Fast Lightweight NeRF Editing using 3D-Aware Image Context [26.07841568311428]
We present a very simple but effective neural network architecture that is fast and efficient while maintaining a low memory footprint.
Our representation allows straightforward object selection via semantic feature distillation at the training stage.
We propose a local 3D-aware image context to facilitate view-consistent image editing that can then be distilled into fine-tuned NeRFs.
arXiv Detail & Related papers (2023-10-15T21:54:45Z)
- RecolorNeRF: Layer Decomposed Radiance Fields for Efficient Color Editing of 3D Scenes [21.284044381058575]
We present RecolorNeRF, a novel user-friendly color editing approach for neural radiance fields.
Our key idea is to decompose the scene into a set of pure-colored layers, forming a palette.
To support efficient palette-based editing, the color of each layer needs to be as representative as possible.
arXiv Detail & Related papers (2023-01-19T09:18:06Z)
- PaletteNeRF: Palette-based Appearance Editing of Neural Radiance Fields [60.66412075837952]
We present PaletteNeRF, a novel method for appearance editing of neural radiance fields (NeRF) based on 3D color decomposition.
Our method decomposes the appearance of each 3D point into a linear combination of palette-based bases.
We extend our framework with compressed semantic features for semantic-aware appearance editing.
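The palette-based decomposition described for PaletteNeRF, where each point's color is a linear combination of shared palette bases, can be sketched in a few lines of NumPy. This is an illustrative toy under stated assumptions (the variable names and palette values are made up), showing why editing the palette recolors the scene without touching the per-point weights.

```python
import numpy as np

# Hypothetical learned palette: a small set of base colors shared
# by the whole scene.
palette = np.array([[0.9, 0.1, 0.1],    # reddish base
                    [0.1, 0.2, 0.9],    # bluish base
                    [0.9, 0.9, 0.9]])   # bright base

# Per-point blending weights predicted by the model (2 toy points).
weights = np.array([[0.7, 0.1, 0.2],
                    [0.0, 0.8, 0.2]])

# Each point's appearance is a linear combination of the bases.
colors = weights @ palette

# Recoloring = editing the palette, not re-optimizing the weights.
edited_palette = palette.copy()
edited_palette[0] = [0.1, 0.8, 0.1]      # turn the reddish base green
recolored = weights @ edited_palette
```

The design point is that the weights encode geometry-aligned structure once, so any later palette edit is a cheap matrix product rather than a new optimization.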
arXiv Detail & Related papers (2022-12-21T00:20:01Z)
- NeRF++: Analyzing and Improving Neural Radiance Fields [117.73411181186088]
Neural Radiance Fields (NeRF) achieve impressive view synthesis results for a variety of capture settings.
NeRF fits multi-layer perceptrons representing view-invariant opacity and view-dependent color volumes to a set of training images.
We address a parametrization issue involved in applying NeRF to 360 captures of objects within large-scale 3D scenes.
arXiv Detail & Related papers (2020-10-15T03:24:14Z)
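The parametrization issue NeRF++ addresses for unbounded 360 captures is handled with an inverted-sphere parametrization of the background: a point outside the unit sphere at radius r is represented by its direction and 1/r, so infinitely distant content maps into a bounded coordinate range. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def inverted_sphere(p):
    """Map a point outside the unit sphere, p with r = |p| > 1,
    to the bounded 4D coordinate (x/r, y/r, z/r, 1/r). As r grows
    to infinity, 1/r approaches 0, so the whole unbounded exterior
    fits in a bounded domain the background MLP can sample."""
    r = np.linalg.norm(p)
    return np.append(p / r, 1.0 / r)

far_point = np.array([30.0, 40.0, 0.0])   # r = 50
q = inverted_sphere(far_point)            # -> [0.6, 0.8, 0.0, 0.02]
```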
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences arising from its use.