RecolorNeRF: Layer Decomposed Radiance Fields for Efficient Color
Editing of 3D Scenes
- URL: http://arxiv.org/abs/2301.07958v3
- Date: Mon, 18 Sep 2023 17:28:42 GMT
- Title: RecolorNeRF: Layer Decomposed Radiance Fields for Efficient Color
Editing of 3D Scenes
- Authors: Bingchen Gong and Yuehao Wang and Xiaoguang Han and Qi Dou
- Abstract summary: We present RecolorNeRF, a novel user-friendly color editing approach for neural radiance fields.
Our key idea is to decompose the scene into a set of pure-colored layers, forming a palette.
To support efficient palette-based editing, the color of each layer needs to be as representative as possible.
- Score: 21.284044381058575
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Radiance fields have gradually become a mainstream representation of visual media.
Although their appearance editing has been studied, achieving view-consistent
recoloring in an efficient manner remains underexplored. We
present RecolorNeRF, a novel user-friendly color editing approach for
neural radiance fields. Our key idea is to decompose the scene into a set of
pure-colored layers, forming a palette. By this means, color manipulation can
be conducted by altering the color components of the palette directly. To
support efficient palette-based editing, the color of each layer needs to be as
representative as possible. In the end, the problem is formulated as an
optimization problem, where the layers and their blending weights are jointly
optimized with the NeRF itself. Extensive experiments show that our
jointly-optimized layer decomposition can be applied to multiple backbones
and produces photo-realistic recolored novel-view renderings. We demonstrate
that RecolorNeRF outperforms baseline methods both quantitatively and
qualitatively for color editing even in complex real-world scenes.
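The palette-based idea in the abstract — per-point blending weights over a small set of pure-colored layers, with edits applied by changing palette colors while the weights stay fixed — can be sketched as below. This is a minimal illustration, not the paper's implementation; the names `composite_color`, `blend_weights`, and `palette` are assumptions for the sake of the example.

```python
# Hypothetical sketch of palette-based recoloring, assuming the per-point
# blending weights have already been jointly optimized with the NeRF.
import numpy as np

def composite_color(blend_weights: np.ndarray, palette: np.ndarray) -> np.ndarray:
    """Blend pure-colored palette layers into per-point RGB.

    blend_weights: (N, K) non-negative weights for N points over K layers.
    palette:       (K, 3) one RGB color per layer.
    """
    return blend_weights @ palette  # (N, 3)

# Recoloring = editing a palette entry. Because the blending weights are
# fixed per 3D point, the edit is view-consistent by construction.
rng = np.random.default_rng(0)
weights = rng.dirichlet(np.ones(4), size=5)   # 5 sample points, 4 layers
palette = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0],
                    [1.0, 1.0, 1.0]])
original = composite_color(weights, palette)
palette[0] = [1.0, 0.5, 0.0]                  # turn the red layer orange
recolored = composite_color(weights, palette)
```

Only the renders of points that draw on the edited layer change; the geometry and the weights themselves are untouched.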
Related papers
- Paint Bucket Colorization Using Anime Character Color Design Sheets [72.66788521378864]
We introduce inclusion matching, which allows the network to understand the relationships between segments.
Our network's training pipeline significantly improves performance in both colorization and consecutive frame colorization.
To support our network's training, we have developed a unique dataset named PaintBucket-Character.
arXiv Detail & Related papers (2024-10-25T09:33:27Z)
- IReNe: Instant Recoloring of Neural Radiance Fields [54.94866137102324]
We introduce IReNe, enabling swift, near real-time color editing in NeRF.
We leverage a pre-trained NeRF model and a single training image with user-applied color edits.
This adjustment allows the model to generate new scene views, accurately representing the color changes from the training image.
arXiv Detail & Related papers (2024-05-30T09:30:28Z)
- Learning Inclusion Matching for Animation Paint Bucket Colorization [76.4507878427755]
We introduce a new learning-based inclusion matching pipeline, which directs the network to comprehend the inclusion relationships between segments.
Our method features a two-stage pipeline that integrates a coarse color warping module with an inclusion matching module.
To facilitate the training of our network, we also develop a unique dataset, referred to as PaintBucket-Character.
arXiv Detail & Related papers (2024-03-27T08:32:48Z)
- LAENeRF: Local Appearance Editing for Neural Radiance Fields [4.681790910494339]
LAENeRF is a framework for photorealistic and non-photorealistic appearance editing of NeRFs.
We learn a mapping from expected ray terminations to final output color, which can be supervised by a style loss.
Relying on a single point per ray for our mapping, we limit memory requirements and enable fast optimization.
arXiv Detail & Related papers (2023-12-15T16:23:42Z)
- Learning Naturally Aggregated Appearance for Efficient 3D Editing [94.47518916521065]
We propose to replace the color field with an explicit 2D appearance aggregation, also called canonical image.
To avoid the distortion effect and facilitate convenient editing, we complement the canonical image with a projection field that maps 3D points onto 2D pixels for texture lookup.
Our representation, dubbed AGAP, well supports various ways of 3D editing (e.g., stylization, interactive drawing, and content extraction) with no need of re-optimization.
arXiv Detail & Related papers (2023-12-11T18:59:31Z)
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z)
- PaletteNeRF: Palette-based Color Editing for NeRFs [16.49512200561126]
We propose a simple but effective extension of vanilla NeRF, named PaletteNeRF, to enable efficient color editing on NeRF-represented scenes.
Our method achieves efficient, view-consistent, and artifact-free color editing on a wide range of NeRF-represented scenes.
arXiv Detail & Related papers (2022-12-25T08:01:03Z)
- PaletteNeRF: Palette-based Appearance Editing of Neural Radiance Fields [60.66412075837952]
We present PaletteNeRF, a novel method for appearance editing of neural radiance fields (NeRF) based on 3D color decomposition.
Our method decomposes the appearance of each 3D point into a linear combination of palette-based bases.
We extend our framework with compressed semantic features for semantic-aware appearance editing.
arXiv Detail & Related papers (2022-12-21T00:20:01Z)
- Hierarchical Vectorization for Portrait Images [12.32304366243904]
We propose a novel vectorization method that can automatically convert images into a 3-tier hierarchical representation.
The base layer consists of a set of sparse diffusion curves which characterize salient geometric features and low-frequency colors.
The middle level encodes specular highlights and shadows into large, editable Poisson regions (PRs), allowing the user to directly adjust illumination.
The top level contains two types of pixel-sized PRs for high-frequency residuals and fine details such as pimples and pigmentation.
arXiv Detail & Related papers (2022-05-24T07:58:41Z)
- Flexible Portrait Image Editing with Fine-Grained Control [12.32304366243904]
We develop a new method for portrait image editing, which supports fine-grained editing of geometries, colors, lights and shadows using a single neural network model.
We adopt a novel asymmetric conditional GAN architecture: the generators take the transformed conditional inputs, such as edge maps, color palette, sliders and masks, that can be directly edited by the user.
We demonstrate the effectiveness of our method by evaluating it on the CelebAMask-HQ dataset with a wide range of tasks, including geometry/color/shadow/light editing, hand-drawn sketch to image translation, and color transfer.
arXiv Detail & Related papers (2022-04-04T08:39:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.