InstantSticker: Realistic Decal Blending via Disentangled Object Reconstruction
- URL: http://arxiv.org/abs/2504.06620v1
- Date: Wed, 09 Apr 2025 06:36:36 GMT
- Title: InstantSticker: Realistic Decal Blending via Disentangled Object Reconstruction
- Authors: Yi Zhang, Xiaoyang Huang, Yishun Dou, Yue Shi, Rui Shi, Ye Chen, Bingbing Ni, Wenjun Zhang
- Abstract summary: We present InstantSticker, a reconstruction pipeline based on Image-Based Lighting (IBL), which focuses on highly realistic decal blending. For instant editing, we utilize the Disney BRDF model, explicitly defining material colors with a 3-channel diffuse albedo. In our experiments, we introduce the Ratio Variance Warping (RVW) metric to evaluate the local geometric warping of the decal area.
- Score: 43.9394028022126
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present InstantSticker, a disentangled reconstruction pipeline based on Image-Based Lighting (IBL), which focuses on highly realistic decal blending, simulates stickers attached to the reconstructed surface, and allows for instant editing and real-time rendering. To achieve a stereoscopic impression of the decal, we introduce a shadow factor into IBL, which can be adaptively optimized during training. This allows the shadow brightness of surfaces to be accurately decomposed rather than baked into the diffuse color, ensuring that the edited texture exhibits authentic shading. To address the warping and blurriness seen in previous methods, we apply As-Rigid-As-Possible (ARAP) parameterization to pre-unfold a specified area of the mesh and use the local UV mapping combined with a neural texture map to enhance the ability to express high-frequency details in that area. For instant editing, we utilize the Disney BRDF model, explicitly defining material colors with a 3-channel diffuse albedo. This enables instant replacement of albedo RGB values during the editing process, avoiding the prolonged optimization required in previous approaches. In our experiments, we introduce the Ratio Variance Warping (RVW) metric to evaluate the local geometric warping of the decal area. Extensive experimental results demonstrate that our method surpasses previous decal blending methods in terms of editing quality, editing speed and rendering speed, achieving state-of-the-art results.
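The instant-editing idea above can be illustrated with a minimal sketch in Python/NumPy. This is not the paper's implementation; the buffer names, shapes, and the simple shading recombination below are assumptions, used only to show why an explicit 3-channel diffuse albedo, kept separate from a learned shadow factor and the IBL irradiance, lets a decal be pasted by directly overwriting albedo RGB values without any re-optimization.

```python
import numpy as np

# Hypothetical per-texel buffers from a disentangled reconstruction
# (names and shapes are assumptions, not the paper's actual data layout):
#   albedo        -- (H, W, 3) explicit 3-channel diffuse albedo
#   shadow_factor -- (H, W, 1) learned shadow term from the IBL decomposition
#   irradiance    -- (H, W, 3) diffuse irradiance from the environment map
#   specular      -- (H, W, 3) view-dependent specular term from the BRDF

def shade(albedo, shadow_factor, irradiance, specular):
    """Recombine the disentangled terms into a shaded image.

    Shading lives in shadow_factor/irradiance/specular, so replacing
    albedo texels leaves shadows and highlights intact."""
    return albedo * shadow_factor * irradiance + specular

def paste_decal(albedo, decal_rgb, mask):
    """Instant edit: overwrite masked texels of the albedo map with decal colors.

    `mask` is a boolean (H, W) array marking the pre-unfolded decal region."""
    edited = albedo.copy()
    edited[mask] = decal_rgb[mask]
    return edited

# Toy usage with random data standing in for reconstructed buffers.
H, W = 64, 64
albedo = np.random.rand(H, W, 3)
shadow_factor = np.random.rand(H, W, 1)
irradiance = np.random.rand(H, W, 3)
specular = 0.05 * np.random.rand(H, W, 3)

decal_rgb = np.zeros((H, W, 3)); decal_rgb[..., 0] = 1.0     # a red sticker
mask = np.zeros((H, W), dtype=bool); mask[16:48, 16:48] = True

image_before = shade(albedo, shadow_factor, irradiance, specular)
image_after = shade(paste_decal(albedo, decal_rgb, mask),
                    shadow_factor, irradiance, specular)
```

Because the shadow factor and irradiance are decomposed from the diffuse color rather than baked into it, the pasted decal in this sketch inherits the surface's original shading instead of appearing flat.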
Related papers
- Real-time Neural Rendering of LiDAR Point Clouds [0.2621434923709917]
A naive projection of the point cloud to the output view using 1x1 pixels is fast and retains the available detail, but also results in unintelligible renderings as background points leak in between the foreground pixels. A deep convolutional model in the form of a U-Net is used to transform these projections into a realistic result. We also describe a method to generate synthetic training data to deal with imperfectly-aligned ground truth images.
arXiv Detail & Related papers (2025-02-17T10:01:13Z) - EVER: Exact Volumetric Ellipsoid Rendering for Real-time View Synthesis [72.53316783628803]
We present Exact Volumetric Ellipsoid Rendering (EVER), a method for real-time differentiable emission-only volume rendering.
Unlike the recent rasterization-based approach of 3D Gaussian Splatting (3DGS), our primitive-based representation allows for exact volume rendering.
We show that our method is more accurate, with fewer blending issues, than 3DGS and follow-up work on view-consistent rendering.
arXiv Detail & Related papers (2024-10-02T17:59:09Z) - DeferredGS: Decoupled and Editable Gaussian Splatting with Deferred Shading [50.331929164207324]
We introduce DeferredGS, a method for decoupling and editing the Gaussian splatting representation using deferred shading.
Both qualitative and quantitative experiments demonstrate the superior performance of DeferredGS in novel view and editing tasks.
arXiv Detail & Related papers (2024-04-15T01:58:54Z) - Ternary-Type Opacity and Hybrid Odometry for RGB NeRF-SLAM [58.736472371951955]
We introduce a ternary-type opacity (TT) model, which categorizes points on a ray intersecting a surface into three regions: before, on, and behind the surface.
This enables a more accurate rendering of depth, subsequently improving the performance of image warping techniques.
Our integrated approach of TT and HO achieves state-of-the-art performance on synthetic and real-world datasets.
arXiv Detail & Related papers (2023-12-20T18:03:17Z) - LAENeRF: Local Appearance Editing for Neural Radiance Fields [4.681790910494339]
LAENeRF is a framework for photorealistic and non-photorealistic appearance editing of NeRFs.
We learn a mapping from expected ray terminations to final output color, which can be supervised by a style loss.
Relying on a single point per ray for our mapping, we limit memory requirements and enable fast optimization.
arXiv Detail & Related papers (2023-12-15T16:23:42Z) - Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z) - RecolorNeRF: Layer Decomposed Radiance Fields for Efficient Color Editing of 3D Scenes [21.284044381058575]
We present RecolorNeRF, a novel user-friendly color editing approach for neural radiance fields.
Our key idea is to decompose the scene into a set of pure-colored layers, forming a palette.
To support efficient palette-based editing, the color of each layer needs to be as representative as possible.
arXiv Detail & Related papers (2023-01-19T09:18:06Z) - PaletteNeRF: Palette-based Appearance Editing of Neural Radiance Fields [60.66412075837952]
We present PaletteNeRF, a novel method for appearance editing of neural radiance fields (NeRF) based on 3D color decomposition.
Our method decomposes the appearance of each 3D point into a linear combination of palette-based bases.
We extend our framework with compressed semantic features for semantic-aware appearance editing.
arXiv Detail & Related papers (2022-12-21T00:20:01Z) - Hierarchical Vectorization for Portrait Images [12.32304366243904]
We propose a novel vectorization method that can automatically convert images into a 3-tier hierarchical representation.
The base layer consists of a set of sparse diffusion curves which characterize salient geometric features and low-frequency colors.
The middle level encodes specular highlights and shadows into large, editable Poisson regions (PRs) and allows the user to directly adjust illumination.
The top level contains two types of pixel-sized PRs for high-frequency residuals and fine details such as pimples and pigmentation.
arXiv Detail & Related papers (2022-05-24T07:58:41Z) - DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable rendering.
In this work, we propose DIBR++, a hybrid differentiable renderer which supports these effects by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIBR++ is highly performant due to its compact and expressive model.
arXiv Detail & Related papers (2021-10-30T01:59:39Z)