Color-NeuS: Reconstructing Neural Implicit Surfaces with Color
- URL: http://arxiv.org/abs/2308.06962v2
- Date: Tue, 19 Dec 2023 14:29:58 GMT
- Title: Color-NeuS: Reconstructing Neural Implicit Surfaces with Color
- Authors: Licheng Zhong, Lixin Yang, Kailin Li, Haoyu Zhen, Mei Han, Cewu Lu
- Abstract summary: We develop a method to reconstruct object surfaces from multi-view images or monocular video.
We remove the view-dependent color from neural volume rendering while retaining volume rendering performance through a relighting network.
Results surpass those of any existing method capable of reconstructing mesh alongside color.
- Score: 46.90825914361547
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The reconstruction of object surfaces from multi-view images or monocular
video is a fundamental issue in computer vision. However, much of the recent
research concentrates on reconstructing geometry through implicit or explicit
methods. In this paper, we shift our focus towards reconstructing mesh in
conjunction with color. We remove the view-dependent color from neural volume
rendering while retaining volume rendering performance through a relighting
network. Mesh is extracted from the signed distance function (SDF) network for
the surface, and color for each surface vertex is drawn from the global color
network. To evaluate our approach, we conceived an in-hand object scanning task
featuring numerous occlusions and dramatic shifts in lighting conditions. We've
gathered several videos for this task, and the results surpass those of any
existing method capable of reconstructing mesh alongside color. Additionally,
our method's performance was assessed using public datasets, including DTU,
BlendedMVS, and OmniObject3D. The results indicated that our method performs
well across all these datasets. Project page:
https://colmar-zlicheng.github.io/color_neus.
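As a concrete reading of the pipeline described above, the sketch below extracts the zero level set of an SDF network with marching cubes and queries a global, view-independent color network at each vertex. It is a minimal illustration, not the authors' code; `sdf_net` and `color_net` are hypothetical PyTorch modules mapping (N, 3) points to SDF values and RGB respectively.

```python
# Minimal sketch (not the authors' code): extract a mesh from an SDF
# network and color each vertex with a view-independent color MLP.
import torch
from skimage import measure

def extract_colored_mesh(sdf_net, color_net, resolution=128, bound=1.0):
    # Dense SDF grid over [-bound, bound]^3.
    xs = torch.linspace(-bound, bound, resolution)
    grid = torch.stack(torch.meshgrid(xs, xs, xs, indexing="ij"), dim=-1)
    with torch.no_grad():
        sdf = sdf_net(grid.reshape(-1, 3)).reshape(resolution, resolution, resolution)
    # Marching cubes at the zero level set gives the surface mesh.
    verts, faces, _, _ = measure.marching_cubes(sdf.cpu().numpy(), level=0.0)
    # Map voxel indices back to world coordinates.
    verts = verts / (resolution - 1) * 2 * bound - bound
    # Per-vertex color from the global (view-independent) color network.
    with torch.no_grad():
        colors = color_net(torch.from_numpy(verts).float()).cpu().numpy()
    return verts, faces, colors
```

At training time, per the abstract, a relighting network adds view-dependent effects back on top of this global color so that volume rendering quality is retained; only the global branch is queried when exporting the colored mesh.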
Related papers
- ASGrasp: Generalizable Transparent Object Reconstruction and Grasping from RGB-D Active Stereo Camera [9.212504138203222]
We propose ASGrasp, a 6-DoF grasp detection network that uses an RGB-D active stereo camera.
Our system distinguishes itself by its ability to directly utilize raw IR and RGB images for transparent object geometry reconstruction.
Our experiments demonstrate that ASGrasp can achieve over 90% success rate for generalizable transparent object grasping.
arXiv Detail & Related papers (2024-05-09T09:44:51Z)
- Total-Decom: Decomposed 3D Scene Reconstruction with Minimal Interaction [51.3632308129838]
We present Total-Decom, a novel method for decomposed 3D reconstruction with minimal human interaction.
Our approach seamlessly integrates the Segment Anything Model (SAM) with hybrid implicit-explicit neural surface representations and a mesh-based region-growing technique for accurate 3D object decomposition.
We extensively evaluate our method on benchmark datasets and demonstrate its potential for downstream applications, such as animation and scene editing.
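The mesh-based region growing mentioned above could look like the following sketch: a breadth-first flood over faces that share an edge, gated by a normal-similarity threshold. This is an illustrative stand-in, not Total-Decom's implementation, and the seeding of regions from SAM masks is assumed.

```python
# Illustrative mesh region growing: expand from seed faces across shared
# edges while adjacent face normals stay within an angular threshold.
from collections import defaultdict, deque
import numpy as np

def grow_region(faces, normals, seeds, cos_thresh=0.9):
    """faces: (F, 3) int array; normals: (F, 3) unit normals; seeds: seed face ids."""
    # Build edge -> incident-faces adjacency.
    edge_faces = defaultdict(list)
    for fi, (a, b, c) in enumerate(faces):
        for e in ((a, b), (b, c), (c, a)):
            edge_faces[tuple(sorted(e))].append(fi)
    region, queue = set(seeds), deque(seeds)
    while queue:
        fi = queue.popleft()
        a, b, c = faces[fi]
        for e in ((a, b), (b, c), (c, a)):
            for nb in edge_faces[tuple(sorted(e))]:
                # Accept a neighbor if its normal agrees with the current face.
                if nb not in region and normals[fi] @ normals[nb] > cos_thresh:
                    region.add(nb)
                    queue.append(nb)
    return region
```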
arXiv Detail & Related papers (2024-03-28T11:12:33Z)
- NeRF-DS: Neural Radiance Fields for Dynamic Specular Objects [63.04781030984006]
Dynamic Neural Radiance Field (NeRF) is a powerful algorithm capable of rendering photo-realistic novel view images from a monocular RGB video of a dynamic scene.
However, such models struggle with moving specular surfaces; we address this limitation by reformulating the neural radiance field function to be conditioned on surface position and orientation in the observation space.
We evaluate our model based on the novel view synthesis quality with a self-collected dataset of different moving specular objects in realistic environments.
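A hedged sketch of what that reformulation can look like: a color head that is additionally conditioned on the observation-space surface position and normal. The module below is illustrative; `SpecularColorHead` and its dimensions are assumptions, not the NeRF-DS architecture.

```python
# Sketch of a color branch conditioned on observation-space surface
# position and orientation, in addition to the view direction.
import torch
import torch.nn as nn

class SpecularColorHead(nn.Module):
    def __init__(self, feat_dim=256, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3 + 3 + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, feat, x_obs, n_obs, view_dir):
        # feat: (R, feat_dim) scene features; x_obs/n_obs/view_dir: (R, 3).
        return self.mlp(torch.cat([feat, x_obs, n_obs, view_dir], dim=-1))
```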
arXiv Detail & Related papers (2023-03-25T11:03:53Z)
- Learning a Room with the Occ-SDF Hybrid: Signed Distance Function Mingled with Occupancy Aids Scene Representation [46.635542063913185]
Implicit neural rendering, which uses signed distance function representation with geometric priors, has led to impressive progress in the surface reconstruction of large-scale scenes.
We conduct experiments to identify limitations of the original color rendering loss and priors-embedded SDF scene representation.
We propose a feature-based color rendering loss that utilizes non-zero feature values to bring back optimization signals.
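One plausible reading of that loss, sketched below under stated assumptions: per-sample features are composited with the same volume-rendering weights as color and supervised in feature space, so rays whose target color is near zero still receive gradients. The source of the target features is an assumption here, not taken from the paper.

```python
# Hedged sketch of a feature-based rendering loss: alpha-composite
# per-sample features with the volume-rendering weights and supervise
# them against per-pixel target features.
import torch

def feature_render_loss(weights, point_feats, target_feats):
    """weights: (R, S) volume-rendering weights; point_feats: (R, S, C);
    target_feats: (R, C) features for each ray's pixel."""
    rendered = (weights.unsqueeze(-1) * point_feats).sum(dim=1)  # (R, C)
    return torch.abs(rendered - target_feats).mean()             # L1 in feature space
```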
arXiv Detail & Related papers (2023-03-16T08:34:02Z)
- Generative Scene Synthesis via Incremental View Inpainting using RGBD Diffusion Models [39.23531919945332]
In this work, we present a new solution that sequentially generates novel RGBD views along a camera trajectory.
Each rendered RGBD view is later back-projected as a partial surface and merged into the intermediate mesh.
The use of the intermediate mesh and camera projection helps resolve the stubborn problem of multi-view inconsistency.
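The back-projection step can be made concrete with a standard pinhole model; the sketch below is generic geometry rather than the paper's code, and assumes metric depth with intrinsics `K` and a camera-to-world pose `T`.

```python
# Minimal back-projection sketch: lift a rendered RGBD view to a
# world-space point cloud, ready to be merged into the intermediate mesh.
import numpy as np

def backproject(depth, K, T):
    """depth: (H, W) metric depth; K: (3, 3) intrinsics; T: (4, 4) cam-to-world."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    cam = (np.linalg.inv(K) @ pix.T).T * depth.reshape(-1, 1)  # camera frame
    cam_h = np.concatenate([cam, np.ones((cam.shape[0], 1))], axis=1)
    world = (T @ cam_h.T).T[:, :3]                             # world frame
    return world
```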
arXiv Detail & Related papers (2022-12-12T15:50:00Z)
- Efficient Textured Mesh Recovery from Multiple Views with Differentiable Rendering [8.264851594332677]
We propose an efficient coarse-to-fine approach to recover the textured mesh from multi-view images.
We optimize the shape geometry by minimizing the difference between the depth rendered from the mesh and the depth predicted by a learning-based multi-view stereo algorithm.
In contrast to the implicit neural representation on shape and color, we introduce a physically based inverse rendering scheme to jointly estimate the lighting and reflectance of the objects.
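As a toy illustration of joint lighting and reflectance estimation (far simpler than the paper's physically based model), one can optimize per-vertex albedo and a single directional light under Lambertian shading; every name below is a hypothetical stand-in.

```python
# Toy inverse-rendering objective: Lambertian shading of per-vertex albedo
# under one directional light, compared against observed vertex colors.
import torch

def lambertian_loss(albedo, light_dir, light_rgb, normals, observed):
    """albedo: (V, 3); light_dir: (3,); light_rgb: (3,); normals: (V, 3) unit."""
    l = light_dir / light_dir.norm()
    shading = (normals @ l).clamp(min=0.0).unsqueeze(-1)  # (V, 1) n.l term
    rendered = albedo * shading * light_rgb               # Lambertian color
    return ((rendered - observed) ** 2).mean()
```

Optimizing `albedo`, `light_dir`, and `light_rgb` jointly with any PyTorch optimizer against this loss gives the flavor of the estimation, without the full BRDF and visibility terms a real inverse renderer needs.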
arXiv Detail & Related papers (2022-05-25T03:33:55Z)
- High-resolution Iterative Feedback Network for Camouflaged Object Detection [128.893782016078]
Spotting camouflaged objects that are visually assimilated into the background is tricky for object detection algorithms.
We aim to extract the high-resolution texture details to avoid the detail degradation that causes blurred vision in edges and boundaries.
We introduce a novel HitNet to refine the low-resolution representations by high-resolution features in an iterative feedback manner.
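The iterative feedback idea can be sketched as follows: upsample the low-resolution features, fuse them with high-resolution features, and feed the result back for the next iteration. This schematic module is an assumption for illustration, not the HitNet architecture.

```python
# Schematic iterative feedback refinement: repeatedly fuse upsampled
# low-res features with high-res features and feed the result back.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeedbackRefiner(nn.Module):
    def __init__(self, lo_ch=64, hi_ch=64):
        super().__init__()
        self.fuse = nn.Conv2d(lo_ch + hi_ch, lo_ch, kernel_size=3, padding=1)

    def forward(self, lo_feat, hi_feat, iters=3):
        for _ in range(iters):
            up = F.interpolate(lo_feat, size=hi_feat.shape[-2:],
                               mode="bilinear", align_corners=False)
            fused = torch.relu(self.fuse(torch.cat([up, hi_feat], dim=1)))
            # Feed the refined features back as the next low-res input.
            lo_feat = F.interpolate(fused, size=lo_feat.shape[-2:],
                                    mode="bilinear", align_corners=False)
        return fused
```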
arXiv Detail & Related papers (2022-03-22T11:20:21Z)
- NeRS: Neural Reflectance Surfaces for Sparse-view 3D Reconstruction in the Wild [80.09093712055682]
We introduce a surface analog of implicit models called Neural Reflectance Surfaces (NeRS).
NeRS learns a neural shape representation of a closed surface that is diffeomorphic to a sphere, guaranteeing water-tight reconstructions.
We demonstrate that surface-based neural reconstructions enable learning from such data, outperforming volumetric neural rendering-based reconstructions.
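A minimal sketch of a sphere-based surface parameterization, assuming an MLP that offsets points on the unit sphere: any continuous output remains a closed surface, though this toy does not by itself enforce the diffeomorphism that NeRS guarantees.

```python
# Sketch of a sphere-parameterized shape network: points on the unit
# sphere are mapped to surface points by a learned offset.
import torch
import torch.nn as nn

class SphereShape(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, sphere_pts):
        # sphere_pts: (N, 3) unit vectors; a continuous map of the sphere
        # yields a closed (hence watertight) surface.
        return sphere_pts + self.mlp(sphere_pts)
```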
arXiv Detail & Related papers (2021-10-14T17:59:58Z)
- Neural RGB-D Surface Reconstruction [15.438678277705424]
Methods that learn a neural radiance field have shown impressive image synthesis results, but the underlying geometry representation is only a coarse approximation of the real geometry.
We demonstrate how depth measurements can be incorporated into the radiance field formulation to produce more detailed and complete reconstruction results.
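A common way to realize this, sketched under the assumption of standard volume-rendering weights: render an expected depth per ray and penalize its deviation from the sensor depth alongside the color loss. The weighting `lam` and the validity masking are illustrative choices, not the paper's settings.

```python
# Sketch of depth-supervised radiance field training: combine the usual
# photometric loss with an expected-depth term against RGB-D sensor depth.
import torch

def rgbd_loss(weights, z_vals, colors, gt_rgb, gt_depth, lam=0.1):
    """weights: (R, S); z_vals: (R, S) sample depths; colors: (R, S, 3)."""
    rgb = (weights.unsqueeze(-1) * colors).sum(dim=1)  # rendered color
    depth = (weights * z_vals).sum(dim=1)              # expected ray depth
    valid = gt_depth > 0                               # mask sensor holes
    return (((rgb - gt_rgb) ** 2).mean()
            + lam * ((depth - gt_depth)[valid] ** 2).mean())
```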
arXiv Detail & Related papers (2021-04-09T18:00:01Z)
- Unsupervised Learning of 3D Object Categories from Videos in the Wild [75.09720013151247]
We focus on learning a model from multiple views of a large collection of object instances.
We propose a new neural network design, called warp-conditioned ray embedding (WCR), which significantly improves reconstruction.
Our evaluation demonstrates performance improvements over several deep monocular reconstruction baselines on existing benchmarks.
arXiv Detail & Related papers (2021-03-30T17:57:01Z)