ObjectSDF++: Improved Object-Compositional Neural Implicit Surfaces
- URL: http://arxiv.org/abs/2308.07868v2
- Date: Thu, 17 Aug 2023 10:50:38 GMT
- Title: ObjectSDF++: Improved Object-Compositional Neural Implicit Surfaces
- Authors: Qianyi Wu, Kaisiyuan Wang, Kejie Li, Jianmin Zheng, Jianfei Cai
- Abstract summary: In recent years, neural implicit surface reconstruction has emerged as a popular paradigm for multi-view 3D reconstruction.
Previous work ObjectSDF introduced a nice framework of object-compositional neural implicit surfaces.
We propose a new framework called ObjectSDF++ to overcome the limitations of ObjectSDF.
- Score: 40.489487738598825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, neural implicit surface reconstruction has emerged as a
popular paradigm for multi-view 3D reconstruction. Unlike traditional
multi-view stereo approaches, the neural implicit surface-based methods
leverage neural networks to represent 3D scenes as signed distance functions
(SDFs). However, they tend to disregard the reconstruction of individual
objects within the scene, which limits their performance and practical
applications. To address this issue, previous work ObjectSDF introduced a nice
framework of object-compositional neural implicit surfaces, which utilizes 2D
instance masks to supervise individual object SDFs. In this paper, we propose a
new framework called ObjectSDF++ to overcome the limitations of ObjectSDF.
First, in contrast to ObjectSDF whose performance is primarily restricted by
its converted semantic field, the core component of our model is an
occlusion-aware object opacity rendering formulation that directly
volume-renders object opacity to be supervised with instance masks. Second, we
design a novel regularization term for object distinction, which can
effectively mitigate the issue that ObjectSDF may result in unexpected
reconstruction in invisible regions due to the lack of constraint to prevent
collisions. Our extensive experiments demonstrate that our novel framework not
only produces superior object reconstruction results but also significantly
improves the quality of scene reconstruction. Code and more resources can be
found in \url{https://qianyiwu.github.io/objectsdf++}
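The abstract's core idea is volume-rendering each object's opacity so that it can be supervised directly with 2D instance masks, while occlusion between objects is respected. The sketch below is a minimal, hedged illustration of one plausible such formulation, not the paper's exact equations: each object SDF is mapped to a density (a VolSDF-style Laplace-CDF mapping is assumed), transmittance is accumulated from the *scene* density so foreground objects suppress occluded ones, and each sample's rendering weight is split among objects by their density share.

```python
import numpy as np

def sdf_to_density(sdf, beta=0.1):
    """Laplace-CDF mapping from signed distance to volume density (VolSDF-style).
    Negative SDF (inside the surface) yields high density, positive yields low."""
    return np.where(sdf > 0,
                    0.5 * np.exp(-sdf / beta),
                    1.0 - 0.5 * np.exp(sdf / beta)) / beta

def render_object_opacities(object_sdfs, deltas, beta=0.1):
    """Occlusion-aware opacities for K objects along a single ray.

    object_sdfs: (K, N) per-object SDF values at N samples along the ray.
    deltas:      (N,)   spacing between consecutive samples.
    Returns:     (K,)   one opacity per object, suitable for supervision
                        against a 2D instance mask at this pixel.
    """
    obj_density = sdf_to_density(object_sdfs, beta)           # (K, N)
    scene_density = obj_density.sum(axis=0)                   # (N,)
    # Transmittance is computed from the scene density, so any object
    # in front of a sample occludes objects behind it.
    tau = np.cumsum(scene_density * deltas)
    T = np.exp(-np.concatenate([[0.0], tau[:-1]]))            # (N,)
    alpha = 1.0 - np.exp(-scene_density * deltas)             # (N,)
    # Distribute each sample's weight among objects by density share.
    share = obj_density / np.maximum(scene_density, 1e-12)    # (K, N)
    weights = T * alpha * share                               # (K, N)
    return weights.sum(axis=1)                                # (K,)
```

With this construction the opacities of all objects sum to at most 1 along a ray, and an object fully hidden behind another receives near-zero opacity, which is exactly the behavior an instance-mask loss needs.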
Related papers
- High-Fidelity Mask-free Neural Surface Reconstruction for Virtual Reality [6.987660269386849]
Hi-NeuS is a novel rendering-based framework for neural implicit surface reconstruction.
Our approach has been validated through NeuS and its variant Neuralangelo.
arXiv Detail & Related papers (2024-09-20T02:07:49Z)
- NeuRodin: A Two-stage Framework for High-Fidelity Neural Surface Reconstruction [63.85586195085141]
Signed Distance Function (SDF)-based volume rendering has demonstrated significant capabilities in surface reconstruction.
We introduce NeuRodin, a novel two-stage neural surface reconstruction framework.
NeuRodin achieves high-fidelity surface reconstruction and retains the flexible optimization characteristics of density-based methods.
arXiv Detail & Related papers (2024-08-19T17:36:35Z)
- Iterative Superquadric Recomposition of 3D Objects from Multiple Views [77.53142165205283]
We propose a framework, ISCO, to recompose an object using 3D superquadrics as semantic parts directly from 2D views.
Our framework iteratively adds new superquadrics wherever the reconstruction error is high.
It provides consistently more accurate 3D reconstructions, even from images in the wild.
arXiv Detail & Related papers (2023-09-05T10:21:37Z)
- Looking Through the Glass: Neural Surface Reconstruction Against High Specular Reflections [72.45512144682554]
We present a novel surface reconstruction framework, NeuS-HSR, based on implicit neural rendering.
In NeuS-HSR, the object surface is parameterized as an implicit signed distance function.
We show that NeuS-HSR outperforms state-of-the-art approaches for accurate and robust target surface reconstruction against HSR.
arXiv Detail & Related papers (2023-04-18T02:34:58Z)
- Learning a Room with the Occ-SDF Hybrid: Signed Distance Function Mingled with Occupancy Aids Scene Representation [46.635542063913185]
Implicit neural rendering, which uses signed distance function representation with geometric priors, has led to impressive progress in the surface reconstruction of large-scale scenes.
We conduct experiments to identify limitations of the original color rendering loss and priors-embedded SDF scene representation.
We propose a feature-based color rendering loss that utilizes non-zero feature values to bring back optimization signals.
arXiv Detail & Related papers (2023-03-16T08:34:02Z)
- Single-view 3D Mesh Reconstruction for Seen and Unseen Categories [69.29406107513621]
Single-view 3D Mesh Reconstruction is a fundamental computer vision task that aims at recovering 3D shapes from single-view RGB images.
This paper tackles Single-view 3D Mesh Reconstruction, to study the model generalization on unseen categories.
We propose an end-to-end two-stage network, GenMesh, to break the category boundaries in reconstruction.
arXiv Detail & Related papers (2022-08-04T14:13:35Z)
- Object-Compositional Neural Implicit Surfaces [45.274466719163925]
The neural implicit representation has shown its effectiveness in novel view synthesis and high-quality 3D reconstruction from multi-view images.
This paper proposes a novel framework, ObjectSDF, to build an object-compositional neural implicit representation with high fidelity in 3D reconstruction and object representation.
arXiv Detail & Related papers (2022-07-20T06:38:04Z)
- Unsupervised Learning of 3D Object Categories from Videos in the Wild [75.09720013151247]
We focus on learning a model from multiple views of a large collection of object instances.
We propose a new neural network design, called warp-conditioned ray embedding (WCR), which significantly improves reconstruction.
Our evaluation demonstrates performance improvements over several deep monocular reconstruction baselines on existing benchmarks.
arXiv Detail & Related papers (2021-03-30T17:57:01Z)
- Reconstruct, Rasterize and Backprop: Dense shape and pose estimation from a single image [14.9851111159799]
This paper presents a new system to obtain dense object reconstructions along with 6-DoF poses from a single image.
We leverage recent advances in differentiable rendering to close the loop with 3D reconstruction in the camera frame.
arXiv Detail & Related papers (2020-04-25T20:53:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.