SegNeRF: 3D Part Segmentation with Neural Radiance Fields
- URL: http://arxiv.org/abs/2211.11215v2
- Date: Tue, 22 Nov 2022 06:11:04 GMT
- Title: SegNeRF: 3D Part Segmentation with Neural Radiance Fields
- Authors: Jesus Zarzar, Sara Rojas, Silvio Giancola, and Bernard Ghanem
- Abstract summary: SegNeRF is a neural field representation that integrates a semantic field along with the usual radiance field.
SegNeRF is capable of simultaneously predicting geometry, appearance, and semantic information from posed images, even for unseen objects.
SegNeRF is able to generate an explicit 3D model from a single image of an object taken in the wild, with its corresponding part segmentation.
- Score: 63.12841224024818
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in Neural Radiance Fields (NeRF) boast impressive
performance on generative tasks such as novel view synthesis and 3D
reconstruction. Methods based on neural radiance fields are able to represent
the 3D world implicitly by relying exclusively on posed images. Yet, they have
seldom been explored in the realm of discriminative tasks such as 3D part
segmentation. In this work, we attempt to bridge that gap by proposing SegNeRF:
a neural field representation that integrates a semantic field along with the
usual radiance field. SegNeRF inherits from previous works the ability to
perform novel view synthesis and 3D reconstruction, and enables 3D part
segmentation from a few images. Our extensive experiments on PartNet show that
SegNeRF is capable of simultaneously predicting geometry, appearance, and
semantic information from posed images, even for unseen objects. The predicted
semantic fields allow SegNeRF to achieve an average mIoU of $\textbf{30.30\%}$
for 2D novel view segmentation, and $\textbf{37.46\%}$ for 3D part segmentation,
boasting competitive performance against point-based methods by using only a
few posed images. Additionally, SegNeRF is able to generate an explicit 3D
model from a single image of an object taken in the wild, with its
corresponding part segmentation.
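As a reading aid, here is a minimal sketch of the core idea described above, assuming a PyTorch-style implementation: a NeRF-style MLP augmented with a per-point semantic head whose part logits are composited along each ray with the same volume-rendering weights used for color. The class, function, and parameter names (SemanticNeRF, composite, n_parts, the encoding sizes) are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of a joint radiance + semantic field (illustrative only;
# not the authors' implementation). All names and sizes are assumptions.
import torch
import torch.nn as nn


class SemanticNeRF(nn.Module):
    def __init__(self, n_parts: int, d_pos: int = 63, d_dir: int = 27, width: int = 256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(d_pos, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(width, 1)        # volume density
        self.rgb_head = nn.Sequential(               # view-dependent color
            nn.Linear(width + d_dir, width // 2), nn.ReLU(),
            nn.Linear(width // 2, 3), nn.Sigmoid(),
        )
        self.sem_head = nn.Linear(width, n_parts)    # view-independent part logits

    def forward(self, x_enc, d_enc):
        h = self.trunk(x_enc)
        sigma = torch.relu(self.sigma_head(h))
        rgb = self.rgb_head(torch.cat([h, d_enc], dim=-1))
        return sigma, rgb, self.sem_head(h)


def composite(sigma, rgb, sem_logits, deltas):
    """Render color and semantic logits with the same NeRF weights.

    Shapes: sigma (R, S, 1), rgb (R, S, 3), sem_logits (R, S, P), deltas (R, S, 1).
    """
    alpha = 1.0 - torch.exp(-sigma * deltas)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=1), dim=1
    )[:, :-1]                                         # accumulated transmittance
    weights = alpha * trans                           # (R, S, 1)
    rgb_2d = (weights * rgb).sum(dim=1)               # (R, 3) rendered pixel colors
    sem_2d = (weights * sem_logits).sum(dim=1)        # (R, P) rendered part logits
    return rgb_2d, sem_2d


if __name__ == "__main__":
    model = SemanticNeRF(n_parts=4)
    x_enc = torch.randn(1024, 64, 63)                 # rays x samples x encoded position
    d_enc = torch.randn(1024, 64, 27)                 # encoded viewing direction
    deltas = torch.full((1024, 64, 1), 0.01)          # sample spacing along each ray
    rgb_2d, sem_2d = composite(*model(x_enc, d_enc), deltas)
```

Under this sketch, the per-ray logits could be supervised with 2D part masks from the posed training views, while querying the semantic head on points extracted from the density field would give the 3D part labels evaluated in the abstract.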
Related papers
- GSNeRF: Generalizable Semantic Neural Radiance Fields with Enhanced 3D Scene Understanding [30.951440204237166]
We introduce a Generalizable Semantic Neural Radiance Field (GSNeRF), which takes image semantics into the synthesis process.
Our GSNeRF is composed of two stages: Semantic Geo-Reasoning and Depth-Guided Visual rendering.
arXiv Detail & Related papers (2024-03-06T10:55:50Z)
- Geometry Aware Field-to-field Transformations for 3D Semantic Segmentation [48.307734886370014]
We present a novel approach to perform 3D semantic segmentation solely from 2D supervision by leveraging Neural Radiance Fields (NeRFs).
By extracting features along a surface point cloud, we achieve a compact representation of the scene which is sample-efficient and conducive to 3D reasoning.
arXiv Detail & Related papers (2023-10-08T11:48:19Z)
- Instance Neural Radiance Field [62.152611795824185]
This paper presents one of the first learning-based NeRF 3D instance segmentation pipelines, dubbed as Instance Neural Radiance Field.
We adopt a 3D proposal-based mask prediction network on the sampled volumetric features from NeRF.
Our method is also one of the first to achieve such results in pure inference.
arXiv Detail & Related papers (2023-04-10T05:49:24Z)
- FeatureNeRF: Learning Generalizable NeRFs by Distilling Foundation Models [21.523836478458524]
Recent works on generalizable NeRFs have shown promising results on novel view synthesis from single or few images.
We propose a novel framework named FeatureNeRF to learn generalizable NeRFs by distilling pre-trained vision models.
Our experiments demonstrate the effectiveness of FeatureNeRF as a generalizable 3D semantic feature extractor.
arXiv Detail & Related papers (2023-03-22T17:57:01Z)
- Semantic 3D-aware Portrait Synthesis and Manipulation Based on Compositional Neural Radiance Field [55.431697263581626]
We propose a Compositional Neural Radiance Field (CNeRF) for semantic 3D-aware portrait synthesis and manipulation.
CNeRF divides the image into semantic regions, learns an independent neural radiance field for each region, and finally fuses them to render the complete image.
Compared to state-of-the-art 3D-aware GAN methods, our approach enables fine-grained semantic region manipulation, while maintaining high-quality 3D-consistent synthesis.
arXiv Detail & Related papers (2023-02-03T07:17:46Z)
- ONeRF: Unsupervised 3D Object Segmentation from Multiple Views [59.445957699136564]
ONeRF is a method that automatically segments and reconstructs object instances in 3D from multi-view RGB images without any additional manual annotations.
The segmented 3D objects are represented using separate Neural Radiance Fields (NeRFs) which allow for various 3D scene editing and novel view rendering.
arXiv Detail & Related papers (2022-11-22T06:19:37Z)
- Unsupervised Multi-View Object Segmentation Using Radiance Field Propagation [55.9577535403381]
We present a novel approach to segmenting objects in 3D during reconstruction given only unlabeled multi-view images of a scene.
The core of our method is a novel propagation strategy for individual objects' radiance fields with a bidirectional photometric loss.
To the best of our knowledge, radiance field propagation (RFP) is the first unsupervised approach to 3D scene object segmentation for neural radiance fields (NeRF).
arXiv Detail & Related papers (2022-10-02T11:14:23Z)
- PeRFception: Perception using Radiance Fields [72.99583614735545]
We create the first large-scale implicit representation datasets for perception tasks, called PeRFception.
They achieve a significant memory compression rate (96.4%) relative to the original data, while containing both 2D and 3D information in a unified form.
We construct the classification and segmentation models that directly take as input this implicit format and also propose a novel augmentation technique to avoid overfitting on backgrounds of images.
arXiv Detail & Related papers (2022-08-24T13:32:46Z)