A One Stop 3D Target Reconstruction and multilevel Segmentation Method
- URL: http://arxiv.org/abs/2308.06974v1
- Date: Mon, 14 Aug 2023 07:12:31 GMT
- Title: A One Stop 3D Target Reconstruction and multilevel Segmentation Method
- Authors: Jiexiong Xu, Weikun Zhao, Zhiyan Tang and Xiangchao Gan
- Abstract summary: We propose an open-source one-stop 3D target reconstruction and multilevel segmentation framework (OSTRA)
OSTRA performs segmentation on 2D images, tracks multiple instances with segmentation labels in the image sequence, and then reconstructs labelled 3D objects or multiple parts with Multi-View Stereo (MVS) or RGBD-based 3D reconstruction methods.
Our method opens up a new avenue for reconstructing 3D targets embedded with rich multi-scale segmentation information in complex scenes.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: 3D object reconstruction and multilevel segmentation are fundamental to
computer vision research. Existing algorithms usually perform 3D scene
reconstruction and target object segmentation independently, and performance is
not fully guaranteed due to the challenge of 3D segmentation. Here we propose an
open-source one-stop 3D target reconstruction and multilevel segmentation
framework (OSTRA), which performs segmentation on 2D images, tracks multiple
instances with segmentation labels across the image sequence, and then
reconstructs labelled 3D objects or multiple parts with Multi-View Stereo (MVS)
or RGBD-based 3D reconstruction methods. We extend object tracking and 3D
reconstruction algorithms to support continuous segmentation labels, leveraging
advances in 2D image segmentation for 3D object segmentation, especially the
Segment Anything Model (SAM), which uses a pretrained neural network without
additional training for new scenes. OSTRA supports the most popular 3D object
representations, including point clouds, meshes and voxels, and achieves high
performance for semantic segmentation, instance segmentation and part
segmentation on several 3D datasets. It even surpasses manual segmentation in
scenes with complex structures and occlusions. Our method opens up a new avenue
for reconstructing 3D targets embedded with rich multi-scale segmentation
information in complex scenes. OSTRA is available from
https://github.com/ganlab/OSTRA.
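The tracking step in the abstract, carrying each instance's segmentation label consistently across the image sequence so that the later 3D reconstruction can attach labels to geometry, can be sketched as follows. This is an illustrative minimal sketch, not the actual OSTRA implementation: it assumes masks are represented as pixel sets and matches instances between consecutive frames by mask IoU, a simplification of real multi-object trackers.

```python
def iou(a, b):
    """IoU between two masks given as sets of (row, col) pixels."""
    inter = len(a & b)
    union = len(a | b)
    return inter / union if union else 0.0

def track_labels(frames, threshold=0.5):
    """Assign a persistent instance label to each mask in each frame.

    frames: list of frames, each a list of pixel-set masks.
    Returns a list of label lists, aligned with `frames`.
    """
    next_label = 0
    prev = []        # (label, mask) pairs from the previous frame
    labelled = []
    for masks in frames:
        frame_labels = []
        current = []
        for mask in masks:
            # Match against the best-overlapping mask of the previous frame;
            # reuse its label on a good match, otherwise start a new instance.
            best = max(prev, key=lambda lm: iou(lm[1], mask), default=None)
            if best is not None and iou(best[1], mask) >= threshold:
                label = best[0]
            else:
                label = next_label
                next_label += 1
            frame_labels.append(label)
            current.append((label, mask))
        labelled.append(frame_labels)
        prev = current
    return labelled
```

For example, a mask that drifts slightly between two frames keeps label 0, while a mask appearing in a new location receives a fresh label. In the full pipeline, these per-pixel labels would then be carried into MVS or RGBD reconstruction so each 3D point inherits the label of its source pixels.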
Related papers
- SAMPart3D: Segment Any Part in 3D Objects [23.97392239910013]
3D part segmentation is a crucial and challenging task in 3D perception, playing a vital role in applications such as robotics, 3D generation, and 3D editing.
Recent methods harness the powerful Vision Language Models (VLMs) for 2D-to-3D knowledge distillation, achieving zero-shot 3D part segmentation.
In this work, we introduce SAMPart3D, a scalable zero-shot 3D part segmentation framework that segments any 3D object into semantic parts at multiple granularities.
arXiv Detail & Related papers (2024-11-11T17:59:10Z)
- 3D-GRES: Generalized 3D Referring Expression Segmentation [77.10044505645064]
3D Referring Expression (3D-RES) is dedicated to segmenting a specific instance within a 3D space based on a natural language description.
Generalized 3D Referring Expression (3D-GRES) extends the capability to segment any number of instances based on natural language instructions.
arXiv Detail & Related papers (2024-07-30T08:59:05Z)
- MeshSegmenter: Zero-Shot Mesh Semantic Segmentation via Texture Synthesis [27.703204488877038]
MeshSegmenter is a framework designed for zero-shot 3D semantic segmentation.
It delivers accurate 3D segmentation across diverse meshes and segment descriptions.
arXiv Detail & Related papers (2024-07-18T16:50:59Z)
- 3x2: 3D Object Part Segmentation by 2D Semantic Correspondences [33.99493183183571]
We propose to leverage a few annotated 3D shapes or richly annotated 2D datasets to perform 3D object part segmentation.
We present our novel approach, termed 3-By-2, which achieves SOTA performance on different benchmarks with various granularity levels.
arXiv Detail & Related papers (2024-07-12T19:08:00Z)
- Part123: Part-aware 3D Reconstruction from a Single-view Image [54.589723979757515]
Part123 is a novel framework for part-aware 3D reconstruction from a single-view image.
We introduce contrastive learning into a neural rendering framework to learn a part-aware feature space.
A clustering-based algorithm is also developed to automatically derive 3D part segmentation results from the reconstructed models.
arXiv Detail & Related papers (2024-05-27T07:10:21Z)
- SAI3D: Segment Any Instance in 3D Scenes [68.57002591841034]
We introduce SAI3D, a novel zero-shot 3D instance segmentation approach.
Our method partitions a 3D scene into geometric primitives, which are then progressively merged into 3D instance segmentations.
Empirical evaluations on ScanNet, Matterport3D and the more challenging ScanNet++ datasets demonstrate the superiority of our approach.
arXiv Detail & Related papers (2023-12-17T09:05:47Z)
- SAM-guided Graph Cut for 3D Instance Segmentation [60.75119991853605]
This paper addresses the challenge of 3D instance segmentation by simultaneously leveraging 3D geometric and multi-view image information.
We introduce a novel 3D-to-2D query framework to effectively exploit 2D segmentation models for 3D instance segmentation.
Our method achieves robust segmentation performance and can generalize across different types of scenes.
arXiv Detail & Related papers (2023-12-13T18:59:58Z)
- DatasetNeRF: Efficient 3D-aware Data Factory with Generative Radiance Fields [68.94868475824575]
This paper introduces a novel approach capable of generating infinite, high-quality 3D-consistent 2D annotations alongside 3D point cloud segmentations.
We leverage the strong semantic prior within a 3D generative model to train a semantic decoder.
Once trained, the decoder efficiently generalizes across the latent space, enabling the generation of infinite data.
arXiv Detail & Related papers (2023-11-18T21:58:28Z)
- ONeRF: Unsupervised 3D Object Segmentation from Multiple Views [59.445957699136564]
ONeRF is a method that automatically segments and reconstructs object instances in 3D from multi-view RGB images without any additional manual annotations.
The segmented 3D objects are represented using separate Neural Radiance Fields (NeRFs) which allow for various 3D scene editing and novel view rendering.
arXiv Detail & Related papers (2022-11-22T06:19:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.