CGPart: A Part Segmentation Dataset Based on 3D Computer Graphics Models
- URL: http://arxiv.org/abs/2103.14098v1
- Date: Thu, 25 Mar 2021 19:34:21 GMT
- Title: CGPart: A Part Segmentation Dataset Based on 3D Computer Graphics Models
- Authors: Qing Liu, Adam Kortylewski, Zhishuai Zhang, Zizhang Li, Mengqi Guo,
Qihao Liu, Xiaoding Yuan, Jiteng Mu, Weichao Qiu, Alan Yuille
- Abstract summary: CGPart provides detailed annotations on 3D CAD models, synthetic images, and real test images.
CGPart includes $21$ 3D CAD models covering $5$ vehicle categories, each with detailed per-mesh part labeling.
We produce $168,000$ synthetic images from these CAD models, each with automatically generated part segmentation ground-truth.
- Score: 19.691187561807475
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Part segmentations provide a rich and detailed part-level description of
objects, but their annotation requires an enormous amount of work. In this
paper, we introduce CGPart, a comprehensive part segmentation dataset that
provides detailed annotations on 3D CAD models, synthetic images, and real test
images. CGPart includes $21$ 3D CAD models covering $5$ vehicle categories,
each with detailed per-mesh part labeling. The average number of parts per
category is $24$, which is larger than that of any existing dataset for part
segmentation on vehicle objects. By varying the rendering parameters, we produce
$168,000$ synthetic images from these CAD models, each with automatically
generated part segmentation ground-truth. We also annotate part segmentations
on $200$ real images for evaluation purposes. To illustrate the value of
CGPart, we apply it to image part segmentation through unsupervised domain
adaptation (UDA). We evaluate several baseline methods by adapting
top-performing UDA algorithms from related tasks to part segmentation.
Moreover, we introduce a new method called Geometric-Matching Guided domain
adaptation (GMG), which leverages the spatial object structure to guide the
knowledge transfer from the synthetic to the real images. Experimental results
demonstrate the advantage of our new algorithm and reveal insights for future
improvement. We will release our data and code.
Related papers
- Part123: Part-aware 3D Reconstruction from a Single-view Image [54.589723979757515]
Part123 is a novel framework for part-aware 3D reconstruction from a single-view image.
We introduce contrastive learning into a neural rendering framework to learn a part-aware feature space.
A clustering-based algorithm is also developed to automatically derive 3D part segmentation results from the reconstructed models.
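The clustering step described above can be illustrated with a minimal, generic sketch: given per-point features (which Part123 would obtain from its contrastive, part-aware feature space), group the points of the reconstruction into part segments with k-means. The feature array, point count, and number of parts here are placeholders, not Part123's actual values or code.

```python
# Generic sketch of clustering-based part derivation from per-point features.
import numpy as np
from sklearn.cluster import KMeans

num_points, feat_dim, num_parts = 4096, 32, 6     # illustrative sizes
points = np.random.rand(num_points, 3)            # reconstructed surface points (placeholder)
features = np.random.rand(num_points, feat_dim)   # per-point part-aware features (placeholder)

# Cluster points in feature space; each cluster is treated as one 3D part segment.
labels = KMeans(n_clusters=num_parts, n_init=10, random_state=0).fit_predict(features)

for part_id in range(num_parts):
    print(f"part {part_id}: {(labels == part_id).sum()} points")
```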
arXiv Detail & Related papers (2024-05-27T07:10:21Z)
- SAM-guided Graph Cut for 3D Instance Segmentation [60.75119991853605]
This paper addresses the challenge of 3D instance segmentation by simultaneously leveraging 3D geometric and multi-view image information.
We introduce a novel 3D-to-2D query framework to effectively exploit 2D segmentation models for 3D instance segmentation.
Our method achieves robust segmentation performance and can generalize across different types of scenes.
arXiv Detail & Related papers (2023-12-13T18:59:58Z)
- 3DCoMPaT$^{++}$: An improved Large-scale 3D Vision Dataset for Compositional Recognition [53.97029821609132]
3DCoMPaT$^{++}$ is a multimodal 2D/3D dataset with 160 million rendered views of more than 10 million stylized 3D shapes.
We introduce a new task, called Grounded CoMPaT Recognition (GCR), to collectively recognize and ground compositions of materials on parts of 3D objects.
arXiv Detail & Related papers (2023-10-27T22:01:43Z)
- A One Stop 3D Target Reconstruction and multilevel Segmentation Method [0.0]
We propose an open-source, one-stop 3D target reconstruction and multilevel segmentation framework (OSTRA).
OSTRA performs segmentation on 2D images, tracks multiple instances with segmentation labels in the image sequence, and then reconstructs labelled 3D objects or multiple parts with Multi-View Stereo (MVS) or RGBD-based 3D reconstruction methods.
Our method opens up a new avenue for reconstructing 3D targets embedded with rich multi-scale segmentation information in complex scenes.
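A minimal sketch of the kind of 2D-to-3D label transfer such a pipeline relies on is shown below: reconstructed 3D points are projected into one labelled view with an assumed pinhole camera, and per-point labels are read off the 2D segmentation. It is a generic stand-in, not OSTRA's implementation; the camera parameters, points, and label image are placeholders.

```python
# Generic sketch: carry 2D segmentation labels onto reconstructed 3D points
# by projecting them into a labelled view (single view shown).
import numpy as np

H, W = 480, 640
K = np.array([[500.0, 0.0, W / 2],               # assumed pinhole intrinsics
              [0.0, 500.0, H / 2],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 2.0])      # assumed world-to-camera pose

points = np.random.rand(1000, 3) - 0.5           # placeholder reconstructed 3D points
label_img = np.random.randint(0, 5, size=(H, W)) # placeholder 2D segmentation for this view

cam = points @ R.T + t                           # world -> camera coordinates
proj = cam @ K.T                                 # apply intrinsics
u = (proj[:, 0] / proj[:, 2]).astype(int)
v = (proj[:, 1] / proj[:, 2]).astype(int)

in_view = (cam[:, 2] > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
point_labels = np.full(len(points), -1)          # -1 marks points not seen in this view
point_labels[in_view] = label_img[v[in_view], u[in_view]]
```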
arXiv Detail & Related papers (2023-08-14T07:12:31Z)
- Inferring and Leveraging Parts from Object Shape for Improving Semantic Image Synthesis [64.05076727277431]
This paper proposes to infer Parts from Object ShapE (iPOSE) and to leverage them for improving semantic image synthesis.
We learn a PartNet for predicting the object part map with the guidance of pre-defined support part maps.
Experiments show that iPOSE not only generates objects with rich part details, but also enables flexible control over image synthesis.
arXiv Detail & Related papers (2023-05-31T04:27:47Z)
- Towards Open-World Segmentation of Parts [16.056921233445784]
We propose to explore a class-agnostic part segmentation task.
We argue that models trained without part classes can better localize parts and segment them on objects unseen in training.
We show notable and consistent gains with our approach, a critical step towards open-world part segmentation.
arXiv Detail & Related papers (2023-05-26T10:34:58Z)
- GAPartNet: Cross-Category Domain-Generalizable Object Perception and Manipulation via Generalizable and Actionable Parts [28.922958261132475]
We learn cross-category skills via Generalizable and Actionable Parts (GAParts).
Based on GAPartNet, we investigate three cross-category tasks: part segmentation, part pose estimation, and part-based object manipulation.
Our method outperforms all existing methods by a large margin on both seen and unseen categories.
arXiv Detail & Related papers (2022-11-10T00:30:22Z)
- PartImageNet: A Large, High-Quality Dataset of Parts [16.730418538593703]
We propose PartImageNet, a high-quality dataset with part segmentation annotations.
PartImageNet is unique because it offers part-level annotations on a general set of classes with non-rigid, articulated objects.
It can be used in multiple vision tasks, including but not limited to part discovery and few-shot learning.
arXiv Detail & Related papers (2021-12-02T02:12:03Z)
- 3D Compositional Zero-shot Learning with DeCompositional Consensus [102.7571947144639]
We argue that part knowledge should be composable beyond the observed object classes.
We present 3D Compositional Zero-shot Learning as a problem of part generalization from seen to unseen object classes.
arXiv Detail & Related papers (2021-11-29T16:34:53Z)
- Learning Geometry-Disentangled Representation for Complementary Understanding of 3D Object Point Cloud [50.56461318879761]
We propose the Geometry-Disentangled Attention Network (GDANet) for 3D point cloud processing.
GDANet disentangles point clouds into the contour and flat parts of 3D objects, denoted by the sharp and gentle variation components, respectively.
Experiments on 3D object classification and segmentation benchmarks demonstrate that GDANet achieves state-of-the-art results with fewer parameters.
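As a rough, non-learned proxy for the sharp/gentle split described above, the sketch below scores each point by a classical surface-variation measure over its k-nearest neighbours and partitions the cloud at an arbitrary quantile. GDANet's actual disentanglement is a learned attention module, so this only illustrates the underlying idea; the point cloud, neighbourhood size, and threshold are placeholders.

```python
# Rough proxy for splitting a point cloud into sharp (contour-like) and
# gentle (flat-region) components via local surface variation.
import numpy as np

def surface_variation(points, k=16):
    """Per-point surface variation: smallest covariance eigenvalue of the k-NN
    neighbourhood divided by the eigenvalue sum (higher = sharper geometry)."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)  # brute-force pairwise distances
    knn = np.argsort(d2, axis=1)[:, :k]
    scores = np.empty(len(points))
    for i, idx in enumerate(knn):
        nbrs = points[idx] - points[idx].mean(axis=0)
        eig = np.linalg.eigvalsh(nbrs.T @ nbrs)                    # ascending eigenvalues
        scores[i] = eig[0] / max(eig.sum(), 1e-12)
    return scores

points = np.random.rand(1024, 3)                 # placeholder point cloud
scores = surface_variation(points)
threshold = np.quantile(scores, 0.8)             # arbitrary 80/20 split
sharp_points = points[scores >= threshold]       # "sharp variation" (contour-like) points
gentle_points = points[scores < threshold]       # "gentle variation" (flat-region) points
```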
arXiv Detail & Related papers (2020-12-20T13:35:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.