QuadricsNet: Learning Concise Representation for Geometric Primitives in
Point Clouds
- URL: http://arxiv.org/abs/2309.14211v1
- Date: Mon, 25 Sep 2023 15:18:08 GMT
- Title: QuadricsNet: Learning Concise Representation for Geometric Primitives in
Point Clouds
- Authors: Ji Wu, Huai Yu, Wen Yang, Gui-Song Xia
- Abstract summary: This paper presents a novel framework to learn a concise geometric primitive representation for 3D point clouds.
We employ quadrics to represent diverse primitives with only 10 parameters.
We propose the first end-to-end learning-based framework, namely QuadricsNet, to parse quadrics in point clouds.
- Score: 39.600071233251704
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a novel framework to learn a concise geometric primitive
representation for 3D point clouds. Rather than representing each type of
primitive individually, we focus on the challenging problem of achieving a
concise and uniform representation robustly. We employ quadrics to represent
diverse primitives with only 10 parameters and propose the first end-to-end
learning-based framework, namely QuadricsNet, to parse quadrics in point
clouds. The relationships between the quadrics' mathematical formulation and
geometric attributes, including type, scale, and pose, are integrated for
effective supervision of QuadricsNet. In addition, a novel pattern-comprehensive
dataset of quadrics segments and objects is collected
for training and evaluation. Experiments demonstrate the effectiveness of our
concise representation and the robustness of QuadricsNet. Our code is available
at https://github.com/MichaelWu99-lab/QuadricsNet.
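As a point of reference (this is not the authors' code), a quadric surface in homogeneous coordinates satisfies x^T Q x = 0 for a symmetric 4x4 matrix Q with exactly 10 independent entries, which is where the 10-parameter count in the abstract comes from. The minimal sketch below, with illustrative helper names quadric_matrix and residual, shows how such a representation could be built and evaluated against points:

```python
# Minimal sketch, not the authors' implementation: the general quadric
#   A x^2 + B y^2 + C z^2 + D xy + E xz + F yz + G x + H y + I z + J = 0
# has exactly 10 coefficients. Helper names are illustrative only.
import numpy as np

def quadric_matrix(q):
    """Symmetric 4x4 matrix Q with x^T Q x = 0 in homogeneous coordinates,
    built from the 10-vector q = (A, B, C, D, E, F, G, H, I, J)."""
    A, B, C, D, E, F, G, H, I, J = q
    return np.array([
        [A,     D / 2, E / 2, G / 2],
        [D / 2, B,     F / 2, H / 2],
        [E / 2, F / 2, C,     I / 2],
        [G / 2, H / 2, I / 2, J    ],
    ])

def residual(points, Q):
    """Algebraic residual x^T Q x for an (N, 3) array of points;
    close to zero when a point lies on the quadric surface."""
    x = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    return np.einsum('ni,ij,nj->n', x, Q, x)

# Example: the unit sphere x^2 + y^2 + z^2 - 1 = 0
Q_sphere = quadric_matrix([1, 1, 1, 0, 0, 0, 0, 0, 0, -1])
pts = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 0.5]])
print(residual(pts, Q_sphere))  # -> [ 0.   -0.75]
```

In standard quadric theory, the eigenstructure of Q and of its upper-left 3x3 block determines the type, scale, and pose attributes the abstract refers to; the paper's actual parameterization and losses may differ from this plain algebraic form.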
Related papers
- SpaceMesh: A Continuous Representation for Learning Manifold Surface Meshes [61.110517195874074]
We present a scheme to directly generate manifold, polygonal meshes of complex connectivity as the output of a neural network.
Our key innovation is to define a continuous latent connectivity space at each mesh vertex, which implies the discrete mesh.
In applications, this approach not only yields high-quality outputs from generative models, but also enables directly learning challenging geometry processing tasks such as mesh repair.
arXiv Detail & Related papers (2024-09-30T17:59:03Z) - SweepNet: Unsupervised Learning Shape Abstraction via Neural Sweepers [18.9832388952668]
We introduce SweepNet, a novel approach to shape abstraction through sweep surfaces.
We propose an effective parameterization for sweep surfaces, utilizing superellipses for profile representation and B-spline curves for the axis.
By introducing a differentiable neural sweeper and an encoder-decoder architecture, we demonstrate the ability to predict sweep surface representations without supervision.
arXiv Detail & Related papers (2024-07-08T18:18:17Z) - Generalized Few-Shot Point Cloud Segmentation Via Geometric Words [54.32239996417363]
Few-shot point cloud segmentation algorithms learn to adapt to new classes at the expense of segmentation accuracy for the base classes.
We present the first attempt at a more practical paradigm of generalized few-shot point cloud segmentation.
We propose geometric words to represent geometric components shared between the base and novel classes, and incorporate them into a novel geometric-aware semantic representation.
arXiv Detail & Related papers (2023-09-20T11:24:33Z) - Zero-Shot 3D Shape Correspondence [67.18775201037732]
We propose a novel zero-shot approach to computing correspondences between 3D shapes.
We exploit the exceptional reasoning capabilities of recent foundation models in language and vision.
Our approach produces highly plausible results in a zero-shot manner, especially between strongly non-isometric shapes.
arXiv Detail & Related papers (2023-06-05T21:14:23Z) - Zero-shot point cloud segmentation by transferring geometric primitives [68.18710039217336]
We investigate zero-shot point cloud semantic segmentation, where the network is trained on seen objects and able to segment unseen objects.
We propose a novel framework to learn the geometric primitives shared by objects of seen and unseen categories and employ a fine-grained alignment between language and the learned geometric primitives.
arXiv Detail & Related papers (2022-10-18T15:06:54Z) - Deep Geometric Texture Synthesis [83.9404865744028]
We propose a novel framework for synthesizing geometric textures.
It learns texture statistics from local neighborhoods of a single reference 3D model.
Our network displaces mesh vertices in any direction, enabling synthesis of geometric textures.
arXiv Detail & Related papers (2020-06-30T19:36:38Z) - TearingNet: Point Cloud Autoencoder to Learn Topology-Friendly
Representations [20.318695890515613]
We propose an autoencoder, TearingNet, which tackles the challenging task of representing point clouds using a fixed-length descriptor.
Our TearingNet is characterized by a proposed Tearing network module and a Folding network module interacting with each other iteratively.
Experiments show that our proposal outperforms benchmarks in reconstructing point clouds and generates more topology-friendly representations.
arXiv Detail & Related papers (2020-06-17T22:42:43Z) - ParSeNet: A Parametric Surface Fitting Network for 3D Point Clouds [40.52124782103019]
We propose a novel, end-to-end trainable, deep network called ParSeNet that decomposes a 3D point cloud into parametric surface patches.
ParSeNet is trained on a large-scale dataset of man-made 3D shapes and captures high-level semantic priors for shape decomposition.
arXiv Detail & Related papers (2020-03-26T22:54:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.