SweepNet: Unsupervised Learning Shape Abstraction via Neural Sweepers
- URL: http://arxiv.org/abs/2407.06305v1
- Date: Mon, 8 Jul 2024 18:18:17 GMT
- Title: SweepNet: Unsupervised Learning Shape Abstraction via Neural Sweepers
- Authors: Mingrui Zhao, Yizhi Wang, Fenggen Yu, Changqing Zou, Ali Mahdavi-Amiri,
- Abstract summary: We introduce SweepNet, a novel approach to shape abstraction through sweep surfaces.
We propose an effective parameterization for sweep surfaces, utilizing superellipses for profile representation and B-spline curves for the axis.
By introducing a differentiable neural sweeper and an encoder-decoder architecture, we demonstrate the ability to predict sweep surface representations without supervision.
- Score: 18.9832388952668
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Shape abstraction is an important task for simplifying complex geometric structures while retaining essential features. Sweep surfaces, commonly found in human-made objects, aid in this process by effectively capturing and representing object geometry, thereby facilitating abstraction. In this paper, we introduce SweepNet, a novel approach to shape abstraction through sweep surfaces. We propose an effective parameterization for sweep surfaces, utilizing superellipses for profile representation and B-spline curves for the axis. This compact representation, requiring as few as 14 floating-point numbers, facilitates intuitive and interactive editing while preserving shape details effectively. Additionally, by introducing a differentiable neural sweeper and an encoder-decoder architecture, we demonstrate the ability to predict sweep surface representations without supervision. We show the superiority of our model through several quantitative and qualitative experiments throughout the paper. Our code is available at https://mingrui-zhao.github.io/SweepNet/
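The abstract's parameterization (a superellipse profile swept along a spline axis) can be sketched numerically. This is an illustrative layout, not the paper's exact encoding: it uses 3 profile parameters plus 4 axis control points (15 numbers, versus the paper's 14), and substitutes a cubic Bézier for the paper's B-spline axis.

```python
import numpy as np

def superellipse_profile(a, b, eps, n=64):
    # Parametric superellipse |x/a|^(2/eps) + |y/b|^(2/eps) = 1,
    # controlled by just three numbers (a, b, eps).
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    c, s = np.cos(t), np.sin(t)
    return np.stack([a * np.sign(c) * np.abs(c) ** eps,
                     b * np.sign(s) * np.abs(s) ** eps], axis=-1)  # (n, 2)

def bezier_axis(ctrl, m=32):
    # Cubic Bezier stand-in for the paper's B-spline axis (4 control points).
    u = np.linspace(0.0, 1.0, m)[:, None]
    p0, p1, p2, p3 = ctrl
    pts = ((1 - u) ** 3 * p0 + 3 * (1 - u) ** 2 * u * p1
           + 3 * (1 - u) * u ** 2 * p2 + u ** 3 * p3)
    tan = np.gradient(pts, axis=0)                 # tangents for the sweep frame
    tan /= np.linalg.norm(tan, axis=1, keepdims=True)
    return pts, tan

def sweep(profile2d, axis_pts, axis_tan):
    # Place the 2D profile in a plane normal to the axis at each sample.
    up = np.array([0.0, 0.0, 1.0])
    surf = []
    for p, t in zip(axis_pts, axis_tan):
        u = np.cross(up, t)
        if np.linalg.norm(u) < 1e-8:               # tangent parallel to 'up'
            u = np.cross(np.array([1.0, 0.0, 0.0]), t)
        u /= np.linalg.norm(u)
        v = np.cross(t, u)
        surf.append(p + profile2d[:, :1] * u + profile2d[:, 1:] * v)
    return np.stack(surf)                          # (m, n, 3) surface samples

prof = superellipse_profile(a=0.3, b=0.2, eps=0.7)
ctrl = np.array([[0, 0, 0], [0.3, 0.1, 0.5], [0.1, 0.4, 1.0], [0, 0.2, 1.5]], float)
pts, tan = bezier_axis(ctrl)
surface = sweep(prof, pts, tan)   # dense point samples of the swept tube
```

The paper's neural sweeper makes this sampling step differentiable so the parameters can be fit without supervision.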
Related papers
- SpaceMesh: A Continuous Representation for Learning Manifold Surface Meshes [61.110517195874074]
We present a scheme to directly generate manifold, polygonal meshes of complex connectivity as the output of a neural network.
Our key innovation is to define a continuous latent connectivity space at each mesh, which implies the discrete mesh.
In applications, this approach not only yields high-quality outputs from generative models, but also enables directly learning challenging geometry processing tasks such as mesh repair.
arXiv Detail & Related papers (2024-09-30T17:59:03Z)
- ShapeGrasp: Zero-Shot Task-Oriented Grasping with Large Language Models through Geometric Decomposition [8.654140442734354]
Task-oriented grasping of unfamiliar objects is a necessary skill for robots in dynamic in-home environments.
We present a novel zero-shot task-oriented grasping method leveraging a geometric decomposition of the target object into simple convex shapes.
Our approach employs minimal essential information - the object's name and the intended task - to facilitate zero-shot task-oriented grasping.
arXiv Detail & Related papers (2024-03-26T19:26:53Z)
- Robust Shape Fitting for 3D Scene Abstraction [33.84212609361491]
In particular, we can describe man-made environments using volumetric primitives such as cuboids or cylinders.
We propose a robust estimator for primitive fitting, which meaningfully abstracts complex real-world environments using cuboids.
Results on the NYU Depth v2 dataset demonstrate that the proposed algorithm successfully abstracts cluttered real-world 3D scene layouts.
arXiv Detail & Related papers (2024-03-15T16:37:43Z)
- QuadricsNet: Learning Concise Representation for Geometric Primitives in Point Clouds [39.600071233251704]
This paper presents a novel framework to learn a concise geometric primitive representation for 3D point clouds.
We employ quadrics to represent diverse primitives with only 10 parameters.
We propose the first end-to-end learning-based framework, namely QuadricsNet, to parse quadrics in point clouds.
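The 10-parameter quadric representation mentioned above follows from the standard implicit form x^T Q x = 0, where Q is a symmetric 4×4 matrix with 10 unique entries. A minimal sketch of evaluating such a quadric (the coefficient ordering here is one common convention, not necessarily QuadricsNet's):

```python
import numpy as np

def quadric_matrix(q):
    # q: 10 coefficients (A,B,C,D,E,F,G,H,I,J) of the implicit quadric
    # A x^2 + B y^2 + C z^2 + 2D xy + 2E xz + 2F yz + 2G x + 2H y + 2I z + J = 0
    A, B, C, D, E, F, G, H, I, J = q
    return np.array([[A, D, E, G],
                     [D, B, F, H],
                     [E, F, C, I],
                     [G, H, I, J]], float)

def quadric_value(q, pts):
    # Implicit value h^T Q h for homogeneous points h = (x, y, z, 1).
    Q = quadric_matrix(q)
    h = np.hstack([pts, np.ones((len(pts), 1))])
    return np.einsum('ni,ij,nj->n', h, Q, h)

# Unit sphere: x^2 + y^2 + z^2 - 1 = 0
sphere = [1, 1, 1, 0, 0, 0, 0, 0, 0, -1]
pts = np.array([[1.0, 0, 0], [0, 0, 1.0], [0.5, 0.5, 0.5]])
vals = quadric_value(sphere, pts)  # ~0 on the surface, negative inside
```

The same 10 numbers cover spheres, ellipsoids, cylinders, cones, and planes, which is what makes the representation concise.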
arXiv Detail & Related papers (2023-09-25T15:18:08Z)
- ALSO: Automotive Lidar Self-supervision by Occupancy estimation [70.70557577874155]
We propose a new self-supervised method for pre-training the backbone of deep perception models operating on point clouds.
The core idea is to train the model on a pretext task which is the reconstruction of the surface on which the 3D points are sampled.
The intuition is that if the network is able to reconstruct the scene surface, given only sparse input points, then it probably also captures some fragments of semantic information.
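The pretext task described above relies on labels that come for free from the sensor geometry: space a lidar ray traverses before its return must be empty, and the return itself lies on a surface. A simplified sketch of generating such occupancy labels (this is a generic occupancy-supervision recipe, not ALSO's exact scheme):

```python
import numpy as np

def ray_occupancy_labels(origins, hits, n_free=4, margin=0.05):
    # For each ray (origin -> hit): sample query points before the return
    # and label them empty (0); one point just past the return is occupied (1).
    d = hits - origins
    u = np.linspace(0.2, 0.95, n_free)[None, :, None]          # fractions along each ray
    free = (origins[:, None, :] + u * d[:, None, :]).reshape(-1, 3)
    dirn = d / np.linalg.norm(d, axis=1, keepdims=True)
    occ = hits + margin * dirn                                  # slightly behind the surface
    queries = np.concatenate([free, occ], axis=0)
    labels = np.concatenate([np.zeros(len(free)), np.ones(len(occ))])
    return queries, labels

origins = np.zeros((2, 3))
hits = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
queries, labels = ray_occupancy_labels(origins, hits)
```

A backbone trained to predict these labels from the sparse returns is then fine-tuned for downstream perception.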
arXiv Detail & Related papers (2022-12-12T13:10:19Z)
- Neural Convolutional Surfaces [59.172308741945336]
This work is concerned with a representation of shapes that disentangles fine, local and possibly repeating geometry, from global, coarse structures.
We show that this approach achieves better neural shape compression than the state of the art, as well as enabling manipulation and transfer of shape details.
arXiv Detail & Related papers (2022-04-05T15:40:11Z)
- Deep Implicit Surface Point Prediction Networks [49.286550880464866]
Deep neural representations of 3D shapes as implicit functions have been shown to produce high fidelity models.
This paper presents a novel approach that models such surfaces using a new class of implicit representations called the closest surface-point (CSP) representation.
arXiv Detail & Related papers (2021-06-10T14:31:54Z)
- Unsupervised Learning for Cuboid Shape Abstraction via Joint Segmentation from Point Clouds [8.156355030558172]
Representing complex 3D objects as simple geometric primitives, known as shape abstraction, is important for geometric modeling, structural analysis, and shape synthesis.
We propose an unsupervised shape abstraction method to map a point cloud into a compact cuboid representation.
arXiv Detail & Related papers (2021-06-07T09:15:16Z)
- Learning Occupancy Function from Point Clouds for Surface Reconstruction [6.85316573653194]
Implicit function based surface reconstruction has been studied for a long time to recover 3D shapes from point clouds sampled from surfaces.
This paper proposes a novel method for learning occupancy functions from sparse point clouds and achieves better performance on challenging surface reconstruction tasks.
arXiv Detail & Related papers (2020-10-22T02:07:29Z)
- PatchNets: Patch-Based Generalizable Deep Implicit 3D Shape Representations [75.42959184226702]
We present a new mid-level patch-based surface representation for object-agnostic training.
We show several applications of our new representation, including shape and partial point cloud completion.
arXiv Detail & Related papers (2020-08-04T15:34:46Z)
- DualSDF: Semantic Shape Manipulation using a Two-Level Representation [54.62411904952258]
We propose DualSDF, a representation expressing shapes at two levels of granularity, one capturing fine details and the other representing an abstracted proxy shape.
Our two-level model gives rise to a new shape manipulation technique in which a user can interactively manipulate the coarse proxy shape and see the changes instantly mirrored in the high-resolution shape.
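The coarse proxy in a two-level scheme like this can be as simple as a union of primitive SDFs: editing a primitive moves the proxy, and the fine level follows. A minimal sketch using spheres as the coarse primitives (an illustrative choice; DualSDF's actual parameterization and coupling are learned):

```python
import numpy as np

def coarse_sdf(spheres, p):
    # Signed distance to a union of spheres: min over per-sphere SDFs.
    # spheres: (k, 4) rows of (cx, cy, cz, radius); p: (n, 3) query points.
    centers, radii = spheres[:, :3], spheres[:, 3]
    d = np.linalg.norm(p[:, None, :] - centers[None, :, :], axis=-1) - radii[None, :]
    return d.min(axis=1)   # negative inside, positive outside

spheres = np.array([[0.0, 0, 0, 1.0],
                    [1.5, 0, 0, 0.5]])
q = np.array([[0.0, 0, 0], [2.0, 0, 0], [3.0, 0, 0]])
sd = coarse_sdf(spheres, q)
```

In the paper, a shared latent code ties this coarse level to the high-resolution SDF, so dragging a primitive updates the detailed shape instantly.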
arXiv Detail & Related papers (2020-04-06T17:59:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.