Controllable Shape Modeling with Neural Generalized Cylinder
- URL: http://arxiv.org/abs/2410.03675v1
- Date: Wed, 18 Sep 2024 21:48:33 GMT
- Title: Controllable Shape Modeling with Neural Generalized Cylinder
- Authors: Xiangyu Zhu, Zhiqin Chen, Ruizhen Hu, Xiaoguang Han
- Abstract summary: We propose the neural generalized cylinder (NGC) for explicit manipulation of neural signed distance fields (NSDF).
By using the relative coordinates of a specialized GC with oval-shaped profiles, NSDF can be explicitly controlled via manipulation of the GC.
NGC can also utilize the neural features for shape blending via simple neural feature interpolation.
- Score: 39.36613329005811
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural shape representations, such as the neural signed distance field (NSDF), have become increasingly popular in shape modeling thanks to their ability to handle complex topology and arbitrary resolution. However, because features are used implicitly to represent shapes, manipulating those shapes is inherently inconvenient: the features cannot be edited intuitively. In this work, we propose the neural generalized cylinder (NGC), an extension of the traditional generalized cylinder (GC), for explicit manipulation of NSDF. Specifically, we first define a central curve and assign neural features along it to represent the profiles. The NSDF is then defined on the relative coordinates of a specialized GC with oval-shaped profiles. Because of the relative coordinates, the NSDF can be explicitly controlled by manipulating the GC. We apply NGC to many non-rigid deformation tasks such as complex curved deformation, local scaling, and twisting of shapes. Comparisons with other shape-deformation methods demonstrate the effectiveness and efficiency of NGC. Furthermore, NGC can use the neural features for shape blending through simple neural feature interpolation.
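A minimal sketch of the core querying idea follows: express each query point in the relative coordinates of an editable skeleton curve, interpolate a neural feature along the curve, and feed both into an SDF network. Everything here is an illustrative assumption rather than the authors' implementation; in particular, the polyline parameterization, the linear feature interpolation, the plain 3D offset (the paper uses oval-shaped 2D profiles), and names such as `NGCSketch` are invented for this example.

```python
import torch
import torch.nn as nn

class NGCSketch(nn.Module):
    """Toy neural generalized cylinder: a polyline skeleton with per-node features."""
    def __init__(self, num_nodes=8, feat_dim=32):
        super().__init__()
        self.nodes = nn.Parameter(torch.randn(num_nodes, 3) * 0.1)          # editable central curve
        self.feats = nn.Parameter(torch.randn(num_nodes, feat_dim) * 0.01)  # per-node profile features
        self.mlp = nn.Sequential(                      # (relative coords, feature) -> signed distance
            nn.Linear(4 + feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def relative_coords(self, x):
        """Closest-segment projection: arc parameter u in [0, 1] plus a local offset."""
        a, b = self.nodes[:-1], self.nodes[1:]
        ab = b - a
        ap = x[:, None, :] - a[None, :, :]
        t = ((ap * ab).sum(-1) / (ab * ab).sum(-1).clamp_min(1e-8)).clamp(0.0, 1.0)
        proj = a[None] + t[..., None] * ab[None]
        seg = ((x[:, None, :] - proj) ** 2).sum(-1).argmin(dim=1)
        rows = torch.arange(x.shape[0])
        t_seg = t[rows, seg]
        offset = x - proj[rows, seg]          # the paper instead uses a 2D oval profile frame here
        u = (seg + t_seg) / (self.nodes.shape[0] - 1)
        return u, offset, seg, t_seg

    def forward(self, x):
        u, offset, seg, t_seg = self.relative_coords(x)
        f = (1 - t_seg)[:, None] * self.feats[seg] + t_seg[:, None] * self.feats[seg + 1]
        rel = torch.cat([u[:, None], offset], dim=-1)   # coordinates relative to the GC
        return self.mlp(torch.cat([rel, f], dim=-1))

model = NGCSketch()
queries = torch.rand(1024, 3) * 2 - 1
sdf = model(queries)   # (1024, 1); editing model.nodes re-poses the field,
                       # since the SDF only sees GC-relative coordinates
```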
Related papers
- Learning Self-Prior for Mesh Inpainting Using Self-Supervised Graph Convolutional Networks [4.424836140281846]
We present a self-prior-based mesh inpainting framework that requires only an incomplete mesh as input.
Our method maintains the polygonal mesh format throughout the inpainting process.
We demonstrate that our method outperforms traditional dataset-independent approaches.
arXiv Detail & Related papers (2023-05-01T02:51:38Z) - Lattice Convolutional Networks for Learning Ground States of Quantum Many-Body Systems [33.82764380485598]
We propose lattice convolutions in which a set of proposed operations is used to convert non-square lattices into grid-like augmented lattices.
Based on the proposed lattice convolutions, we design lattice convolutional networks (LCN) that use self-gating and attention mechanisms.
arXiv Detail & Related papers (2022-06-15T08:24:37Z) - CaDeX: Learning Canonical Deformation Coordinate Space for Dynamic Surface Representation via Neural Homeomorphism [46.234728261236015]
We introduce Canonical Deformation Coordinate Space (CaDeX), a unified representation of both shape and nonrigid motion.
Our novel deformation representation and its implementation are simple, efficient, and guarantee cycle consistency.
We demonstrate state-of-the-art performance in modelling a wide range of deformable objects.
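As a rough illustration of why a neural homeomorphism gives cycle consistency for free, the sketch below uses a generic invertible coupling network: mapping points into a canonical space and back is the identity by construction. This is not the CaDeX architecture; the layer sizes and the conditioning code are assumptions.

```python
import torch
import torch.nn as nn

class CouplingLayer(nn.Module):
    def __init__(self, dim=3, cond_dim=16, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1 + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - 1)),     # scale and shift for the other coordinates
        )

    def forward(self, x, cond, inverse=False):
        # The first coordinate conditions an affine map of the remaining coordinates.
        x0, rest = x[:, :1], x[:, 1:]
        s, t = self.net(torch.cat([x0, cond], dim=-1)).chunk(2, dim=-1)
        s = torch.tanh(s)                         # keep scaling well conditioned
        rest = (rest - t) * torch.exp(-s) if inverse else rest * torch.exp(s) + t
        return torch.cat([x0, rest], dim=-1)

class CanonicalMap(nn.Module):
    def __init__(self, n_layers=4, cond_dim=16):
        super().__init__()
        self.layers = nn.ModuleList([CouplingLayer(cond_dim=cond_dim) for _ in range(n_layers)])

    def to_canonical(self, x, cond):
        for layer in self.layers:
            x = layer(x, cond).flip(dims=[-1])    # flip coords so all get updated
        return x

    def from_canonical(self, x, cond):
        for layer in reversed(self.layers):
            x = layer(x.flip(dims=[-1]), cond, inverse=True)
        return x

# Cycle consistency holds exactly, by construction of the invertible map.
f = CanonicalMap()
pts = torch.randn(100, 3)
cond = torch.randn(100, 16)                       # e.g. a per-frame deformation code
back = f.from_canonical(f.to_canonical(pts, cond), cond)
print(torch.allclose(pts, back, atol=1e-5))       # True
```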
arXiv Detail & Related papers (2022-03-30T17:59:23Z) - DeepMLS: Geometry-Aware Control Point Deformation [76.51312491336343]
We introduce DeepMLS, a space-based deformation technique, guided by a set of displaced control points.
We leverage the power of neural networks to inject the underlying shape geometry into the deformation parameters.
Our technique facilitates intuitive piecewise smooth deformations, which are well suited for manufactured objects.
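For context, the classical affine moving-least-squares (MLS) deformation that such control-point techniques build on fits in a few lines. DeepMLS additionally predicts the control-point configuration with a network, which is not reproduced in this assumed sketch; the weighting exponent `alpha` is also an assumption.

```python
import torch

def mls_affine_deform(points, handles_src, handles_dst, alpha=2.0, eps=1e-8):
    """Deform `points` (N,3) so that source handles (K,3) move toward their targets (K,3)."""
    d2 = ((points[:, None, :] - handles_src[None, :, :]) ** 2).sum(-1)    # (N, K)
    w = 1.0 / (d2 + eps) ** alpha                                         # MLS weights
    w_sum = w.sum(-1, keepdim=True)
    p_star = (w[..., None] * handles_src[None]).sum(1) / w_sum            # weighted centroids
    q_star = (w[..., None] * handles_dst[None]).sum(1) / w_sum
    p_hat = handles_src[None] - p_star[:, None, :]                        # (N, K, 3)
    q_hat = handles_dst[None] - q_star[:, None, :]
    # Per-point affine matrix solving the weighted least-squares fit.
    A = torch.einsum('nk,nki,nkj->nij', w, p_hat, p_hat)
    B = torch.einsum('nk,nki,nkj->nij', w, p_hat, q_hat)
    M = torch.linalg.solve(A + eps * torch.eye(3), B)
    return torch.einsum('ni,nij->nj', points - p_star, M) + q_star

# Usage: drag one handle and deform a random point sample.
src = torch.tensor([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
dst = src.clone()
dst[0] += torch.tensor([0.3, 0.0, 0.0])
deformed = mls_affine_deform(torch.rand(500, 3), src, dst)
```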
arXiv Detail & Related papers (2022-01-05T23:55:34Z) - Augmenting Implicit Neural Shape Representations with Explicit Deformation Fields [95.39603371087921]
Implicit neural representation is a recent approach to learning shape collections as the zero level-sets of neural networks.
We advocate deformation-aware regularization for implicit neural representations, aiming at producing plausible deformations as latent code changes.
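A generic way to realize this idea is to compose a shared template SDF with a latent-conditioned deformation field and penalize the deformation. The sketch below is an assumed stand-in: the network sizes and the simple L2 deformation penalty are placeholders, not the paper's exact deformation-aware regularizer.

```python
import torch
import torch.nn as nn

class DeformedImplicit(nn.Module):
    def __init__(self, latent_dim=64, hidden=128):
        super().__init__()
        self.deform = nn.Sequential(               # D(x, z): per-point displacement
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )
        self.template = nn.Sequential(             # shared canonical SDF
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, z):
        z = z.expand(x.shape[0], -1)
        disp = self.deform(torch.cat([x, z], dim=-1))
        sdf = self.template(x + disp)              # evaluate the template at the warped point
        reg = (disp ** 2).mean()                   # deformation penalty (assumed form)
        return sdf, reg

model = DeformedImplicit()
x = torch.rand(2048, 3) * 2 - 1
z = torch.randn(1, 64)                             # latent code of one shape
sdf, reg = model(x, z)                             # total loss = fitting term + lambda * reg
```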
arXiv Detail & Related papers (2021-08-19T22:07:08Z) - ContourCNN: convolutional neural network for contour data classification [0.0]
This paper proposes a novel Convolutional Neural Network model for contour data analysis (ContourCNN) and shape classification.
We employ circular convolution layers to handle the cyclical property of the contour representation.
To address information sparsity, we introduce priority pooling layers that select features based on their magnitudes.
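The two ingredients can be sketched as follows; the layer widths and the exact magnitude-based selection rule are assumptions rather than the ContourCNN implementation.

```python
import torch
import torch.nn as nn

class CircularConv(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        # padding_mode='circular' wraps the contour, so the convolution has no artificial start/end.
        self.conv = nn.Conv1d(in_ch, out_ch, k, padding=k // 2, padding_mode='circular')

    def forward(self, x):                          # x: (B, C, N) features along the closed contour
        return torch.relu(self.conv(x))

def priority_pool(x, keep):
    """Keep the `keep` contour positions with the largest feature norm (assumed selection rule)."""
    score = x.norm(dim=1)                          # (B, N) magnitude per position
    idx = score.topk(keep, dim=-1).indices.sort(dim=-1).values   # preserve contour order
    return torch.gather(x, 2, idx[:, None, :].expand(-1, x.shape[1], -1))

# Usage on a toy closed contour of 128 2D points:
contour = torch.randn(4, 2, 128)                   # (batch, xy, points)
feat = CircularConv(2, 16)(contour)                # (4, 16, 128)
pooled = priority_pool(feat, keep=64)              # (4, 16, 64)
```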
arXiv Detail & Related papers (2020-09-20T11:56:47Z) - Dense Non-Rigid Structure from Motion: A Manifold Viewpoint [162.88686222340962]
The Non-Rigid Structure-from-Motion (NRSfM) problem aims to recover the 3D geometry of a deforming object from its 2D feature correspondences across multiple frames.
We show that our approach significantly improves accuracy, scalability, and robustness against noise.
arXiv Detail & Related papers (2020-06-15T09:15:54Z) - Convex Shape Prior for Deep Neural Convolution Network based Eye Fundus Images Segmentation [6.163107242394357]
We propose a technique which can be easily integrated into the commonly used DCNNs for image segmentation.
Our method is based on the dual representation of the sigmoid activation function in DCNNs.
We show that our method is efficient and outperforms the classical DCNN segmentation methods.
arXiv Detail & Related papers (2020-05-15T11:36:04Z) - PointGMM: a Neural GMM Network for Point Clouds [83.9404865744028]
Point clouds are a popular representation for 3D shapes, but they encode a particular sampling without accounting for shape priors or non-local information.
We present PointGMM, a neural network that learns to generate hGMMs which are characteristic of the shape class.
We show that as a generative model, PointGMM learns a meaningful latent space which enables generating consistent interpolations between existing shapes.
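As background for the GMM-as-shape idea, the sketch below fits an ordinary (non-hierarchical) mixture to a toy point cloud with EM; PointGMM instead predicts a hierarchical GMM with a neural network, which is not reproduced here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy point cloud: noisy samples on a unit sphere.
d = np.random.randn(2000, 3)
cloud = d / np.linalg.norm(d, axis=1, keepdims=True) + 0.02 * np.random.randn(2000, 3)

# Summarize the shape's point distribution by a mixture of Gaussians, then resample it.
gmm = GaussianMixture(n_components=16, covariance_type='full').fit(cloud)
resampled, _ = gmm.sample(2000)        # points drawn from the fitted shape distribution
print(gmm.means_.shape)                # (16, 3) Gaussian centers acting as a coarse shape proxy
```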
arXiv Detail & Related papers (2020-03-30T10:34:59Z) - Cylindrical Convolutional Networks for Joint Object Detection and Viewpoint Estimation [76.21696417873311]
We introduce a learnable module, cylindrical convolutional networks (CCNs), that exploit cylindrical representation of a convolutional kernel defined in the 3D space.
CCNs extract a view-specific feature through a view-specific convolutional kernel to predict object category scores at each viewpoint.
Our experiments demonstrate the effectiveness of the cylindrical convolutional networks on joint object detection and viewpoint estimation.
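A loose sketch of the view-specific kernel idea follows; the viewpoint discretization, pooling, and head sizes are assumptions invented for illustration, not the CCN implementation.

```python
import torch
import torch.nn as nn

class ViewSpecificHead(nn.Module):
    def __init__(self, in_ch=256, n_views=12, n_classes=20):
        super().__init__()
        # One convolutional kernel per discretized viewpoint bin around the cylinder.
        self.view_convs = nn.ModuleList(
            [nn.Conv2d(in_ch, 64, kernel_size=3, padding=1) for _ in range(n_views)]
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, fmap):                       # fmap: (B, C, H, W) backbone features
        scores = []
        for conv in self.view_convs:
            v = torch.relu(conv(fmap)).mean(dim=(2, 3))   # view-specific feature (B, 64)
            scores.append(self.classifier(v))             # category scores for this viewpoint
        return torch.stack(scores, dim=1)                 # (B, n_views, n_classes)

head = ViewSpecificHead()
fmap = torch.randn(2, 256, 14, 14)
scores = head(fmap)
viewpoint = scores.max(dim=2).values.argmax(dim=1)        # best-scoring viewpoint bin per image
```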
arXiv Detail & Related papers (2020-03-25T10:24:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.