MASH: Masked Anchored SpHerical Distances for 3D Shape Representation and Generation
- URL: http://arxiv.org/abs/2504.09149v2
- Date: Sun, 27 Apr 2025 09:33:53 GMT
- Title: MASH: Masked Anchored SpHerical Distances for 3D Shape Representation and Generation
- Authors: Changhao Li, Yu Xin, Xiaowei Zhou, Ariel Shamir, Hao Zhang, Ligang Liu, Ruizhen Hu
- Abstract summary: Masked Anchored SpHerical Distances (MASH) is a novel multi-view and parametrized representation of 3D shapes. MASH is versatile for multiple applications including surface reconstruction, shape generation, completion, and blending.
- Score: 55.88474970190769
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We introduce Masked Anchored SpHerical Distances (MASH), a novel multi-view and parametrized representation of 3D shapes. Inspired by multi-view geometry and motivated by the importance of perceptual shape understanding for learning 3D shapes, MASH represents a 3D shape as a collection of observable local surface patches, each defined by a spherical distance function emanating from an anchor point. We further leverage the compactness of spherical harmonics to encode the MASH functions, combined with a generalized view cone with a parameterized base that masks the spatial extent of the spherical function to attain locality. We develop a differentiable optimization algorithm capable of converting any point cloud into a MASH representation accurately approximating ground-truth surfaces with arbitrary geometry and topology. Extensive experiments demonstrate that MASH is versatile for multiple applications including surface reconstruction, shape generation, completion, and blending, achieving superior performance thanks to its unique representation encompassing both implicit and explicit features.
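To make the representation concrete, below is a minimal sketch of evaluating one MASH-style patch: a distance function over directions from an anchor, encoded with spherical harmonics and masked to a view cone. This is an illustration under assumptions, not the paper's implementation; the function names, the flat coefficient layout, and the hard cone mask (the paper uses a generalized view cone with a parameterized base, and a differentiable optimization to fit point clouds) are hypothetical simplifications.

```python
# Minimal sketch of one masked anchored spherical distance patch.
# All names and the exact masking scheme are illustrative assumptions.
import numpy as np
from scipy.special import sph_harm  # complex SH basis; scipy takes (m, n, azimuth, polar)


def real_sph_harm(l, m, theta, phi):
    """Real spherical harmonic Y_lm at polar angle theta, azimuth phi."""
    if m > 0:
        return np.sqrt(2.0) * (-1) ** m * sph_harm(m, l, phi, theta).real
    if m < 0:
        return np.sqrt(2.0) * (-1) ** m * sph_harm(-m, l, phi, theta).imag
    return sph_harm(0, l, phi, theta).real


def mash_patch_points(anchor, coeffs, max_degree, cone_axis, cone_half_angle,
                      n_theta=64, n_phi=128):
    """Sample surface points of one anchored patch.

    coeffs is a flat array of (max_degree + 1)**2 SH coefficients defining
    the distance d(theta, phi) along each direction; directions outside the
    view cone around `cone_axis` are masked out.
    """
    theta, phi = np.meshgrid(np.linspace(0, np.pi, n_theta),
                             np.linspace(0, 2 * np.pi, n_phi), indexing="ij")
    # Unit directions on the sphere around the anchor.
    dirs = np.stack([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=-1)
    # Hard cone mask; the paper's mask has a parameterized, differentiable base.
    axis = np.asarray(cone_axis) / np.linalg.norm(cone_axis)
    visible = dirs @ axis >= np.cos(cone_half_angle)
    # SH expansion of the distance function d(theta, phi).
    dist = np.zeros_like(theta)
    idx = 0
    for l in range(max_degree + 1):
        for m in range(-l, l + 1):
            dist += coeffs[idx] * real_sph_harm(l, m, theta, phi)
            idx += 1
    pts = np.asarray(anchor) + dist[..., None] * dirs
    return pts[visible]
```

As a sanity check under this sketch, setting only the degree-0 coefficient, e.g. `coeffs[0] = r * np.sqrt(4 * np.pi)`, makes the distance constant at r, so the patch is a spherical cap of radius r around the anchor.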
Related papers
- Geometry Distributions [51.4061133324376]
We propose a novel geometric data representation that models geometry as distributions.
Our approach uses diffusion models with a novel network architecture to learn surface point distributions.
We evaluate our representation qualitatively and quantitatively across various object types, demonstrating its effectiveness in achieving high geometric fidelity.
arXiv Detail & Related papers (2024-11-25T04:06:48Z)
- DreamMesh4D: Video-to-4D Generation with Sparse-Controlled Gaussian-Mesh Hybrid Representation [10.250715657201363]
We introduce DreamMesh4D, a novel framework combining mesh representation with a geometric skinning technique to generate high-quality 4D objects from a monocular video.
Our method is compatible with modern graphic pipelines, showcasing its potential in the 3D gaming and film industry.
arXiv Detail & Related papers (2024-10-09T10:41:08Z)
- TetSphere Splatting: Representing High-Quality Geometry with Lagrangian Volumetric Meshes [47.47768820192874]
TetSphere splatting represents 3D shapes by deforming a collection of tetrahedral spheres.
It addresses common mesh issues such as irregular triangles, non-manifoldness, and floating artifacts.
It seamlessly integrates into generative modeling tasks, such as image-to-3D and text-to-3D generation.
arXiv Detail & Related papers (2024-05-30T17:35:49Z)
- Learning Continuous Mesh Representation with Spherical Implicit Surface [3.8707695363745223]
We propose to learn a continuous representation for meshes with fixed topology.
The SIS representation bridges discrete and continuous representations of 3D shapes.
arXiv Detail & Related papers (2023-01-11T20:00:17Z)
- Sphere Face Model: A 3D Morphable Model with Hypersphere Manifold Latent Space [14.597212159819403]
We propose a novel 3DMM for monocular face reconstruction, which can preserve both shape fidelity and identity consistency.
The core of our SFM is the basis matrix, which can be used to reconstruct 3D face shapes.
It produces high-fidelity face shapes that remain consistent under challenging conditions in monocular face reconstruction.
arXiv Detail & Related papers (2021-12-04T04:28:53Z)
- SP-GAN: Sphere-Guided 3D Shape Generation and Manipulation [50.53931728235875]
We present SP-GAN, a new unsupervised sphere-guided generative model for direct synthesis of 3D shapes in the form of point clouds.
Compared with existing models, SP-GAN is able to synthesize diverse and high-quality shapes with fine details.
arXiv Detail & Related papers (2021-08-10T06:49:45Z)
- DualConv: Dual Mesh Convolutional Networks for Shape Correspondence [44.94765770516059]
Convolutional neural networks have been extremely successful for 2D images and are readily extended to handle 3D voxel data.
In this paper we explore how these networks can be extended to the dual face-based representation of triangular meshes.
Our experiments demonstrate that convolutional models which explicitly leverage the neighborhood-size regularity of dual meshes learn shape representations that perform on par with or better than previous approaches (a toy sketch of such a dual-graph convolution follows this entry).
arXiv Detail & Related papers (2021-03-23T11:22:47Z)
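The key regularity this entry alludes to is that in the dual (face-based) graph of a closed triangle mesh, every node has exactly three neighbors, so features can be gathered and summed without padding or sorting. The sketch below is an assumption-laden simplification of that idea, not DualConv's actual layer; all names are hypothetical.

```python
# Toy dual-mesh convolution: every face of a closed triangle mesh has
# exactly three edge-adjacent neighbor faces, so the gather is fixed-size.
import numpy as np


def dual_conv_layer(face_feats, face_neighbors, w_self, w_nbr, bias):
    """One convolution step over the dual graph of a triangle mesh.

    face_feats:     (F, C_in) per-face features
    face_neighbors: (F, 3) indices of the three edge-adjacent faces
    w_self, w_nbr:  (C_in, C_out) weights for the center face / its neighbors
    """
    # Fixed neighborhood size (3) lets us gather and sum without padding.
    nbr_sum = face_feats[face_neighbors].sum(axis=1)    # (F, C_in)
    out = face_feats @ w_self + nbr_sum @ w_nbr + bias  # (F, C_out)
    return np.maximum(out, 0.0)                         # ReLU
```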
- Concentric Spherical GNN for 3D Representation Learning [53.45704095146161]
We propose a novel multi-resolution convolutional architecture for learning over concentric spherical feature maps.
Our hierarchical architecture is based on alternately learning to incorporate intra-sphere and inter-sphere information.
We demonstrate the effectiveness of our approach in improving state-of-the-art performance on 3D classification tasks with rotated data.
arXiv Detail & Related papers (2021-03-18T19:05:04Z)
- Pix2Surf: Learning Parametric 3D Surface Models of Objects from Images [64.53227129573293]
We investigate the problem of learning to generate 3D parametric surface representations for novel object instances, as seen from one or more views.
We design neural networks capable of generating high-quality parametric 3D surfaces which are consistent between views.
Our method is supervised and trained on a public dataset of shapes from common object categories.
arXiv Detail & Related papers (2020-08-18T06:33:40Z)