Learning Self-Prior for Mesh Inpainting Using Self-Supervised Graph Convolutional Networks
- URL: http://arxiv.org/abs/2305.00635v2
- Date: Tue, 16 Apr 2024 03:46:03 GMT
- Title: Learning Self-Prior for Mesh Inpainting Using Self-Supervised Graph Convolutional Networks
- Authors: Shota Hattori, Tatsuya Yatagawa, Yutaka Ohtake, Hiromasa Suzuki
- Abstract summary: We present a self-prior-based mesh inpainting framework that requires only an incomplete mesh as input.
Our method maintains the polygonal mesh format throughout the inpainting process.
We demonstrate that our method outperforms traditional dataset-independent approaches.
- Score: 4.424836140281846
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, we present a self-prior-based mesh inpainting framework that requires only an incomplete mesh as input, without the need for any training datasets. Additionally, our method maintains the polygonal mesh format throughout the inpainting process without converting the shape format to an intermediate one, such as a voxel grid, a point cloud, or an implicit function, which are typically considered easier for deep neural networks to process. To achieve this goal, we introduce two graph convolutional networks (GCNs): single-resolution GCN (SGCN) and multi-resolution GCN (MGCN), both trained in a self-supervised manner. Our approach refines a watertight mesh obtained from the initial hole filling to generate a complete output mesh. Specifically, we train the GCNs to deform an oversmoothed version of the input mesh into the expected complete shape. The deformation is described by vertex displacements, and the GCNs are supervised to obtain accurate displacements at vertices in real holes. To this end, we specify several connected regions of the mesh as fake holes, thereby generating meshes with various sets of fake holes. The correct displacements of vertices are known in these fake holes, thus enabling training GCNs with loss functions that assess the accuracy of vertex displacements. We demonstrate that our method outperforms traditional dataset-independent approaches and exhibits greater robustness compared with other deep-learning-based methods for shapes that infrequently appear in shape datasets. Our code and test data are available at https://github.com/astaka-pe/SeMIGCN.
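The fake-hole supervision described in the abstract can be sketched in a few lines: displacements are predicted for every vertex, but the loss is evaluated only where ground truth is known (the fake holes). The following NumPy toy uses hypothetical array shapes and a plain gradient step as a stand-in for the paper's GCNs; it is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for mesh data (hypothetical shapes, not the paper's pipeline):
#   V_smooth : oversmoothed vertex positions (the network's input geometry)
#   V_true   : ground-truth positions, known only inside "fake holes"
#   fake_hole: boolean mask marking vertices whose correct displacement is known
n_verts = 100
V_smooth = rng.normal(size=(n_verts, 3))
V_true = V_smooth + rng.normal(scale=0.1, size=(n_verts, 3))
fake_hole = rng.random(n_verts) < 0.3

# The network would predict per-vertex displacements D; a zero init stands in
# for an untrained model's output.
D = np.zeros((n_verts, 3))

def masked_l2_loss(D, V_smooth, V_true, mask):
    """Supervise displacements only where ground truth is known (fake holes)."""
    diff = (V_smooth + D) - V_true
    return float(np.mean(np.sum(diff[mask] ** 2, axis=1)))

# A few plain gradient steps on the masked loss (stand-in for GCN training):
# only vertices inside fake holes receive a gradient signal.
lr = 0.5
n_masked = int(fake_hole.sum())
for _ in range(50):
    grad = np.zeros_like(D)
    grad[fake_hole] = 2.0 * ((V_smooth + D) - V_true)[fake_hole] / n_masked
    D -= lr * grad

print(masked_l2_loss(D, V_smooth, V_true, fake_hole))
```

In the paper the displacements come from SGCN/MGCN rather than free parameters, and the fake holes are resampled to generate many training meshes, but the masked-loss structure is the same.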
Related papers
- SpaceMesh: A Continuous Representation for Learning Manifold Surface Meshes [61.110517195874074]
We present a scheme to directly generate manifold, polygonal meshes of complex connectivity as the output of a neural network.
Our key innovation is to define a continuous latent connectivity space at each mesh, which implies the discrete mesh.
In applications, this approach not only yields high-quality outputs from generative models, but also enables directly learning challenging geometry processing tasks such as mesh repair.
arXiv Detail & Related papers (2024-09-30T17:59:03Z)
- PoNQ: a Neural QEM-based Mesh Representation [33.81124790808585]
We introduce a learnable mesh representation through a set of local 3D sample Points and their associated Normals and Quadric error metrics (QEM).
A global mesh is directly derived from PoNQ by efficiently leveraging the knowledge of the local quadric errors.
We demonstrate the efficacy of PoNQ through a learning-based mesh prediction from SDF grids.
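The quadric error metric underlying PoNQ is the classical construction from mesh simplification: each plane contributes a 4x4 quadric, and the error of a vertex is a sum of squared point-to-plane distances. A minimal NumPy sketch of that textbook formula (toy planes, not PoNQ's learned pipeline):

```python
import numpy as np

def plane_quadric(normal, point):
    """Fundamental quadric K = p p^T for the plane n.x + d = 0 through `point`."""
    n = normal / np.linalg.norm(normal)
    d = -float(n @ point)
    p = np.append(n, d)            # plane coefficients [a, b, c, d]
    return np.outer(p, p)          # 4x4 quadric matrix

def quadric_error(Q, v):
    """Sum of squared point-to-plane distances of vertex v under quadric Q."""
    vh = np.append(v, 1.0)         # homogeneous coordinates
    return float(vh @ Q @ vh)

# Two incident planes (toy example): z = 0 and x = 0. Accumulating their
# quadrics gives the error of a candidate vertex against both planes at once.
Q = plane_quadric(np.array([0.0, 0.0, 1.0]), np.zeros(3)) \
  + plane_quadric(np.array([1.0, 0.0, 0.0]), np.zeros(3))

print(quadric_error(Q, np.array([0.0, 2.0, 0.0])))   # lies on both planes
print(quadric_error(Q, np.array([1.0, 0.0, 2.0])))   # off both planes
```

Because quadrics add, the error of a merged region is just the sum of its members' quadric matrices, which is what makes the representation cheap to aggregate.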
arXiv Detail & Related papers (2024-03-19T16:15:08Z)
- GraphCSPN: Geometry-Aware Depth Completion via Dynamic GCNs [49.55919802779889]
We propose a Graph Convolution based Spatial Propagation Network (GraphCSPN) as a general approach for depth completion.
In this work, we leverage convolutional neural networks as well as graph neural networks in a complementary way for geometric representation learning.
Our method achieves the state-of-the-art performance, especially when compared in the case of using only a few propagation steps.
arXiv Detail & Related papers (2022-10-19T17:56:03Z)
- Graph-Guided Deformation for Point Cloud Completion [35.10606375236494]
We propose a Graph-Guided Deformation Network, which respectively regards the input data and intermediate generation as controlling and supporting points.
Our key insight is to simulate the least square Laplacian deformation process via mesh deformation methods, which brings adaptivity for modeling variation in geometry details.
We are the first to refine the point cloud completion task by mimicking traditional graphics algorithms with GCN-guided deformation.
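The least-squares Laplacian deformation this summary refers to is a classical editing technique: preserve each vertex's Laplacian (differential) coordinates while softly satisfying a few control-point constraints. A toy sketch on a 5-vertex chain graph (all names, weights, and shapes hypothetical, standing in for a real mesh):

```python
import numpy as np

# Toy chain of vertices along the x-axis (stand-in for a mesh graph).
n = 5
V = np.stack([np.arange(n, dtype=float), np.zeros(n), np.zeros(n)], axis=1)

# Uniform graph Laplacian of the chain.
L = np.zeros((n, n))
for i in range(n):
    nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n]
    L[i, i] = len(nbrs)
    for j in nbrs:
        L[i, j] = -1.0

# Control constraints: pin vertex 0 in place, lift vertex 4 by one unit in z.
ctrl = [0, 4]
targets = V[ctrl].copy()
targets[1, 2] += 1.0
w = 10.0  # soft-constraint weight

# Stacked least-squares system: the top rows ask the deformed mesh to keep
# its original Laplacian coordinates, the bottom rows enforce the controls.
C = np.zeros((len(ctrl), n))
for k, i in enumerate(ctrl):
    C[k, i] = w
A = np.vstack([L, C])
b = np.vstack([L @ V, w * targets])
V_def, *_ = np.linalg.lstsq(A, b, rcond=None)
```

The lifted vertex lands near its target while the rest of the chain bends smoothly, which is the adaptivity to geometric detail that the summary attributes to the deformation-based formulation.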
arXiv Detail & Related papers (2021-11-11T12:55:26Z)
- Mesh Draping: Parametrization-Free Neural Mesh Transfer [92.55503085245304]
Mesh Draping is a neural method for transferring existing mesh structure from one shape to another.
We show that by leveraging gradually increasing frequencies to guide the neural optimization, we are able to achieve stable and high quality mesh transfer.
arXiv Detail & Related papers (2021-10-11T17:24:52Z) - Deep Mesh Prior: Unsupervised Mesh Restoration using Graph Convolutional
Networks [0.0]
We propose a graph convolutional network on meshes to learn self-similarity.
The network takes a single incomplete mesh as input data and directly outputs the reconstructed mesh.
We demonstrate that our unsupervised method performs equally well or even better than the state-of-the-art methods using large-scale datasets.
arXiv Detail & Related papers (2021-07-02T07:21:10Z) - Primal-Dual Mesh Convolutional Neural Networks [62.165239866312334]
We apply a primal-dual framework drawn from the graph-neural-network literature to triangle meshes.
Our method takes features for both edges and faces of a 3D mesh as input and dynamically aggregates them.
We provide theoretical insights into our approach using tools from the mesh-simplification literature.
arXiv Detail & Related papers (2020-10-23T14:49:02Z) - GAMesh: Guided and Augmented Meshing for Deep Point Networks [4.599235672072547]
We present a new meshing algorithm called guided and augmented meshing, GAMesh, which uses a mesh prior to generate a surface for the output points of a point network.
By projecting the output points onto this prior, GAMesh ensures a surface with the same topology as the mesh prior but whose geometric fidelity is controlled by the point network.
arXiv Detail & Related papers (2020-10-19T18:23:53Z) - A Point-Cloud Deep Learning Framework for Prediction of Fluid Flow
Fields on Irregular Geometries [62.28265459308354]
The network learns an end-to-end mapping between spatial positions and CFD quantities.
Incompressible laminar steady flow past a cylinder with various cross-section shapes is considered.
The network predicts the flow fields hundreds of times faster than a conventional CFD solver.
arXiv Detail & Related papers (2020-10-15T12:15:02Z)
- Neural Subdivision [58.97214948753937]
This paper introduces Neural Subdivision, a novel framework for data-driven coarse-to-fine geometry modeling.
We optimize for the same set of network weights across all local mesh patches, thus providing an architecture that is not constrained to a specific input mesh, fixed genus, or category.
We demonstrate that even when trained on a single high-resolution mesh our method generates reasonable subdivisions for novel shapes.
arXiv Detail & Related papers (2020-05-04T20:03:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.