Neural Deformation Graphs for Globally-consistent Non-rigid
Reconstruction
- URL: http://arxiv.org/abs/2012.01451v1
- Date: Wed, 2 Dec 2020 19:00:13 GMT
- Title: Neural Deformation Graphs for Globally-consistent Non-rigid
Reconstruction
- Authors: Aljaž Božič, Pablo Palafox, Michael Zollhöfer, Justus Thies,
Angela Dai, Matthias Nießner
- Abstract summary: We introduce Neural Deformation Graphs for globally-consistent deformation tracking and 3D reconstruction of non-rigid objects.
Our method globally optimizes this neural graph on a given sequence of depth camera observations of a non-rigidly moving object.
Our experiments demonstrate that our Neural Deformation Graphs outperform state-of-the-art non-rigid reconstruction approaches both qualitatively and quantitatively.
- Score: 25.047402917282344
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce Neural Deformation Graphs for globally-consistent deformation
tracking and 3D reconstruction of non-rigid objects. Specifically, we
implicitly model a deformation graph via a deep neural network. This neural
deformation graph does not rely on any object-specific structure and, thus, can
be applied to general non-rigid deformation tracking. Our method globally
optimizes this neural graph on a given sequence of depth camera observations of
a non-rigidly moving object. Based on explicit viewpoint consistency as well as
inter-frame graph and surface consistency constraints, the underlying network
is trained in a self-supervised fashion. We additionally optimize for the
geometry of the object with an implicit deformable multi-MLP shape
representation. Our approach does not assume sequential input data, thus
enabling robust tracking of fast motions or even temporally disconnected
recordings. Our experiments demonstrate that our Neural Deformation Graphs
outperform state-of-the-art non-rigid reconstruction approaches both
qualitatively and quantitatively, with 64% improved reconstruction and 62%
improved deformation tracking performance.
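The deformation-graph idea underlying the method can be illustrated with a minimal sketch: each surface point is deformed by blending the rigid transforms of nearby graph nodes. This is a generic embedded-deformation sketch with placeholder Gaussian skinning weights, not the paper's learned network:

```python
# Minimal sketch of deformation-graph blending: each point is moved by a
# weighted sum of per-node rigid transforms. Node positions, transforms,
# and the Gaussian skinning weights are illustrative placeholders, not the
# quantities the paper's network learns.
import numpy as np

def deform_points(points, nodes, rotations, translations, sigma=0.5):
    """Deform `points` (N,3) by blending rigid transforms of graph nodes.

    nodes:        (K,3) node positions g_j
    rotations:    (K,3,3) per-node rotation matrices R_j
    translations: (K,3) per-node translations t_j
    """
    # Gaussian skinning weights from point-to-node distance, normalized per point.
    d2 = ((points[:, None, :] - nodes[None, :, :]) ** 2).sum(-1)  # (N,K)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)

    # Each node maps v -> R_j (v - g_j) + g_j + t_j; blend with the weights.
    local = points[:, None, :] - nodes[None, :, :]                # (N,K,3)
    moved = np.einsum('kij,nkj->nki', rotations, local) + nodes + translations
    return (w[..., None] * moved).sum(axis=1)                     # (N,3)

# Sanity check: identity transforms leave the points unchanged.
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
g = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 0.0]])
R = np.stack([np.eye(3), np.eye(3)])
t = np.zeros((2, 3))
out = deform_points(pts, g, R, t)
```

The paper additionally enforces viewpoint and inter-frame consistency losses on such node transforms; this sketch only shows the forward blending step.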
Related papers
- 4Deform: Neural Surface Deformation for Robust Shape Interpolation [47.47045870313048]
We develop a new approach to generate realistic intermediate shapes between non-rigidly deformed shapes in unstructured data.
Our method learns a continuous velocity field in Euclidean space and does not require intermediate-shape supervision during training.
For the first time, our method enables new applications like 4D Kinect sequence upsampling and real-world high-resolution mesh deformation.
arXiv Detail & Related papers (2025-02-27T15:47:49Z) - How Curvature Enhance the Adaptation Power of Framelet GCNs [27.831929635701886]
Graph neural network (GNN) has been demonstrated powerful in modeling graph-structured data.
This paper introduces a new approach to enhance GNN by discrete graph Ricci curvature.
We show that our curvature-based GNN model outperforms the state-of-the-art baselines in both homophily and heterophily graph datasets.
arXiv Detail & Related papers (2023-07-19T06:05:33Z) - Dynamic Graph Representation Learning via Edge Temporal States Modeling and Structure-reinforced Transformer [5.093187534912688]
We introduce the Recurrent Structure-reinforced Graph Transformer (RSGT), a novel framework for dynamic graph representation learning.
RSGT captures temporal node representations encoding both graph topology and evolving dynamics through a recurrent learning paradigm.
We show RSGT's superior performance in discrete dynamic graph representation learning, consistently outperforming existing methods in dynamic link prediction tasks.
arXiv Detail & Related papers (2023-04-20T04:12:50Z) - Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
arXiv Detail & Related papers (2022-05-19T14:08:15Z) - Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z) - Leveraging Equivariant Features for Absolute Pose Regression [9.30597356471664]
We show that a translation and rotation equivariant Convolutional Neural Network directly induces representations of camera motions into the feature space.
We then show that this geometric property allows for implicitly augmenting the training data under a whole group of image plane-preserving transformations.
arXiv Detail & Related papers (2022-04-05T12:44:20Z) - Graph-Guided Deformation for Point Cloud Completion [35.10606375236494]
We propose a Graph-Guided Deformation Network, which regards the input data and the intermediate generation as controlling and supporting points, respectively.
Our key insight is to simulate the least-squares Laplacian deformation process via mesh deformation methods, which brings adaptivity for modeling variation in geometric details.
We are the first to refine the point cloud completion task by mimicking traditional graphics algorithms with GCN-guided deformation.
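The least-squares Laplacian deformation that this entry refers to is a classic graphics technique and can be sketched on a toy graph: preserve differential (Laplacian) coordinates while softly pinning anchor vertices. The chain graph, uniform Laplacian, and anchor weight below are illustrative assumptions, not the paper's setup:

```python
# Hedged sketch of classic least-squares Laplacian deformation: solve for
# new positions that keep the Laplacian (differential) coordinates of the
# rest shape while soft constraints pull anchor vertices to new targets.
import numpy as np

def laplacian_deform(V, edges, anchor_idx, anchor_pos, w_anchor=10.0):
    """V: (N,3) rest positions; edges: list of (i,j); anchors are soft."""
    n = len(V)
    L = np.zeros((n, n))
    for i, j in edges:                 # uniform (combinatorial) Laplacian
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    delta = L @ V                      # differential coordinates to preserve

    C = np.zeros((len(anchor_idx), n)) # soft positional constraints
    for r, i in enumerate(anchor_idx):
        C[r, i] = w_anchor

    # Stack both objectives and solve the joint least-squares system.
    A = np.vstack([L, C])
    b = np.vstack([delta, w_anchor * np.asarray(anchor_pos)])
    X, *_ = np.linalg.lstsq(A, b, rcond=None)
    return X

# 3-vertex chain: pin both endpoints; the middle vertex follows smoothly.
V = np.array([[0.0, 0, 0], [1.0, 0, 0], [2.0, 0, 0]])
X = laplacian_deform(V, [(0, 1), (1, 2)], [0, 2], [[0.0, 0, 0], [2.0, 1, 0]])
```

Lifting one endpoint by one unit drags the unconstrained middle vertex about halfway up, which is the smoothing behavior the Laplacian term provides.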
arXiv Detail & Related papers (2021-11-11T12:55:26Z) - Orthogonal Graph Neural Networks [53.466187667936026]
Graph neural networks (GNNs) have received tremendous attention due to their superiority in learning node representations.
However, stacking more convolutional layers significantly decreases the performance of GNNs.
We propose a novel Ortho-GConv, which could generally augment the existing GNN backbones to stabilize the model training and improve the model's generalization performance.
arXiv Detail & Related papers (2021-09-23T12:39:01Z) - NeuroMorph: Unsupervised Shape Interpolation and Correspondence in One
Go [109.88509362837475]
We present NeuroMorph, a new neural network architecture that takes as input two 3D shapes.
NeuroMorph produces smooth and point-to-point correspondences between them.
It works well for a large variety of input shapes, including non-isometric pairs from different object categories.
arXiv Detail & Related papers (2021-06-17T12:25:44Z) - Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training.
FLAG is a general-purpose approach for graph data, which universally works in node classification, link prediction, and graph classification tasks.
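The gradient-based augmentation loop that FLAG describes can be sketched as ascent on a feature perturbation interleaved with averaged descent on the model weights. Here a softmax linear classifier with hand-derived gradients stands in for a GNN backbone, and the step count and step sizes are illustrative, not the paper's tuned values:

```python
# Hedged sketch of FLAG-style adversarial feature augmentation: ascend on a
# perturbation `delta` of the input features while accumulating an averaged
# descent gradient for the weights. A softmax linear classifier stands in
# for a real graph network; gradients are derived by hand.
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def flag_step(W, x, y, steps=3, ascent=0.01, lr=0.1):
    """One training step with `steps` inner adversarial ascent steps."""
    n, c = len(x), W.shape[1]
    onehot = np.eye(c)[y]
    delta = np.random.uniform(-ascent, ascent, size=x.shape)
    grad_W = np.zeros_like(W)
    for _ in range(steps):
        p = softmax((x + delta) @ W)
        err = (p - onehot) / n                    # d(loss)/d(logits)
        grad_W += (x + delta).T @ err / steps     # accumulate averaged grad
        grad_x = err @ W.T                        # d(loss)/d(input features)
        delta = delta + ascent * np.sign(grad_x)  # gradient *ascent* on delta
    W_new = W - lr * grad_W                       # descend on the weights
    loss = -np.log(p[np.arange(n), y] + 1e-12).mean()
    return W_new, loss

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))
y = rng.integers(0, 2, size=8)
W, loss = flag_step(np.zeros((4, 2)), x, y)
```

Because the perturbation attacks node features rather than graph structure, the same loop applies unchanged to node classification, link prediction, or graph classification heads.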
arXiv Detail & Related papers (2020-10-19T21:51:47Z) - Stochastic Graph Recurrent Neural Network [6.656993023468793]
We propose SGRNN, a novel neural architecture that applies latent variables to simultaneously capture evolution in node attributes and topology.
Specifically, deterministic states are separated from states in the iterative process to suppress mutual interference.
Experiments on real-world datasets demonstrate the effectiveness of the proposed model.
arXiv Detail & Related papers (2020-09-01T16:14:30Z) - Dense Non-Rigid Structure from Motion: A Manifold Viewpoint [162.88686222340962]
The Non-Rigid Structure-from-Motion (NRSfM) problem aims to recover the 3D geometry of a deforming object from its 2D feature correspondences across multiple frames.
We show that our approach significantly improves accuracy, scalability, and robustness against noise.
arXiv Detail & Related papers (2020-06-15T09:15:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.