Deep Active Latent Surfaces for Medical Geometries
- URL: http://arxiv.org/abs/2206.10241v1
- Date: Tue, 21 Jun 2022 10:33:32 GMT
- Title: Deep Active Latent Surfaces for Medical Geometries
- Authors: Patrick M. Jensen, Udaranga Wickramasinghe, Anders B. Dahl, Pascal
Fua, Vedrana A. Dahl
- Abstract summary: Shape priors have long been known to be effective when reconstructing 3D shapes from noisy or incomplete data.
In this paper, we advocate a hybrid approach representing shapes in terms of 3D meshes with a separate latent vector at each vertex.
For inference, the latent vectors are updated independently while imposing spatial regularization constraints.
We show that this gives us both flexibility and generalization capabilities, which we demonstrate on several medical image processing tasks.
- Score: 51.82897666576424
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Shape priors have long been known to be effective when reconstructing 3D
shapes from noisy or incomplete data. When using a deep-learning-based shape
representation, this often involves learning a latent representation, which can
be either in the form of a single global vector or of multiple local ones. The
latter allows more flexibility but is prone to overfitting. In this paper, we
advocate a hybrid approach representing shapes in terms of 3D meshes with a
separate latent vector at each vertex. During training, the latent vectors are
constrained to have the same value, which avoids overfitting. For inference,
the latent vectors are updated independently while imposing spatial
regularization constraints. We show that this gives us both flexibility and
generalization capabilities, which we demonstrate on several medical image
processing tasks.
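To make the inference scheme concrete, here is a minimal sketch of per-vertex latent optimization under a spatial regularizer, written in PyTorch. The decoder, target data, and mesh connectivity are placeholders rather than the authors' architecture; only the pattern of independently updated per-vertex codes plus an edge-based smoothness penalty reflects the abstract above.

```python
import torch

# Hypothetical setup: V mesh vertices, a frozen decoder mapping a latent code
# to a local surface prediction, and an edge list giving mesh connectivity.
V, D = 2562, 64                              # number of vertices, latent size
z = torch.zeros(V, D, requires_grad=True)    # one latent vector per vertex
edges = torch.randint(0, V, (2, 10_000))     # placeholder mesh edges (i, j)

decoder = torch.nn.Sequential(               # stand-in for the trained decoder
    torch.nn.Linear(D, 128), torch.nn.ReLU(), torch.nn.Linear(128, 3)
)
for p in decoder.parameters():
    p.requires_grad_(False)                  # decoder stays fixed at inference

target = torch.randn(V, 3)                   # placeholder noisy observations
opt = torch.optim.Adam([z], lr=1e-2)
lam = 0.1                                    # weight of the spatial regularizer

for step in range(200):
    opt.zero_grad()
    data_term = (decoder(z) - target).pow(2).mean()
    # Spatial regularization: penalize latent differences along mesh edges,
    # which keeps per-vertex codes locally consistent while letting each
    # vertex's latent vector be updated independently.
    smooth_term = (z[edges[0]] - z[edges[1]]).pow(2).mean()
    loss = data_term + lam * smooth_term
    loss.backward()
    opt.step()
```

The weight lam trades data fidelity against spatial coherence of the latent field; with lam set to zero the per-vertex codes are free to overfit the noisy observations.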
Related papers
- A Scalable Combinatorial Solver for Elastic Geometrically Consistent 3D Shape Matching [69.14632473279651]
We present a scalable algorithm for globally optimizing over the space of geometrically consistent mappings between 3D shapes.
We propose a novel primal coupled with a Lagrange dual problem that is several orders of magnitude faster than previous solvers.
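As a generic illustration of coupling a primal problem with a Lagrange dual (a toy binary program, not the paper's shape-matching formulation), the sketch below relaxes the coupling constraints and alternates decomposed primal minimization with subgradient ascent on the multipliers.

```python
import numpy as np

# Toy illustration of the primal/dual idea: relax coupling constraints A x = b
# with multipliers lam and ascend the Lagrange dual by subgradient steps.
# The problem data below is random; it is not the shape-matching formulation.
rng = np.random.default_rng(0)
n, m = 20, 5
c = rng.random(n)                              # primal costs
A = rng.standard_normal((m, n))                # relaxed coupling constraints A x = b
b = A @ (rng.random(n) < 0.5).astype(float)    # feasible right-hand side
lam = np.zeros(m)                              # Lagrange multipliers

for t in range(1, 200):
    # Primal step: minimize c.x + lam.(A x - b) over x in {0,1}^n, which
    # decomposes per coordinate once the coupling is moved to the objective.
    reduced_cost = c + lam @ A
    x = (reduced_cost < 0).astype(float)
    # Dual step: subgradient ascent on the dual function with a 1/t step size.
    lam += (A @ x - b) / t
```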
arXiv Detail & Related papers (2022-04-27T09:47:47Z)
- ShapeFormer: Transformer-based Shape Completion via Sparse Representation [41.33457875133559]
We present ShapeFormer, a network that produces a distribution of object completions conditioned on incomplete, and possibly noisy, point clouds.
The resultant distribution can then be sampled to generate likely completions, each exhibiting plausible shape details while being faithful to the input.
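A heavily simplified sketch of the sampling idea: the components below (encoder, next-code predictor, embedding) are placeholders, not ShapeFormer's architecture, but they show how a conditional distribution over discrete shape codes can be sampled several times to obtain distinct completions of the same partial input.

```python
import torch

# Hypothetical stand-ins for the components: an encoder that maps a partial
# point cloud to a conditioning vector, an autoregressive next-code predictor
# over a discrete shape vocabulary, and an embedding that feeds samples back in.
VOCAB, SEQ, DIM = 1024, 256, 128

encoder = torch.nn.Linear(3, DIM)                       # placeholder encoder
next_code = torch.nn.Linear(DIM, VOCAB)                 # placeholder predictor
embed = torch.nn.Embedding(VOCAB, DIM)

partial_cloud = torch.randn(2048, 3)                    # noisy, incomplete input
cond = encoder(partial_cloud).mean(dim=0, keepdim=True) # pooled conditioning state

# Because the model defines a distribution over discrete shape codes, each pass
# through the sampler yields a different, plausible completion that is still
# conditioned on the same partial input.
completions = []
for _ in range(4):
    tokens, state = [], cond
    for _ in range(SEQ):
        probs = torch.softmax(next_code(state), dim=-1) # distribution over codes
        code = torch.multinomial(probs, 1)              # sample one code
        tokens.append(code)
        state = state + embed(code.squeeze(-1))         # condition on the sample
    completions.append(torch.cat(tokens, dim=-1))       # one sampled code sequence
```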
arXiv Detail & Related papers (2022-01-25T13:58:30Z)
- A-SDF: Learning Disentangled Signed Distance Functions for Articulated Shape Representation [62.517760545209065]
We introduce Articulated Signed Distance Functions (A-SDF) to represent articulated shapes with a disentangled latent space.
We demonstrate that our model generalizes well to out-of-distribution and unseen data, e.g., partial point clouds and real-world depth images.
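A minimal sketch of a disentangled SDF decoder in this spirit, with placeholder sizes and MLP: the signed distance at a query point is conditioned on separate shape and articulation codes.

```python
import torch

# Sketch of a disentangled SDF decoder: the signed distance at a query point
# depends on a shape code and a separate articulation code (e.g., joint angles).
# Dimensions and the MLP below are placeholders, not the A-SDF architecture.
SHAPE_DIM, ART_DIM = 128, 2

sdf = torch.nn.Sequential(
    torch.nn.Linear(3 + SHAPE_DIM + ART_DIM, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 1),
)

points = torch.randn(4096, 3)                    # query points in space
shape_code = torch.randn(SHAPE_DIM)              # identity of the object
articulation = torch.tensor([0.3, -1.1])         # e.g., two joint angles

inp = torch.cat(
    [points,
     shape_code.expand(points.shape[0], -1),
     articulation.expand(points.shape[0], -1)],
    dim=-1,
)
distances = sdf(inp)                             # signed distance per query point
# Holding shape_code fixed and varying the articulation code re-poses the same
# object, which is the point of keeping the two factors disentangled.
```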
arXiv Detail & Related papers (2021-04-15T17:53:54Z)
- SNARF: Differentiable Forward Skinning for Animating Non-Rigid Neural Implicit Shapes [117.76767853430243]
We introduce SNARF, which combines the advantages of linear blend skinning for polygonal meshes with neural implicit surfaces.
We propose a forward skinning model that finds all canonical correspondences of any deformed point using iterative root finding.
Compared to state-of-the-art neural implicit representations, our approach generalizes better to unseen poses while preserving accuracy.
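The root-finding step can be sketched as follows; the skinning-weight field and bone transforms are placeholders, and a plain fixed-point update stands in for the paper's iterative root finder.

```python
import torch

# Toy sketch of the forward-skinning root-finding idea: given bone transforms
# and a (placeholder) skinning-weight field in canonical space, recover the
# canonical point x_c whose forward-skinned image matches a deformed point x_d.
bones = torch.stack([torch.eye(4), torch.eye(4)])
bones[1, :3, 3] = torch.tensor([0.2, 0.0, 0.0])           # second bone translated

def weights(x):
    # Placeholder smooth weight field over two bones (not a learned network).
    a = torch.sigmoid(4.0 * x[..., :1])
    return torch.cat([1.0 - a, a], dim=-1)

def forward_skin(x_c):
    # Linear blend skinning: blend the bone transforms with weights at x_c.
    xh = torch.cat([x_c, torch.ones_like(x_c[..., :1])], dim=-1)   # homogeneous
    per_bone = torch.einsum('bij,nj->nbi', bones, xh)[..., :3]
    return (weights(x_c).unsqueeze(-1) * per_bone).sum(dim=1)

x_d = torch.tensor([[0.45, 0.1, 0.0]])                    # observed deformed point
x_c = x_d.clone()                                         # initial canonical guess
for _ in range(50):                                       # fixed-point iteration as a
    x_c = x_c + (x_d - forward_skin(x_c))                 # stand-in for Newton/Broyden
```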
arXiv Detail & Related papers (2021-04-08T17:54:59Z)
- Adjoint Rigid Transform Network: Task-conditioned Alignment of 3D Shapes [86.2129580231191]
Adjoint Rigid Transform (ART) Network is a neural module which can be integrated with a variety of 3D networks.
ART learns to rotate input shapes to a learned canonical orientation, which is crucial for many tasks.
We will release our code and pre-trained models for further research.
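A rough sketch of the canonicalization idea, with a placeholder pooling network standing in for ART: predict a matrix from the input points, project it onto SO(3), and rotate the shape into the predicted canonical frame.

```python
import torch

# Sketch of the canonicalization idea: a small network predicts a rotation from
# the input point cloud, the prediction is projected onto SO(3), and the shape
# is rotated into a canonical orientation before any downstream network runs.
# The pooling network below is a placeholder, not the ART architecture.
predict = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.ReLU(),
                              torch.nn.Linear(64, 9))

def canonicalize(points):                       # points: (N, 3)
    m = predict(points).mean(dim=0).reshape(3, 3)
    u, _, vt = torch.linalg.svd(m)              # nearest rotation via SVD
    r = u @ vt
    if torch.det(r) < 0:                        # keep a proper rotation (det = +1)
        u = u.clone()
        u[:, -1] = -u[:, -1]
        r = u @ vt
    return points @ r.T                         # shape in canonical orientation

aligned = canonicalize(torch.randn(1024, 3))
```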
arXiv Detail & Related papers (2021-02-01T20:58:45Z)
- Deep Implicit Templates for 3D Shape Representation [70.9789507686618]
We propose a new 3D shape representation that supports explicit correspondence reasoning in deep implicit representations.
Our key idea is to formulate deep implicit functions (DIFs) as conditional deformations of a template implicit function.
We show that our method can not only learn a common implicit template for a collection of shapes, but also establish dense correspondences across all the shapes simultaneously without any supervision.
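A minimal sketch of the template-plus-deformation idea with placeholder networks: a shared template implicit function is queried at coordinates warped by a latent-conditioned deformation, so correspondences follow from the shared template.

```python
import torch

# Minimal sketch: a shared template implicit function T is queried at warped
# coordinates W(x, c), so the latent code c only controls the deformation and
# correspondences come from the shared template. Both networks are placeholders.
LATENT = 64

template = torch.nn.Sequential(torch.nn.Linear(3, 128), torch.nn.ReLU(),
                               torch.nn.Linear(128, 1))          # T: R^3 -> SDF
warp = torch.nn.Sequential(torch.nn.Linear(3 + LATENT, 128), torch.nn.ReLU(),
                           torch.nn.Linear(128, 3))              # W: (x, c) -> x'

def implicit(x, code):
    x_template = warp(torch.cat([x, code.expand(x.shape[0], -1)], dim=-1))
    return template(x_template)       # same template evaluated for every shape

x = torch.randn(4096, 3)
sdf_values = implicit(x, torch.randn(LATENT))
# Points from two shapes that warp to the same template location are in dense
# correspondence, which is how the shared template induces correspondences.
```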
arXiv Detail & Related papers (2020-11-30T06:01:49Z)
- Gram Regularization for Multi-view 3D Shape Retrieval [3.655021726150368]
We propose a novel regularization term called Gram regularization.
By forcing the variance between weight kernels to be large, the regularizer can help to extract discriminative features.
The proposed Gram regularization is data independent and can converge stably and quickly without bells and whistles.
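One plausible reading of such a regularizer (illustrative, not necessarily the paper's exact formulation) penalizes the off-diagonal entries of the Gram matrix of flattened convolution kernels so that kernels stay mutually dissimilar:

```python
import torch

# Illustrative Gram-style penalty on convolution kernels: flatten each output
# channel's kernel, form the Gram matrix of pairwise inner products, and
# penalize the off-diagonal entries so kernels stay mutually dissimilar.
def gram_penalty(conv_weight):                     # (out_ch, in_ch, k, k)
    w = conv_weight.flatten(start_dim=1)           # one row per kernel
    w = torch.nn.functional.normalize(w, dim=1)    # compare directions only
    gram = w @ w.T                                 # pairwise kernel similarities
    off_diag = gram - torch.diag(torch.diag(gram))
    return off_diag.pow(2).sum()                   # small when kernels differ

conv = torch.nn.Conv2d(16, 32, kernel_size=3)
loss_reg = gram_penalty(conv.weight)               # add to the task loss, scaled
```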
arXiv Detail & Related papers (2020-11-16T05:37:24Z)