MetaSDF: Meta-learning Signed Distance Functions
- URL: http://arxiv.org/abs/2006.09662v1
- Date: Wed, 17 Jun 2020 05:14:53 GMT
- Title: MetaSDF: Meta-learning Signed Distance Functions
- Authors: Vincent Sitzmann, Eric R. Chan, Richard Tucker, Noah Snavely, Gordon
Wetzstein
- Abstract summary: Generalizing across shapes with neural implicit representations amounts to learning priors over the respective function space.
We formalize learning of a shape space as a meta-learning problem and leverage gradient-based meta-learning algorithms to solve this task.
- Score: 85.81290552559817
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural implicit shape representations are an emerging paradigm that offers
many potential benefits over conventional discrete representations, including
memory efficiency at a high spatial resolution. Generalizing across shapes with
such neural implicit representations amounts to learning priors over the
respective function space and enables geometry reconstruction from partial or
noisy observations. Existing generalization methods rely on conditioning a
neural network on a low-dimensional latent code that is either regressed by an
encoder or jointly optimized in the auto-decoder framework. Here, we formalize
learning of a shape space as a meta-learning problem and leverage
gradient-based meta-learning algorithms to solve this task. We demonstrate that
this approach performs on par with auto-decoder based approaches while being an
order of magnitude faster at test-time inference. We further demonstrate that
the proposed gradient-based method outperforms encoder-decoder based methods
that leverage pooling-based set encoders.
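The gradient-based meta-learning the abstract refers to can be sketched compactly: an inner loop specializes a shared initialization to a single shape with a few SGD steps, and an outer loop optimizes that initialization across shapes. The following is a minimal, self-contained sketch in the MAML style; the toy sphere task, network sizes, learning rates, and step counts are illustrative assumptions, not the authors' implementation.

```python
# Minimal MAML-style sketch for meta-learning an SDF network.
# All hyperparameters and the toy sphere data are assumptions for illustration.
import torch

def init_params(hidden=64, layers=3):
    """Shared initialization of a small coordinate MLP: R^3 -> signed distance."""
    sizes = [3] + [hidden] * layers + [1]
    params = []
    for d_in, d_out in zip(sizes[:-1], sizes[1:]):
        w = torch.randn(d_out, d_in) * (2.0 / d_in) ** 0.5
        params.append(w.requires_grad_())
        params.append(torch.zeros(d_out, requires_grad=True))
    return params

def sdf(params, x):
    """Forward pass of the coordinate MLP."""
    h = x
    for w, b in zip(params[0:-2:2], params[1:-2:2]):
        h = torch.relu(h @ w.t() + b)
    return h @ params[-2].t() + params[-1]

def inner_adapt(params, coords, target, steps=5, lr=1e-2):
    """Inner loop: specialize the shared initialization to one shape."""
    fast = list(params)
    for _ in range(steps):
        loss = ((sdf(fast, coords) - target) ** 2).mean()
        grads = torch.autograd.grad(loss, fast, create_graph=True)
        fast = [p - lr * g for p, g in zip(fast, grads)]
    return fast

def sample_sphere_task(n=512):
    """Toy 'shape': analytic SDF of a sphere with a random radius."""
    radius = 0.3 + 0.5 * torch.rand(1)
    coords = torch.rand(n, 3) * 2 - 1
    return coords, coords.norm(dim=-1, keepdim=True) - radius

params = init_params()
meta_opt = torch.optim.Adam(params, lr=1e-4)
for step in range(1000):
    coords, target = sample_sphere_task()
    fast = inner_adapt(params, coords, target)              # per-shape adaptation
    meta_loss = ((sdf(fast, coords) - target) ** 2).mean()  # evaluate adapted params
    meta_opt.zero_grad()
    meta_loss.backward()                                    # outer loop across shapes
    meta_opt.step()
```

At test time a new shape is reconstructed by running the inner adaptation alone from the meta-learned initialization, which is consistent with the abstract's claim of much faster inference than optimizing an auto-decoder latent code.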
Related papers
- Generalizable Neural Fields as Partially Observed Neural Processes [16.202109517569145]
We propose a new paradigm that views the large-scale training of neural representations as a part of a partially-observed neural process framework.
We demonstrate that this approach outperforms both state-of-the-art gradient-based meta-learning approaches and hypernetwork approaches.
arXiv Detail & Related papers (2023-09-13T01:22:16Z)
- Few 'Zero Level Set'-Shot Learning of Shape Signed Distance Functions in Feature Space [6.675491069288519]
We explore a new idea for learning-based shape reconstruction from a point cloud.
We use a convolutional encoder to build a feature space given the input point cloud.
An implicit decoder learns to predict signed distance values given points represented in this feature space.
arXiv Detail & Related papers (2022-07-09T00:14:39Z)
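The encoder/implicit-decoder pattern summarized in the entry above can be sketched generically: an encoder maps the input point cloud to a shape feature, and an MLP decoder maps a query point conditioned on that feature to a signed distance. The voxel-grid encoder, module names, and dimensions below are illustrative assumptions and do not reproduce the paper's actual feature-space construction.

```python
# Generic conditional-SDF sketch: convolutional encoder + implicit decoder.
# The voxelized input and all shapes/sizes are assumptions for illustration.
import torch
import torch.nn as nn

class VoxelEncoder(nn.Module):
    """3D conv encoder: occupancy grid (1, 32, 32, 32) -> global shape feature."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 8 -> 4
        )
        self.fc = nn.Linear(64 * 4 * 4 * 4, feat_dim)

    def forward(self, vox):
        return self.fc(self.conv(vox).flatten(1))

class ImplicitDecoder(nn.Module):
    """MLP decoder: (query point, shape feature) -> signed distance."""
    def __init__(self, feat_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, points, feat):
        feat = feat.unsqueeze(1).expand(-1, points.shape[1], -1)
        return self.net(torch.cat([points, feat], dim=-1))

# Usage with dummy inputs: a batch of occupancy grids and query points in [-1, 1]^3.
enc, dec = VoxelEncoder(), ImplicitDecoder()
vox = torch.zeros(2, 1, 32, 32, 32)
pts = torch.rand(2, 1024, 3) * 2 - 1
pred_sdf = dec(pts, enc(vox))   # shape (2, 1024, 1)
```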
- Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation [88.14365009076907]
Iterative refinement is a useful paradigm for representation learning.
We develop an implicit differentiation approach that improves the stability and tractability of training.
arXiv Detail & Related papers (2022-07-02T10:00:35Z)
- Stabilizing Q-learning with Linear Architectures for Provably Efficient Learning [53.17258888552998]
This work proposes an exploration variant of the basic $Q$-learning protocol with linear function approximation.
We show that the performance of the algorithm degrades very gracefully under a novel and more permissive notion of approximation error.
arXiv Detail & Related papers (2022-06-01T23:26:51Z)
- Deep Equilibrium Assisted Block Sparse Coding of Inter-dependent Signals: Application to Hyperspectral Imaging [71.57324258813675]
A dataset of inter-dependent signals is defined as a matrix whose columns demonstrate strong dependencies.
A neural network is employed to act as structure prior and reveal the underlying signal interdependencies.
Deep unrolling and Deep equilibrium based algorithms are developed, forming highly interpretable and concise deep-learning-based architectures.
arXiv Detail & Related papers (2022-03-29T21:00:39Z)
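The deep-unrolling idea mentioned in the entry above can be illustrated with a generic LISTA-style sketch, in which a fixed number of ISTA iterations for sparse coding are unrolled into a network with learnable matrices and a learnable threshold. The plain l1 penalty, dimensions, and iteration count are assumptions for illustration; the paper itself addresses block-sparse coding of inter-dependent signals and also develops a deep-equilibrium variant.

```python
# Generic LISTA-style sketch: ISTA iterations for sparse coding unrolled into a
# network with learnable matrices. Dimensions and iteration count are assumptions.
import torch
import torch.nn as nn

def soft_threshold(x, lam):
    """Proximal operator of the l1 norm."""
    return torch.sign(x) * torch.clamp(x.abs() - lam, min=0.0)

class UnrolledISTA(nn.Module):
    """K unrolled ISTA iterations mapping a signal y to a sparse code z."""
    def __init__(self, signal_dim=64, code_dim=128, iters=10):
        super().__init__()
        self.iters = iters
        self.We = nn.Linear(signal_dim, code_dim, bias=False)  # data-dependent step
        self.S = nn.Linear(code_dim, code_dim, bias=False)     # recurrent step
        self.lam = nn.Parameter(torch.tensor(0.1))             # learnable threshold

    def forward(self, y):
        z = soft_threshold(self.We(y), self.lam)
        for _ in range(self.iters):
            z = soft_threshold(self.We(y) + self.S(z), self.lam)
        return z

# Usage: each row of y is one signal (a column of the inter-dependent signal matrix).
model = UnrolledISTA()
y = torch.randn(32, 64)
codes = model(y)   # (32, 128) sparse codes
```

A deep-equilibrium version would replace the fixed unroll with a fixed-point solve of the same update, differentiated implicitly.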
- Meta-Learning Sparse Implicit Neural Representations [69.15490627853629]
Implicit neural representations are a promising new avenue for representing general signals.
The current approach is difficult to scale to a large number of signals or to an entire data set.
We show that meta-learned sparse neural representations achieve a much smaller loss than dense meta-learned models.
arXiv Detail & Related papers (2021-10-27T18:02:53Z)
- Deep Magnification-Flexible Upsampling over 3D Point Clouds [103.09504572409449]
We propose a novel end-to-end learning-based framework to generate dense point clouds.
We first formulate the problem explicitly, which boils down to determining the weights and high-order approximation errors.
Then, we design a lightweight neural network to adaptively learn unified and sorted weights as well as the high-order refinements.
arXiv Detail & Related papers (2020-11-25T14:00:18Z)