Unsupervised Shape Completion via Deep Prior in the Neural Tangent Kernel Perspective
- URL: http://arxiv.org/abs/2104.09023v1
- Date: Mon, 19 Apr 2021 02:41:15 GMT
- Title: Unsupervised Shape Completion via Deep Prior in the Neural Tangent Kernel Perspective
- Authors: Lei Chu, Hao Pan, Wenping Wang
- Abstract summary: We present a novel approach for completing and reconstructing 3D shapes from incomplete scanned data by using deep neural networks.
Rather than being trained on supervised completion tasks and applied on a testing shape, the network is optimized from scratch on the single testing shape.
The ability to complete missing data by an untrained neural network is usually referred to as the deep prior.
- Score: 40.39169145231995
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel approach for completing and reconstructing 3D shapes from
incomplete scanned data by using deep neural networks. Rather than being
trained on supervised completion tasks and applied on a testing shape, the
network is optimized from scratch on the single testing shape, to fully adapt
to the shape and complete the missing data using contextual guidance from the
known regions. The ability to complete missing data by an untrained neural
network is usually referred to as the deep prior. In this paper, we interpret
the deep prior from a neural tangent kernel (NTK) perspective and show that the
completed shape patches by the trained CNN are naturally similar to existing
patches, as they are proximate in the kernel feature space induced by NTK. The
interpretation allows us to design more efficient network structures and
learning mechanisms for the shape completion and reconstruction task. Being
more aware of structural regularities than both traditional and other
unsupervised learning-based reconstruction methods, our approach completes
large missing regions with plausible shapes and complements supervised
learning-based methods that use database priors by requiring no extra training
data set and showing flexible adaptation to a particular shape instance.
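As a rough, self-contained sketch of the deep-prior mechanism described above (not the paper's actual architecture, data, or loss), the following fits a small untrained CNN from scratch to the known region of a toy 2D occupancy grid; whatever the network produces in the masked region is read off as the completion. All names, sizes, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy deep-prior completion on a 2D occupancy grid (a stand-in for a 3D shape).
# The CNN is randomly initialized and supervised ONLY on the known region; its
# output in the masked region comes purely from the network's inductive bias.
torch.manual_seed(0)
H = W = 64
target = torch.zeros(1, 1, H, W)
target[:, :, 16:48, 16:48] = 1.0              # "shape": a filled square
mask = torch.ones_like(target)
mask[:, :, 24:40, 40:56] = 0.0                # unknown (missing) region

net = nn.Sequential(                          # small untrained CNN (placeholder)
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
z = torch.randn(1, 1, H, W)                   # fixed random input code
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(500):                       # optimized from scratch on ONE shape
    opt.zero_grad()
    loss = ((net(z) - target) ** 2 * mask).mean()   # loss over known region only
    loss.backward()
    opt.step()

completed = net(z).detach()                   # completion read from masked region
```

In the NTK reading of the abstract, this masked-region fill behaves approximately like kernel regression with the network's tangent kernel, so the completed values resemble known regions that are nearby in NTK feature space.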
Related papers
- When Deep Learning Meets Polyhedral Theory: A Survey [6.899761345257773]
In the past decade, deep learning became the prevalent methodology for predictive modeling thanks to the remarkable accuracy of deep neural networks.
Meanwhile, the structure of neural networks converged back to simpler piecewise linear functions.
arXiv Detail & Related papers (2023-04-29T11:46:53Z)
- Contour Completion using Deep Structural Priors [1.7399355670260819]
We present a framework that completes disconnected contours and connects fragmented lines and curves.
In our framework, we propose a model that does not even need to know which regions of the contour are eliminated.
Our work builds a robust framework to achieve contour completion using deep structural priors and extensively investigates how such a model could be implemented.
arXiv Detail & Related papers (2023-02-09T05:45:33Z)
- Gradient Descent in Neural Networks as Sequential Learning in RKBS [63.011641517977644]
We construct an exact power-series representation of the neural network in a finite neighborhood of the initial weights.
We prove that, regardless of width, the training sequence produced by gradient descent can be exactly replicated by regularized sequential learning.
arXiv Detail & Related papers (2023-02-01T03:18:07Z)
- Adaptive Convolutional Dictionary Network for CT Metal Artifact Reduction [62.691996239590125]
We propose an adaptive convolutional dictionary network (ACDNet) for metal artifact reduction.
Our ACDNet can automatically learn the prior for artifact-free CT images via training data and adaptively adjust the representation kernels for each input CT image.
Our method inherits the clear interpretability of model-based methods and maintains the powerful representation ability of learning-based methods.
arXiv Detail & Related papers (2022-05-16T06:49:36Z)
- Localized Persistent Homologies for more Effective Deep Learning [60.78456721890412]
We introduce an approach that relies on a new filtration function to account for location during network training.
We demonstrate experimentally on 2D images of roads and 3D image stacks of neuronal processes that networks trained in this manner are better at recovering the topology of the curvilinear structures they extract.
arXiv Detail & Related papers (2021-10-12T19:28:39Z)
- Self Context and Shape Prior for Sensorless Freehand 3D Ultrasound Reconstruction [61.62191904755521]
3D freehand US reconstruction is promising for addressing this problem by providing broad-range, freeform scans.
Existing deep-learning-based methods focus only on the basic cases of skill sequences.
We propose a novel approach to sensorless freehand 3D US reconstruction that considers complex skill sequences.
arXiv Detail & Related papers (2021-07-31T16:06:50Z)
- KShapeNet: Riemannian network on Kendall shape space for Skeleton based Action Recognition [7.183483982542308]
We propose a geometry-aware deep learning approach for skeleton-based action recognition.
Skeletons are first modeled as trajectories on Kendall's shape space and then mapped to the linear tangent space.
The resulting structured data are then fed to a deep learning architecture, which includes a layer that optimizes over rigid and non-rigid transformations.
arXiv Detail & Related papers (2020-11-24T10:14:07Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges, which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- The Surprising Simplicity of the Early-Time Learning Dynamics of Neural Networks [43.860358308049044]
In this work, we show that these common perceptions can be completely false in the early phase of learning.
We argue that this surprising simplicity can persist in networks with more layers and with convolutional architectures.
arXiv Detail & Related papers (2020-06-25T17:42:49Z)
- Deep Manifold Prior [37.725563645899584]
We present a prior for manifold structured data, such as surfaces of 3D shapes, where deep neural networks are adopted to reconstruct a target shape using gradient descent.
We show that surfaces generated this way are smooth, with limiting behavior characterized by Gaussian processes, and we mathematically derive such properties for fully-connected as well as convolutional networks.
arXiv Detail & Related papers (2020-04-08T20:47:56Z)
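For readers who want the kernel view running through these papers (the NTK interpretation of the main abstract, and the Gaussian-process limits noted in Deep Manifold Prior) in concrete form, here is a small sketch that computes an empirical NTK Gram matrix from parameter gradients and uses it for kernel ridge regression. The toy MLP, data, and names are placeholder assumptions, not the setup of any paper listed here.

```python
import torch

def ntk_features(model, x):
    """Flattened gradient of the scalar output w.r.t. all parameters."""
    out = model(x.unsqueeze(0)).squeeze()
    grads = torch.autograd.grad(out, list(model.parameters()))
    return torch.cat([g.reshape(-1) for g in grads])

torch.manual_seed(0)
model = torch.nn.Sequential(
    torch.nn.Linear(16, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))

xs = torch.randn(6, 16)                        # toy inputs ("patches")
feats = torch.stack([ntk_features(model, x) for x in xs])
K = feats @ feats.T                            # empirical NTK: K[i,j] = <grad f(x_i), grad f(x_j)>

ys = torch.randn(6, 1)                         # toy targets
alpha = torch.linalg.solve(K + 1e-3 * torch.eye(6), ys)   # kernel ridge solve
```

Inputs whose gradient features are close under K receive similar predictions under gradient-descent training, which is the intuition behind the main paper's observation that completed patches end up proximate to known patches in NTK feature space.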
This list is automatically generated from the titles and abstracts of the papers on this site.