Towards Efficient Time Stepping for Numerical Shape Correspondence
- URL: http://arxiv.org/abs/2312.13841v1
- Date: Thu, 21 Dec 2023 13:40:03 GMT
- Title: Towards Efficient Time Stepping for Numerical Shape Correspondence
- Authors: Alexander Köhler, Michael Breuß
- Abstract summary: Methods based on partial differential equations (PDEs) have been established, encompassing e.g. the classic heat kernel signature.
We consider here several time stepping schemes. The goal of this investigation is to assess whether one may identify a useful property of time integration methods for the shape analysis context.
- Score: 55.2480439325792
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The computation of correspondences between shapes is a principal task in
shape analysis. To this end, methods based on partial differential equations
(PDEs) have been established, encompassing e.g. the classic heat kernel
signature as well as numerical solution schemes for geometric PDEs. In this
work we focus on the latter approach.
We consider here several time stepping schemes. The goal of this
investigation is to assess whether one may identify a useful property of time
integration methods for the shape analysis context. In particular, we
investigate the dependence on the time step size, since the class of implicit
schemes that are useful candidates in this context should ideally yield
behaviour that is invariant with respect to this parameter.
To this end we study the integration of the heat and the wave equation on a
manifold. In order to facilitate this study, we propose an efficient, unified
model order reduction framework for these models. We show that specific
$L_0$-stable schemes are favourable for numerical shape analysis. We give an
experimental evaluation of the methods using classical TOSCA data sets.
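To make the role of the time step size concrete, the following sketch compares
two implicit schemes for the spatially discretized heat equation $u_t = -Lu$:
backward Euler, which is L-stable, and Crank-Nicolson, which is A-stable but
not L-stable. This is only an illustration of the stability notions involved,
not code from the paper; in particular, the toy 1D chain Laplacian stands in
for a mesh (e.g. cotangent) Laplacian and the step sizes are chosen purely for
demonstration.

```python
# Hedged sketch (not from the paper): implicit time stepping for the heat
# equation u_t = -L u after spatial discretization. A toy 1D chain graph
# Laplacian stands in for a surface (e.g. cotangent) Laplacian.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
main = 2.0 * np.ones(n)
main[0] = main[-1] = 1.0
L = sp.diags([-np.ones(n - 1), main, -np.ones(n - 1)], [-1, 0, 1])

u0 = np.zeros(n)
u0[n // 2] = 1.0  # delta-like heat impulse

def backward_euler(u, tau, steps):
    """(I + tau*L) u_{k+1} = u_k  -- L-stable: damps stiff modes for any tau."""
    A = sp.csc_matrix(sp.identity(n) + tau * L)
    solve = spla.factorized(A)  # factor once, reuse in every step
    for _ in range(steps):
        u = solve(u)
    return u

def crank_nicolson(u, tau, steps):
    """(I + tau/2*L) u_{k+1} = (I - tau/2*L) u_k  -- A-stable, not L-stable."""
    A = sp.csc_matrix(sp.identity(n) + 0.5 * tau * L)
    B = sp.csc_matrix(sp.identity(n) - 0.5 * tau * L)
    solve = spla.factorized(A)
    for _ in range(steps):
        u = solve(B @ u)
    return u

# Reach the same final time T = 5 with one coarse step and with many fine
# steps. The L-stable scheme stays oscillation-free (non-negative) in both
# cases, while the non-L-stable scheme may show spurious oscillations for
# the coarse step.
for tau, steps in [(5.0, 1), (0.05, 100)]:
    ub = backward_euler(u0, tau, steps)
    uc = crank_nicolson(u0, tau, steps)
    print(f"tau={tau:5.2f}  BE min={ub.min():+.3e}  CN min={uc.min():+.3e}")
```

An L-stable scheme strongly damps the stiff modes excited by the impulse for
any step size, so coarsening the step changes the result only mildly; this is
the kind of invariant behaviour with respect to the step size parameter that
the abstract refers to.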
Related papers
- An Alternate View on Optimal Filtering in an RKHS [0.0]
Kernel Adaptive Filtering (KAF) algorithms are mathematically principled methods which search for a function in a Reproducing Kernel Hilbert Space (RKHS).
They are plagued by a linear relationship between number of training samples and model size, hampering their use on the very large data sets common in today's data saturated world.
We describe a novel view of optimal filtering which may provide a route towards solutions in an RKHS which do not necessarily have this linear growth in model size.
arXiv Detail & Related papers (2023-12-19T16:43:17Z) - Computing SHAP Efficiently Using Model Structure Information [3.6626323701161665]
We propose methods that compute SHAP exactly in time or even faster for SHAP definitions that satisfy our additivity and dummy assumptions.
For the first case, we demonstrate an additive property and a way to compute SHAP from the lower-order functional components.
For the second case, we derive formulas that can compute SHAP in time. Both methods yield exact SHAP results.
arXiv Detail & Related papers (2023-09-05T17:48:09Z) - PAGP: A physics-assisted Gaussian process framework with active learning
for forward and inverse problems of partial differential equations [12.826754199680474]
We introduce three different models: continuous time, discrete time and hybrid models.
The given physical information is integrated into the Gaussian process model through our designed GP loss functions.
In the last part, a novel hybrid model combining the continuous and discrete time models is presented.
arXiv Detail & Related papers (2022-04-06T05:08:01Z) - An application of the splitting-up method for the computation of a
neural network representation for the solution for the filtering equations [68.8204255655161]
Filtering equations play a central role in many real-life applications, including numerical weather prediction, finance and engineering.
One of the classical approaches to approximate the solution of the filtering equations is to use a PDE inspired method, called the splitting-up method.
We combine this method with a neural network representation to produce an approximation of the unnormalised conditional distribution of the signal process.
arXiv Detail & Related papers (2022-01-10T11:01:36Z) - Mean-Square Analysis with An Application to Optimal Dimension Dependence
of Langevin Monte Carlo [60.785586069299356]
This work provides a general framework for the non-asymptotic analysis of sampling error in the 2-Wasserstein distance.
Our theoretical analysis is further validated by numerical experiments.
arXiv Detail & Related papers (2021-09-08T18:00:05Z) - q-Paths: Generalizing the Geometric Annealing Path using Power Means [51.73925445218366]
We introduce $q$-paths, a family of paths which includes the geometric and arithmetic mixtures as special cases.
We show that small deviations away from the geometric path yield empirical gains for Bayesian inference.
arXiv Detail & Related papers (2021-07-01T21:09:06Z) - Bayesian Quadrature on Riemannian Data Manifolds [79.71142807798284]
Riemannian manifolds provide a principled way to model nonlinear geometric structure inherent in data.
However, these operations are typically computationally demanding.
In particular, we focus on Bayesian quadrature (BQ) to numerically compute integrals over normal laws.
We show that by leveraging both prior knowledge and an active exploration scheme, BQ significantly reduces the number of required evaluations.
arXiv Detail & Related papers (2021-02-12T17:38:04Z) - Identifying Latent Stochastic Differential Equations [29.103393300261587]
We present a method for learning latent stochastic differential equations (SDEs) from high-dimensional time series data.
The proposed method learns the mapping from ambient to latent space, and the underlying SDE coefficients, through a self-supervised learning approach.
We validate the method through several simulated video processing tasks, where the underlying SDE is known, and through real world datasets.
arXiv Detail & Related papers (2020-07-12T19:46:31Z) - Deep-learning of Parametric Partial Differential Equations from Sparse
and Noisy Data [2.4431531175170362]
In this work, a new framework, which combines a neural network, a genetic algorithm and adaptive methods, is put forward to address all of these challenges simultaneously.
A trained neural network is utilized to calculate derivatives and generate a large amount of meta-data, which solves the problem of sparse noisy data.
Next, a genetic algorithm is utilized to discover the form of PDEs and corresponding coefficients with an incomplete candidate library.
A two-step adaptive method is introduced to discover parametric PDEs with spatially- or temporally-varying coefficients.
arXiv Detail & Related papers (2020-05-16T09:09:57Z) - The data-driven physical-based equations discovery using evolutionary
approach [77.34726150561087]
We describe an algorithm for discovering mathematical equations from given observational data.
The algorithm combines genetic programming with sparse regression.
It could be used for governing analytical equation discovery as well as for partial differential equations (PDE) discovery.
arXiv Detail & Related papers (2020-04-03T17:21:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.