A functional approach for curve alignment and shape analysis
- URL: http://arxiv.org/abs/2503.05632v1
- Date: Fri, 07 Mar 2025 17:55:14 GMT
- Title: A functional approach for curve alignment and shape analysis
- Authors: Issam-Ali Moindjié, Cédric Beaulac, Marie-Hélène Descary
- Abstract summary: The shape $\tilde{\mathbf{X}}$ of a random planar curve $\mathbf{X}$ is what remains after removing deformation effects such as scaling, rotation, and parametrization. Previous studies have focused on analyzing $\tilde{\mathbf{X}}$ through discrete observations of the curve.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The shape $\tilde{\mathbf{X}}$ of a random planar curve $\mathbf{X}$ is what remains after removing deformation effects such as scaling, rotation, translation, and parametrization. Previous studies in statistical shape analysis have focused on analyzing $\tilde{\bf X}$ through discrete observations of the curve ${\bf X}$. While this approach has some computational advantages, it overlooks the continuous nature of both ${\bf X}$ and its shape $\tilde{\bf X}$. It also ignores potential dependencies among the deformation variables and their effect on $\tilde{ \bf X}$, which may result in information loss and reduced interpretability. In this paper, we introduce a novel framework for analyzing $\bf X$ in the context of Functional Data Analysis (FDA). Basis expansion techniques are employed to derive analytic solutions for estimating the deformation variables such as rotation and reparametrization, thereby achieving shape alignment. The generative model of $\bf X$ is then investigated using a joint-principal component analysis approach. Numerical experiments on simulated data and the \textit{MPEG-7} database demonstrate that our new approach successfully identifies the deformation parameters and captures the underlying distribution of planar curves in situations where traditional FDA methods fail to do so.
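As a flavour of what functional alignment involves, here is a minimal sketch of rotation and translation removal for two sampled planar curves, using the classical Procrustes closed form on the complex representation. The paper's estimators instead operate on basis-expansion coefficients and also handle scaling and reparametrization, which this sketch omits; all names and constants below are illustrative.

```python
import numpy as np

# Two planar curves as complex-valued functions sampled on a common grid.
# The closed form below is the classical Procrustes rotation estimate for
# centred curves, shown only to illustrate alignment in the continuous view.
t = np.linspace(0, 2 * np.pi, 200)
X1 = np.cos(t) + 1j * np.sin(2 * t)                 # a template curve
theta_true = 0.7
X2 = np.exp(1j * theta_true) * X1 + (0.5 + 0.2j)    # rotated, translated copy

# Remove translation by centring, then estimate the rotation angle:
# theta* = arg <X2, X1> maximizes Re(e^{-i theta} <X2, X1>) over theta.
X1c, X2c = X1 - X1.mean(), X2 - X2.mean()
theta_hat = np.angle(np.sum(X2c * np.conj(X1c)))
print(theta_hat)  # close to 0.7
```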
Related papers
- Joint Learning in the Gaussian Single Index Model [6.3151583550712065]
We consider the problem of jointly learning a one-dimensional projection and a univariate function in high-dimensional Gaussian models. Our analysis shows that convergence still occurs even when the initial direction is negatively correlated with the target. On the practical side, we demonstrate that such joint learning can be effectively implemented using a Reproducing Kernel Hilbert Space adapted to the structure of the problem.
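A hedged sketch of such joint learning on synthetic data: a moment-based direction estimate followed by an RKHS (kernel ridge) fit of the univariate link. All names and constants are illustrative, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic Gaussian single-index data: y = f*(<w*, x>) + noise, x ~ N(0, I).
d, n = 20, 400
w_star = np.zeros(d); w_star[0] = 1.0
X = rng.standard_normal((n, d))
y = np.tanh(X @ w_star) + 0.1 * rng.standard_normal(n)

# Step 1: direction estimate. For Gaussian inputs, Stein's lemma implies
# E[x y] is proportional to w* whenever f* has a nonzero first Hermite
# coefficient, so a simple moment average recovers the direction.
w_hat = X.T @ y / n
w_hat /= np.linalg.norm(w_hat)

# Step 2: fit the one-dimensional link by kernel ridge regression (an RKHS
# method) on the projections t_i = <w_hat, x_i>.
t = X @ w_hat
lam, h = 1e-3, 0.3
K = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * h**2))
alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
print("alignment:", abs(w_hat @ w_star))
```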
arXiv Detail & Related papers (2025-05-27T15:30:34Z) - Outsourced diffusion sampling: Efficient posterior inference in latent spaces of generative models [65.71506381302815]
We propose to amortize the cost of sampling from a posterior distribution of the form $p(\mathbf{x}\mid\mathbf{y}) \propto p_\theta(\mathbf{x})\,r(\mathbf{x},\mathbf{y})$. For many models and constraints of interest, the posterior in the noise space is smoother than the posterior in the data space, making it more amenable to such amortized inference.
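The noise-space reformulation can be written generically: if the generative model maps Gaussian noise to data via $\mathbf{x} = g_\theta(\mathbf{z})$, the posterior can be pulled back to $\mathbf{z}$. A schematic, not necessarily the paper's exact construction:

```latex
% Generic pullback of posterior inference to the noise space of a
% generative model; a sketch of the idea, not the paper's construction.
\begin{align*}
  \mathbf{x} &= g_\theta(\mathbf{z}), \qquad \mathbf{z} \sim \mathcal{N}(0, I) \\
  p(\mathbf{z} \mid \mathbf{y}) &\propto \mathcal{N}(\mathbf{z};\, 0, I)\;
    r\big(g_\theta(\mathbf{z}), \mathbf{y}\big)
\end{align*}
```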
arXiv Detail & Related papers (2025-02-10T19:49:54Z) - Probabilistic size-and-shape functional mixed models [6.424762079392286]
We show that it is possible to recover the size-and-shape of a square-integrable $\mu$ under a functional mixed model. A random object-level unitary transformation then captures size-and-shape \emph{preserving} deviations of $\mu$ from an individual function.
arXiv Detail & Related papers (2024-11-27T15:00:33Z) - Amortizing intractable inference in diffusion models for vision, language, and control [89.65631572949702]
This paper studies amortized sampling of the posterior over data, $\mathbf{x} \sim p^{\rm post}(\mathbf{x}) \propto p(\mathbf{x})\,r(\mathbf{x})$, in a model that consists of a diffusion generative model prior $p(\mathbf{x})$ and a black-box constraint or function $r(\mathbf{x})$. We prove the correctness of a data-free learning objective, relative trajectory balance, for training a diffusion model that samples from this posterior.
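A schematic of a trajectory-balance-style objective, shown only to convey the squared log-ratio form; the precise relative trajectory balance loss is defined in the paper, and $q_\phi$, $Z_\phi$ here are illustrative names.

```latex
% Schematic squared log-ratio objective in the trajectory-balance style:
% a sampler q_\phi over denoising trajectories \tau ending at x, with a
% learned normalizer Z_\phi, is matched to the unnormalized target p(x)r(x).
\mathcal{L}(\tau) \;=\; \Big( \log Z_\phi + \log q_\phi(\tau)
  \;-\; \log p(\tau) \;-\; \log r(\mathbf{x}) \Big)^2
```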
arXiv Detail & Related papers (2024-05-31T16:18:46Z) - Accurate and Scalable Stochastic Gaussian Process Regression via Learnable Coreset-based Variational Inference [8.077736581030264]
We introduce a novel inductive variational inference method for Gaussian process ($\mathcal{GP}$) regression, by deriving a posterior over a learnable set of coresets. Unlike former free-form variational families for inference, our coreset-based variational $\mathcal{GP}$ (CVGP) is defined in terms of the $\mathcal{GP}$ prior and the (weighted) data likelihood.
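CVGP itself is not reproduced here; as a rough cousin of the coreset idea, below is a generic sparse-GP predictive mean with a small set of inducing points. The SoR-style formula and all names are standard textbook material, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(A, B, ell=0.5):
    # Squared-exponential kernel between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ell**2))

# Toy 1-D regression data.
n, m, sigma2 = 200, 15, 0.05
X = rng.uniform(-3, 3, (n, 1))
y = np.sin(2 * X[:, 0]) + np.sqrt(sigma2) * rng.standard_normal(n)

# A small set of inducing inputs Z stands in for a learnable "coreset".
Z = np.linspace(-3, 3, m)[:, None]
Kzz = rbf(Z, Z) + 1e-8 * np.eye(m)
Kzx = rbf(Z, X)
Sigma = Kzz + Kzx @ Kzx.T / sigma2            # only m x m to invert
Xs = np.linspace(-3, 3, 100)[:, None]
mu_star = rbf(Xs, Z) @ np.linalg.solve(Sigma, Kzx @ y) / sigma2
```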
arXiv Detail & Related papers (2023-11-02T17:22:22Z) - Effective Minkowski Dimension of Deep Nonparametric Regression: Function Approximation and Statistical Theories [70.90012822736988]
Existing theories on deep nonparametric regression have shown that when the input data lie on a low-dimensional manifold, deep neural networks can adapt to intrinsic data structures.
This paper introduces a relaxed assumption that the input data are concentrated around a subset of $\mathbb{R}^d$ denoted by $\mathcal{S}$, and that the intrinsic dimension of $\mathcal{S}$ can be characterized by a new complexity notion -- the effective Minkowski dimension.
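For reference, the classical upper Minkowski (box-counting) dimension, of which the paper's effective variant is a data-dependent refinement:

```latex
% Classical upper Minkowski dimension of a bounded set S \subset \mathbb{R}^d,
% with N(S, \epsilon) its \epsilon-covering number; the paper's "effective"
% variant refines this notion.
\overline{\dim}_M(\mathcal{S}) \;=\; \limsup_{\epsilon \to 0}
  \frac{\log N(\mathcal{S}, \epsilon)}{\log(1/\epsilon)}
```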
arXiv Detail & Related papers (2023-06-26T17:13:31Z) - Statistical Learning under Heterogeneous Distribution Shift [71.8393170225794]
The ground-truth predictor is additive: $\mathbb{E}[\mathbf{z} \mid \mathbf{x}, \mathbf{y}] = f_\star(\mathbf{x}) + g_\star(\mathbf{y})$.
arXiv Detail & Related papers (2023-02-27T16:34:21Z) - Amplitude Mean of Functional Data on $\mathbb{S}^2$ [5.584060970507506]
Manifold-valued functional data analysis (FDA) has recently become an active area of research, motivated by the rising availability of trajectory and longitudinal data.
In this paper, we study the amplitude part of manifold-valued functions on $\mathbb{S}^2$, which is invariant to random time warping.
We develop a set of efficient and accurate tools for the temporal alignment of functions and for computing geodesics and sample means.
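The paper's alignment tools act on sphere-valued functions; as a crude illustration of temporal alignment only, here is a generic dynamic time warping sketch for scalar samples. It is not the paper's warping-invariant (amplitude) framework.

```python
import numpy as np

def dtw_align(f, g):
    """Dynamic-programming time-warping alignment of two sampled functions.

    Returns the alignment cost and the optimal warping path. A textbook
    discrete DTW, used only to illustrate temporal alignment.
    """
    n, m = len(f), len(g)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = (f[i - 1] - g[j - 1]) ** 2
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack the optimal warping path.
    path, i, j = [], n, m
    while i > 0 or j > 0:
        path.append((i - 1, j - 1))
        steps = [(i - 1, j - 1), (i - 1, j), (i, j - 1)]
        i, j = min((s for s in steps if s[0] >= 0 and s[1] >= 0),
                   key=lambda s: D[s])
    return D[n, m], path[::-1]

t = np.linspace(0, 1, 100)
cost, path = dtw_align(np.sin(2 * np.pi * t), np.sin(2 * np.pi * t**2))
```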
arXiv Detail & Related papers (2021-07-29T03:11:26Z) - Factor Analysis, Probabilistic Principal Component Analysis, Variational Inference, and Variational Autoencoder: Tutorial and Survey [5.967999555890417]
This is a tutorial and survey paper on factor analysis, probabilistic Principal Component Analysis (PCA), variational inference, and the Variational Autoencoder (VAE).
These models assume that every data point is generated from, or caused by, a low-dimensional latent factor.
Owing to their generative behaviour, these models can also be used to generate new data points in the data space.
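A minimal closed-form PPCA fit, the classical Tipping & Bishop (1999) maximum-likelihood solution that such tutorials cover; it assumes q < d and a non-degenerate spectrum, and all variable names are illustrative.

```python
import numpy as np

def ppca_fit(X, q):
    """Closed-form maximum-likelihood PPCA (Tipping & Bishop, 1999).

    Generative model: x = W z + mu + eps, z ~ N(0, I_q), eps ~ N(0, sigma2 I).
    A minimal sketch; no handling of q >= d or degenerate eigenvalues.
    """
    n, d = X.shape
    mu = X.mean(0)
    S = np.cov(X - mu, rowvar=False)
    evals, evecs = np.linalg.eigh(S)      # eigenvalues in ascending order
    evals, evecs = evals[::-1], evecs[:, ::-1]
    sigma2 = evals[q:].mean()             # ML estimate of the noise variance
    W = evecs[:, :q] * np.sqrt(evals[:q] - sigma2)
    return mu, W, sigma2

rng = np.random.default_rng(2)
Z = rng.standard_normal((500, 2))
X = Z @ rng.standard_normal((2, 10)) + 0.1 * rng.standard_normal((500, 10))
mu, W, sigma2 = ppca_fit(X, q=2)
```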
arXiv Detail & Related papers (2021-01-04T01:29:09Z) - A finite sample analysis of the benign overfitting phenomenon for ridge function estimation [0.0]
We propose a finite sample analysis of non-linear models of \textit{ridge} type.
We investigate the \textit{overparametrised} regime of the double descent phenomenon for both the \textit{estimation} problem and the \textit{prediction} problem.
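For reference, the standard definition of a ridge function, the model class named in the title:

```latex
% A ridge function on \mathbb{R}^d: a one-dimensional profile g composed
% with a linear projection, hence constant along directions orthogonal to a.
f(\mathbf{x}) \;=\; g(\langle \mathbf{a}, \mathbf{x} \rangle),
\qquad \mathbf{a} \in \mathbb{R}^d, \quad g : \mathbb{R} \to \mathbb{R}
```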
arXiv Detail & Related papers (2020-07-25T08:40:29Z) - Piecewise Linear Regression via a Difference of Convex Functions [50.89452535187813]
We present a new piecewise linear regression methodology that fits a difference of convex functions (DC functions) to the data.
We empirically validate the method, showing it to be practically implementable and to have performance comparable to existing regression and classification methods on real-world datasets.
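The underlying representation is a standard fact: every continuous piecewise-linear function is a difference of two convex piecewise-linear (max-affine) functions. Schematically:

```latex
% Difference-of-convex representation of a piecewise-linear function:
% each convex part is a maximum of finitely many affine pieces.
f(\mathbf{x}) \;=\; \max_{1 \le i \le k} \big( \mathbf{a}_i^\top \mathbf{x} + b_i \big)
\;-\; \max_{1 \le j \le l} \big( \mathbf{c}_j^\top \mathbf{x} + d_j \big)
```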
arXiv Detail & Related papers (2020-07-25T08:40:29Z) - Tight Nonparametric Convergence Rates for Stochastic Gradient Descent under the Noiseless Linear Model [0.0]
We analyze the convergence of single-pass, fixed step-size stochastic gradient descent on the least-squares risk under this model.
As a special case, we analyze an online algorithm for estimating a real function on the unit interval from noiseless observations of its value at randomly sampled points.
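A minimal sketch of the algorithm being analyzed: single-pass, fixed step-size SGD on the least-squares risk when observations are exactly linear. The toy dimensions, w_star, and gamma are illustrative choices, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(3)

# Noiseless linear model: y_i = <w*, x_i> exactly, observed once each.
d, n, gamma = 50, 5000, 0.02
w_star = rng.standard_normal(d) / np.sqrt(d)
w = np.zeros(d)
for _ in range(n):
    x = rng.standard_normal(d)
    y = x @ w_star                      # noiseless observation
    w -= gamma * (x @ w - y) * x        # SGD step on (1/2)(<w,x> - y)^2
print("squared error:", np.linalg.norm(w - w_star) ** 2)
```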
arXiv Detail & Related papers (2020-06-15T08:25:50Z) - A Random Matrix Analysis of Random Fourier Features: Beyond the Gaussian Kernel, a Precise Phase Transition, and the Corresponding Double Descent [85.77233010209368]
This article characterizes the exact asymptotics of random Fourier feature (RFF) regression, in the realistic setting where the number of data samples $n$, their dimension $p$, and the number of random features $N$ are all large and comparable.
This analysis also provides accurate estimates of the training and test regression errors for large $n, p, N$.
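A minimal sketch of RFF ridge regression in the classical Rahimi & Recht (2007) construction that such analyses concern; the dimensions n, p, N and the regularizer are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(4)

def rff(X, W, b):
    # Random Fourier features approximating the Gaussian (RBF) kernel:
    # z(x) = sqrt(2/N) * cos(W x + b), with Gaussian frequencies W.
    return np.sqrt(2.0 / W.shape[0]) * np.cos(X @ W.T + b)

# Toy regression with ridge on random features.
n, p, N, lam = 300, 10, 200, 1e-2
X = rng.standard_normal((n, p))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

W = rng.standard_normal((N, p))          # frequencies ~ N(0, I)
b = rng.uniform(0, 2 * np.pi, N)         # random phases
Z = rff(X, W, b)
beta = np.linalg.solve(Z.T @ Z + lam * n * np.eye(N), Z.T @ y)
y_hat = Z @ beta
```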
arXiv Detail & Related papers (2020-06-09T02:05:40Z) - Improved guarantees and a multiple-descent curve for Column Subset Selection and the Nyström method [76.73096213472897]
We develop techniques which exploit spectral properties of the data matrix to obtain improved approximation guarantees.
Our approach leads to significantly better bounds for datasets with known rates of singular value decay.
We show that both our improved bounds and the multiple-descent curve can be observed on real datasets simply by varying the RBF parameter.
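A minimal Nyström sketch with uniform column sampling, using the standard approximation formula; the paper's improved guarantees concern smarter subset selection and sharper spectral bounds, and all names and sizes here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Nystrom approximation of an RBF kernel matrix from a column subset S:
# K ~= K[:, S] @ pinv(K[S, S]) @ K[S, :].
n, m, gamma = 500, 40, 0.5
X = rng.standard_normal((n, 5))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-gamma * sq)

S = rng.choice(n, size=m, replace=False)   # uniform column subset
K_nm = K[:, S]
K_mm = K[np.ix_(S, S)]
K_hat = K_nm @ np.linalg.pinv(K_mm) @ K_nm.T

print("relative Frobenius error:",
      np.linalg.norm(K - K_hat) / np.linalg.norm(K))
```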
arXiv Detail & Related papers (2020-02-21T00:43:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.