Amplitude Mean of Functional Data on $\mathbb{S}^2$
- URL: http://arxiv.org/abs/2107.13721v2
- Date: Fri, 30 Jul 2021 18:11:05 GMT
- Title: Amplitude Mean of Functional Data on $\mathbb{S}^2$
- Authors: Zhengwu Zhang and Bayan Saparbayeva
- Abstract summary: Manifold-valued functional data analysis (FDA) has recently become an active area of research, motivated by the rising availability of trajectories or longitudinal data.
In this paper, we study the amplitude part of manifold-valued functions on $\mathbb{S}^2$, which is invariant to random time warping.
We develop a set of efficient and accurate tools for temporal alignment of functions and for geodesic and sample mean calculation.
- Score: 5.584060970507506
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Manifold-valued functional data analysis (FDA) has recently become an
active area of research, motivated by the rising availability of trajectories or
longitudinal data observed on non-linear manifolds. The challenges of analyzing
such data come from many aspects, including infinite dimensionality and
nonlinearity, as well as time-domain or phase variability. In this paper, we
study the amplitude part of manifold-valued functions on $\mathbb{S}^2$, which
is invariant to random time warping or re-parameterization of the function.
Utilizing the nice geometry of $\mathbb{S}^2$, we develop a set of efficient
and accurate tools for temporal alignment of functions and for geodesic and
sample mean calculation. At their core, these tools rely on gradient descent
algorithms with carefully derived gradients. We show the advantages of these
newly developed tools over their competitors with extensive simulations and real
data, and demonstrate the importance of considering the amplitude part of
functions instead of mixing it with phase variability in manifold-valued FDA.
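To make the central ingredient concrete, here is a minimal sketch of the intrinsic (Karcher/Fréchet) sample mean of points on $\mathbb{S}^2$, computed by gradient descent with the sphere's exponential and logarithm maps. This illustrates only the mean-computation building block described in the abstract; the paper's full method additionally aligns the functions over time warping, which this sketch omits, and all names and step sizes are illustrative.

```python
import numpy as np

def exp_map(p, v):
    """Exponential map on S^2: move from p along tangent vector v."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return p
    return np.cos(nv) * p + np.sin(nv) * v / nv

def log_map(p, q):
    """Logarithm map on S^2: tangent vector at p pointing toward q."""
    c = np.clip(np.dot(p, q), -1.0, 1.0)
    theta = np.arccos(c)
    if theta < 1e-12:
        return np.zeros(3)
    u = q - c * p                          # component of q orthogonal to p
    return theta * u / np.linalg.norm(u)

def karcher_mean(points, step=0.5, iters=100, tol=1e-10):
    """Gradient descent for the Karcher mean on S^2: the Riemannian
    gradient of the mean squared geodesic distance is -mean(log_p(q_i))."""
    p = points[0] / np.linalg.norm(points[0])
    for _ in range(iters):
        g = np.mean([log_map(p, q) for q in points], axis=0)
        if np.linalg.norm(g) < tol:
            break
        p = exp_map(p, step * g)
    return p

# Usage: mean of noisy points clustered around the north pole.
rng = np.random.default_rng(0)
pts = rng.normal([0, 0, 1], 0.1, size=(20, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
print(karcher_mean(pts))
```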
Related papers
- On Single Index Models beyond Gaussian Data [45.875461749455994]
Sparse high-dimensional functions have arisen as a rich framework to study the behavior of gradient-descent methods.
In this work, we explore extensions of this picture beyond the Gaussian setting, where both stability and symmetry might be violated.
Our main results establish that gradient descent can efficiently recover the unknown direction $\theta^*$ in the high-dimensional regime.
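As a toy illustration of this recovery claim (with Gaussian inputs and a fixed link, so it does not reproduce the paper's non-Gaussian setting), the sketch below runs projected gradient descent on the square loss of a single index model $y = \sigma(\langle \theta^*, x \rangle)$; all names and parameters are mine.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5000, 50
theta_star = rng.normal(size=d)
theta_star /= np.linalg.norm(theta_star)

# Single index model y = sigma(<theta*, x>) with link sigma = tanh.
X = rng.normal(size=(n, d))
y = np.tanh(X @ theta_star) + 0.05 * rng.normal(size=n)

theta = rng.normal(size=d)
theta /= np.linalg.norm(theta)
lr = 0.5
for _ in range(500):
    z = X @ theta
    resid = np.tanh(z) - y
    grad = (X.T @ (resid * (1 - np.tanh(z) ** 2))) / n  # chain rule for square loss
    theta -= lr * grad
    theta /= np.linalg.norm(theta)                      # project back to the sphere

print("alignment |<theta, theta*>| =", abs(theta @ theta_star))
```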
arXiv Detail & Related papers (2023-07-28T20:52:22Z) - VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z) - A Functional approach for Two Way Dimension Reduction in Time Series [13.767812547998735]
We propose a non-linear function-on-function approach, which consists of a functional encoder and a functional decoder.
Our approach yields a low-dimensional latent representation by reducing both the number of functional features and the number of timepoints at which the functions are observed.
arXiv Detail & Related papers (2023-01-01T06:09:15Z) - Shape And Structure Preserving Differential Privacy [70.08490462870144]
We show how the gradient of the squared distance function offers better control over sensitivity than the Laplace mechanism.
arXiv Detail & Related papers (2022-09-21T18:14:38Z) - Functional Nonlinear Learning [0.0]
We propose a functional nonlinear learning (FunNoL) method to represent multivariate functional data in a lower-dimensional feature space.
We show that FunNoL provides satisfactory curve classification and reconstruction regardless of data sparsity.
arXiv Detail & Related papers (2022-06-22T23:47:45Z) - Efficient Multidimensional Functional Data Analysis Using Marginal
Product Basis Systems [2.4554686192257424]
We propose a framework for learning continuous representations from a sample of multidimensional functional data.
We show that the resulting estimation problem can be solved efficiently via tensor decomposition.
We conclude with a real data application in neuroimaging.
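The generic computation this reduces to, a low-rank CP decomposition of a coefficient tensor, can be sketched with a few lines of alternating least squares. This is a minimal generic CP-ALS under my own assumptions, not the authors' marginal product basis estimator.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product of A (I x R) and B (J x R) -> (I*J x R)."""
    return (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])

def cp_als(X, rank, iters=50):
    """Rank-`rank` CP decomposition of a 3-way tensor by alternating least squares."""
    rng = np.random.default_rng(0)
    I, J, K = X.shape
    A, B, C = (rng.normal(size=(s, rank)) for s in (I, J, K))
    for _ in range(iters):
        # Each factor solves a linear least-squares problem against a mode unfolding.
        A = X.reshape(I, -1) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.moveaxis(X, 1, 0).reshape(J, -1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.moveaxis(X, 2, 0).reshape(K, -1) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Usage: recover a random rank-3 tensor.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.normal(size=(s, 3)) for s in (10, 12, 8))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(X, rank=3)
Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)
print("relative error:", np.linalg.norm(X - Xhat) / np.linalg.norm(X))
```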
arXiv Detail & Related papers (2021-07-30T16:02:15Z) - Multiscale regression on unknown manifolds [13.752772802705978]
We construct low-dimensional coordinates on $\mathcal{M}$ at multiple scales and perform multiscale regression by local fitting.
We analyze the generalization error of our method by proving finite sample bounds in high probability on rich classes of priors.
Our algorithm has quasilinear complexity in the sample size, with constants linear in $D$ and exponential in $d$.
arXiv Detail & Related papers (2021-01-13T15:14:31Z) - Piecewise Linear Regression via a Difference of Convex Functions [50.89452535187813]
We present a new piecewise linear regression methodology that fits a difference of convex functions (DC functions) to the data.
We empirically validate the method, showing it to be practically implementable, and to have comparable performance to existing regression/classification methods on real-world datasets.
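The representational fact underneath this is that any continuous piecewise-linear function can be written as the difference of two convex max-of-affine functions. A minimal sketch of that representation (the paper's fitting procedure is more involved and is not reproduced here):

```python
import numpy as np

def dc_piecewise_linear(x, A, b, C, d):
    """Evaluate f(x) = max_i(a_i . x + b_i) - max_j(c_j . x + d_j),
    a difference of two convex (max-of-affine) functions."""
    return np.max(x @ A.T + b, axis=-1) - np.max(x @ C.T + d, axis=-1)

# Usage: |x| - relu(x - 1) as a DC function on R^1.
A = np.array([[1.0], [-1.0]]); b = np.array([0.0, 0.0])   # |x| = max(x, -x)
C = np.array([[1.0], [0.0]]);  d = np.array([-1.0, 0.0])  # relu(x-1) = max(x-1, 0)
x = np.linspace(-2, 2, 5).reshape(-1, 1)
print(dc_piecewise_linear(x, A, b, C, d))
```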
arXiv Detail & Related papers (2020-07-05T18:58:47Z) - A Random Matrix Analysis of Random Fourier Features: Beyond the Gaussian
Kernel, a Precise Phase Transition, and the Corresponding Double Descent [85.77233010209368]
This article characterizes the exact asymptotics of random Fourier feature (RFF) regression in the realistic setting where the number of data samples $n$, their dimension $p$, and the feature-space dimension $N$ are all large and comparable.
This analysis also provides accurate estimates of training and test regression errors for large $n,p,N$.
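For reference, the standard RFF construction this analysis concerns: a Gaussian kernel is approximated by $N$ random cosine features, and regression is run in the feature space. A minimal sketch (ridge regression on RFFs; parameter names are mine):

```python
import numpy as np

def rff(X, N, sigma, rng):
    """Random Fourier features approximating the Gaussian kernel
    k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    d = X.shape[1]
    W = rng.normal(0.0, 1.0 / sigma, size=(d, N))
    b = rng.uniform(0.0, 2 * np.pi, size=N)
    return np.sqrt(2.0 / N) * np.cos(X @ W + b)

# Usage: ridge regression in the random feature space.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
Phi = rff(X, N=300, sigma=1.0, rng=rng)
lam = 1e-3
beta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)
print("train MSE:", np.mean((Phi @ beta - y) ** 2))
```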
arXiv Detail & Related papers (2020-06-09T02:05:40Z) - SLEIPNIR: Deterministic and Provably Accurate Feature Expansion for
Gaussian Process Regression with Derivatives [86.01677297601624]
We propose a novel approach for scaling GP regression with derivatives based on quadrature Fourier features.
We prove deterministic, non-asymptotic and exponentially fast decaying error bounds which apply for both the approximated kernel as well as the approximated posterior.
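To illustrate what "quadrature Fourier features" means in the simplest case, here is a deterministic 1-D feature map for the Gaussian kernel built from Gauss-Hermite quadrature nodes rather than random frequencies. This is my own minimal sketch and omits the derivative features and scaling machinery of the paper.

```python
import numpy as np

def quadrature_fourier_features(x, M, sigma):
    """Deterministic 1-D Fourier features for the Gaussian kernel
    k(x, y) = exp(-(x - y)^2 / (2 sigma^2)) via Gauss-Hermite quadrature."""
    t, w = np.polynomial.hermite.hermgauss(M)   # nodes/weights for weight e^{-t^2}
    freqs = np.sqrt(2.0) * t / sigma            # change of variables to N(0, 1/sigma^2)
    scale = np.sqrt(w / np.sqrt(np.pi))
    Z = np.outer(x, freqs)
    return np.hstack([scale * np.cos(Z), scale * np.sin(Z)])

# Usage: feature inner products closely match the exact kernel.
x = np.linspace(-2, 2, 7)
Phi = quadrature_fourier_features(x, M=20, sigma=1.0)
K_true = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)
print("max abs error:", np.max(np.abs(Phi @ Phi.T - K_true)))
```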
arXiv Detail & Related papers (2020-03-05T14:33:20Z) - Improved guarantees and a multiple-descent curve for Column Subset
Selection and the Nyström method [76.73096213472897]
We develop techniques which exploit spectral properties of the data matrix to obtain improved approximation guarantees.
Our approach leads to significantly better bounds for datasets with known rates of singular value decay.
We show that both our improved bounds and the multiple-descent curve can be observed on real datasets simply by varying the RBF parameter.
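As a pointer to the object being bounded, here is the basic Nyström approximation of an RBF kernel matrix from a sampled column subset; uniform sampling and the parameter choices here are illustrative, not the paper's improved scheme.

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """RBF kernel matrix k(x, y) = exp(-gamma ||x - y||^2)."""
    sq = (X ** 2).sum(1)[:, None] + (Y ** 2).sum(1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def nystrom(X, m, gamma, rng):
    """Rank-m Nystrom approximation K ~= C W^+ C^T from m sampled columns."""
    idx = rng.choice(len(X), size=m, replace=False)
    C = rbf_kernel(X, X[idx], gamma)   # n x m block of sampled columns
    W = C[idx]                          # m x m intersection block
    return C @ np.linalg.pinv(W) @ C.T

# Usage: approximation error shrinks as m grows (and varies with gamma).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
K = rbf_kernel(X, X, gamma=0.5)
for m in (10, 50, 150):
    Khat = nystrom(X, m, gamma=0.5, rng=rng)
    print(m, np.linalg.norm(K - Khat) / np.linalg.norm(K))
```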
arXiv Detail & Related papers (2020-02-21T00:43:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.