Embedding Functional Data: Multidimensional Scaling and Manifold
Learning
- URL: http://arxiv.org/abs/2208.14540v1
- Date: Tue, 30 Aug 2022 21:12:31 GMT
- Title: Embedding Functional Data: Multidimensional Scaling and Manifold
Learning
- Authors: Ery Arias-Castro, Wanli Qiao
- Abstract summary: We focus on classical scaling and Isomap -- prototypical methods that have played important roles in these areas.
In the process, we highlight the crucial role that the ambient metric plays.
- Score: 6.726255259929498
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We adapt concepts, methodology, and theory originally developed in the areas
of multidimensional scaling and dimensionality reduction for multivariate data
to the functional setting. We focus on classical scaling and Isomap --
prototypical methods that have played important roles in these areas -- and
showcase their use in the context of functional data analysis. In the process,
we highlight the crucial role that the ambient metric plays.
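The abstract's core idea, classical scaling applied to functional data compared in an ambient L2 metric, can be sketched numerically. The following is a minimal illustration of our own (the setup, names, and synthetic curves are assumptions, not taken from the paper): curves discretized on a common grid, pairwise L2 distances approximated by a Riemann sum, and the classical-scaling eigendecomposition of the double-centered distance matrix.

```python
import numpy as np

# Hypothetical sketch: classical scaling (classical MDS) on functional data.
# Curves are discretized on a shared grid and compared in the ambient L2
# metric; this setup is illustrative, not the paper's actual experiments.

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 200)             # evaluation grid on [0, 1]
theta = rng.uniform(0.0, 2.0 * np.pi, 50)     # latent 1-d parameter
# Curves tracing a circle in L2: f_theta(t) = sin(2*pi*t + theta)
curves = np.sin(2.0 * np.pi * grid[None, :] + theta[:, None])

# Pairwise squared L2 distances, approximated by a Riemann sum on the grid
dt = grid[1] - grid[0]
diff = curves[:, None, :] - curves[None, :, :]
D2 = (diff ** 2).sum(axis=-1) * dt

# Classical scaling: double-center, B = -0.5 * J D2 J, then eigendecompose
n = D2.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ D2 @ J
eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1]             # sort descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# d-dimensional Euclidean embedding from the top-d eigenpairs
d = 2
embedding = eigvecs[:, :d] * np.sqrt(np.maximum(eigvals[:d], 0.0))
print(embedding.shape)
```

Because these curves span a two-dimensional subspace of L2, the centered matrix B has rank two, and the two-dimensional embedding recovers the circular structure of the latent parameter; Isomap would replace the ambient L2 distances with graph-based geodesic estimates before the same scaling step.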
Related papers
- Prospector Heads: Generalized Feature Attribution for Large Models & Data [82.02696069543454]
We introduce prospector heads, an efficient and interpretable alternative to explanation-based attribution methods.
We demonstrate how prospector heads enable improved interpretation and discovery of class-specific patterns in input data.
arXiv Detail & Related papers (2024-02-18T23:01:28Z)
- On the estimation of the number of components in multivariate functional principal component analysis [0.0]
We present extensive simulations to investigate choosing the number of principal components to retain.
We show empirically that the conventional approach of using a percentage of variance explained threshold for each univariate functional feature may be unreliable.
arXiv Detail & Related papers (2023-11-08T09:05:42Z)
- On the use of the Gram matrix for multivariate functional principal components analysis [0.0]
Dimension reduction is crucial in functional data analysis (FDA).
Existing approaches for functional principal component analysis usually involve the diagonalization of the covariance operator.
We propose to use the inner-product between the curves to estimate the eigenelements of multivariate and multidimensional functional datasets.
arXiv Detail & Related papers (2023-06-22T15:09:41Z)
- Towards a mathematical understanding of learning from few examples with nonlinear feature maps [68.8204255655161]
We consider the problem of data classification where the training set consists of just a few data points.
We reveal key relationships between the geometry of an AI model's feature space, the structure of the underlying data distributions, and the model's generalisation capabilities.
arXiv Detail & Related papers (2022-11-07T14:52:58Z)
- Dynamic Latent Separation for Deep Learning [67.62190501599176]
A core problem in machine learning is to learn expressive latent variables for model prediction on complex data.
Here, we develop an approach that improves expressiveness, provides partial interpretation, and is not restricted to specific applications.
arXiv Detail & Related papers (2022-10-07T17:56:53Z)
- Learning from few examples with nonlinear feature maps [68.8204255655161]
We explore the phenomenon and reveal key relationships between dimensionality of AI model's feature space, non-degeneracy of data distributions, and the model's generalisation capabilities.
The main thrust of our present analysis is on the influence of nonlinear feature transformations mapping original data into higher- and possibly infinite-dimensional spaces on the resulting model's generalisation capabilities.
arXiv Detail & Related papers (2022-03-31T10:36:50Z)
- Self-Attention Neural Bag-of-Features [103.70855797025689]
We build on the recently introduced 2D-Attention and reformulate the attention learning methodology.
We propose a joint feature-temporal attention mechanism that learns a joint 2D attention mask highlighting relevant information.
arXiv Detail & Related papers (2022-01-26T17:54:14Z)
- A geometric perspective on functional outlier detection [0.0]
We develop a conceptualization of functional outlier detection that is more widely applicable and realistic than previously proposed.
We show that simple manifold learning methods can be used to reliably infer and visualize the geometric structure of functional data sets.
Our experiments on synthetic and real data sets demonstrate that this approach leads to outlier detection performances at least on par with existing functional data-specific methods.
arXiv Detail & Related papers (2021-09-14T17:42:57Z)
- Statistical Depth Meets Machine Learning: Kernel Mean Embeddings and Depth in Functional Data Analysis [0.0]
This article highlights how the common $h$-depth and related statistical depths for functional data can be viewed as a kernel mean embedding.
arXiv Detail & Related papers (2021-05-26T18:22:33Z)
- Transforming Feature Space to Interpret Machine Learning Models [91.62936410696409]
This contribution proposes a novel approach that interprets machine-learning models through the lens of feature space transformations.
It can be used to enhance unconditional as well as conditional post-hoc diagnostic tools.
A case study on remote-sensing landcover classification with 46 features is used to demonstrate the potential of the proposed approach.
arXiv Detail & Related papers (2021-04-09T10:48:11Z)
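One related entry above proposes estimating the eigenelements of functional datasets from the inner products between curves (the Gram matrix) rather than by diagonalizing the covariance operator. A minimal sketch of that general idea, with all names and the synthetic data being our own assumptions rather than details from the cited paper:

```python
import numpy as np

# Hypothetical sketch of Gram-matrix functional PCA: eigendecompose the
# n x n matrix of L2 inner products between centered curves, then recover
# eigenfunctions from the eigenvectors. Illustrative setup only.

rng = np.random.default_rng(1)
grid = np.linspace(0.0, 1.0, 300)
dt = grid[1] - grid[0]
n = 40
# Two-component model: random scores on two fixed basis functions + noise
scores = rng.normal(size=(n, 2)) * np.array([2.0, 0.5])
basis = np.vstack([np.sin(2.0 * np.pi * grid), np.cos(4.0 * np.pi * grid)])
curves = scores @ basis + 0.01 * rng.normal(size=(n, grid.size))

centered = curves - curves.mean(axis=0)
# Gram matrix of L2 inner products, approximated by a Riemann sum
G = centered @ centered.T * dt
vals, vecs = np.linalg.eigh(G)
order = np.argsort(vals)[::-1]                # sort descending
vals, vecs = vals[order], vecs[:, order]

# Eigenfunctions from eigenvectors: phi_k = X^T u_k / sqrt(lambda_k),
# which gives unit L2 norm under the same Riemann-sum inner product
k = 2
phis = (centered.T @ vecs[:, :k]) / np.sqrt(vals[:k])
print(vals[:3])
```

The advantage of working with the n x n Gram matrix is that its size is set by the number of curves rather than the grid resolution, which is convenient when curves are densely observed; the top eigenvalues here coincide with those of the empirical covariance operator.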
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented here and is not responsible for any consequences arising from its use.