A manifold learning approach for gesture identification from
micro-Doppler radar measurements
- URL: http://arxiv.org/abs/2110.01670v1
- Date: Mon, 4 Oct 2021 19:08:44 GMT
- Title: A manifold learning approach for gesture identification from
micro-Doppler radar measurements
- Authors: Eric Mason, Hrushikesh Mhaskar, Adam Guo
- Abstract summary: We present a kernel-based approximation for manifold learning that requires no knowledge of the manifold except its dimension.
We demonstrate the performance of our approach using a publicly available micro-Doppler data set.
- Score: 1.4610038284393163
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A recent paper (Neural Networks, {\bf 132} (2020), 253-268) introduces a
simple kernel-based approximation for manifold learning that requires no
knowledge of the manifold except its dimension. In this paper, we examine the
pointwise error of the least-squares approximation based on this kernel, in
particular, how the error depends on the characteristics of the data and
deteriorates as one moves away from the training data. The theory is presented
for an abstract localized kernel, which can incorporate any prior knowledge
that the data lie on an unknown sub-manifold of a known manifold.
We demonstrate the performance of our approach on a publicly available
micro-Doppler data set, investigating the effect of different pre-processing
steps, kernels, and manifold dimensions. Specifically, we show that the
Gaussian kernel introduced in the above-mentioned paper achieves performance
nearly competitive with deep neural networks, while offering significant
improvements in speed and memory requirements. Similarly, a kernel that treats
the feature space as a submanifold of the Grassmann manifold outperforms
conventional hand-crafted features. To demonstrate that our methods are
agnostic to domain knowledge, we also examine the classification problem on a
simple video data set.
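
To make the kernel-based least-squares approach concrete, the following is a
minimal sketch of a Gaussian-kernel least-squares classifier of the kind the
abstract describes. It is an illustration only: the plain RBF kernel and the
small ridge term stand in for the localized kernel construction of the cited
paper, and the variable names (X_train, y_train, gamma) are hypothetical.

```python
import numpy as np

def gaussian_kernel(X, Y, gamma=1.0):
    """RBF kernel matrix K[i, j] = exp(-gamma * ||x_i - y_j||^2)."""
    sq = (X**2).sum(1)[:, None] - 2.0 * X @ Y.T + (Y**2).sum(1)[None, :]
    return np.exp(-gamma * np.maximum(sq, 0.0))

def fit_kernel_ls(X_train, Y_onehot, gamma=1.0, reg=1e-6):
    """Least-squares coefficients for f(x) = sum_i a_i K(x, x_i).
    The ridge term is only for numerical stability of this sketch."""
    K = gaussian_kernel(X_train, X_train, gamma)
    return np.linalg.solve(K + reg * np.eye(len(K)), Y_onehot)

def predict(X_test, X_train, coef, gamma=1.0):
    """Class scores; argmax over columns gives the predicted label."""
    return gaussian_kernel(X_test, X_train, gamma) @ coef

# Hypothetical usage on pre-processed micro-Doppler feature vectors:
# coef = fit_kernel_ls(X_train, np.eye(n_classes)[y_train], gamma=0.5)
# y_pred = predict(X_test, X_train, coef, gamma=0.5).argmax(axis=1)
```

Training reduces to a single linear solve over the training set, consistent
with the speed and memory advantages over deep networks reported above.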
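
For the Grassmann-manifold kernel mentioned in the abstract, one standard
choice (assumed here; the paper's exact construction may differ) is the
projection kernel on subspaces, with each sample summarized by an orthonormal
basis of a k-dimensional subspace, e.g. the top-k left singular vectors of its
spectrogram.

```python
import numpy as np

def subspace_basis(A, k):
    """Orthonormal basis of the dominant k-dimensional column subspace of A."""
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return U[:, :k]

def projection_kernel(U, V):
    """Projection kernel on the Grassmann manifold: k(U, V) = ||U^T V||_F^2,
    where U and V have orthonormal columns."""
    return np.linalg.norm(U.T @ V, "fro") ** 2

# Hypothetical usage: summarize each micro-Doppler spectrogram S by a subspace
# and feed the resulting Gram matrix to the least-squares solver above.
# bases = [subspace_basis(S, k=5) for S in spectrograms]
# G = np.array([[projection_kernel(U, V) for V in bases] for U in bases])
```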
Related papers
- Efficient Prior Calibration From Indirect Data [5.588334720483076]
This paper is concerned with learning the prior model from data, in particular from multiple realizations of indirect data obtained through a noisy observation process.
An efficient residual-based neural operator approximation of the forward model is proposed and it is shown that this may be learned concurrently with the pushforward map.
arXiv Detail & Related papers (2024-05-28T08:34:41Z) - Learning on manifolds without manifold learning [0.0]
Function approximation based on data drawn randomly from an unknown distribution is an important problem in machine learning.
In this paper, we view the unknown manifold as a submanifold of an ambient hypersphere and study the question of constructing a one-shot approximation using specially designed kernels on the hypersphere.
arXiv Detail & Related papers (2024-02-20T03:27:53Z) - A Heat Diffusion Perspective on Geodesic Preserving Dimensionality
Reduction [66.21060114843202]
We propose a more general heat-kernel-based manifold embedding method that we call heat geodesic embeddings (a minimal sketch of the underlying heat-kernel idea appears after this list).
Results show that our method outperforms the existing state of the art in preserving ground-truth manifold distances.
We also showcase our method on single-cell RNA-sequencing datasets with both continuum and cluster structure.
arXiv Detail & Related papers (2023-05-30T13:58:50Z) - Joint Embedding Self-Supervised Learning in the Kernel Regime [21.80241600638596]
Self-supervised learning (SSL) produces useful representations of data without access to any class labels.
We extend this framework to incorporate algorithms based on kernel methods where embeddings are constructed by linear maps acting on the feature space of a kernel.
We analyze our kernel model on small datasets to identify common features of self-supervised learning algorithms and gain theoretical insights into their performance on downstream tasks.
arXiv Detail & Related papers (2022-09-29T15:53:19Z) - The Manifold Hypothesis for Gradient-Based Explanations [55.01671263121624]
Gradient-based explanation algorithms provide perceptually-aligned explanations.
We show that the more a feature attribution is aligned with the tangent space of the data, the more perceptually-aligned it tends to be.
We suggest that explanation algorithms should actively strive to align their explanations with the data manifold.
arXiv Detail & Related papers (2022-06-15T08:49:24Z) - On the Benefits of Large Learning Rates for Kernel Methods [110.03020563291788]
We show that the benefits of large learning rates can be precisely characterized in the context of kernel methods.
We consider the minimization of a quadratic objective in a separable Hilbert space and show that, with early stopping, the choice of learning rate influences the spectral decomposition of the obtained solution (a small worked example appears after this list).
arXiv Detail & Related papers (2022-02-28T13:01:04Z) - Random Features for the Neural Tangent Kernel [57.132634274795066]
We propose an efficient feature-map construction for the Neural Tangent Kernel (NTK) of a fully-connected ReLU network (a naive random-feature sketch appears after this list).
We show that the dimension of the resulting features is much smaller than that of other baseline feature-map constructions achieving comparable error bounds, both in theory and in practice.
arXiv Detail & Related papers (2021-04-03T09:08:12Z) - The Unreasonable Effectiveness of Patches in Deep Convolutional Kernels
Methods [0.0]
We show the importance of a data-dependent feature extraction step that is key to obtaining good performance in convolutional kernel methods.
We scale this method to the challenging ImageNet dataset, showing such a simple approach can exceed all existing non-learned representation methods.
arXiv Detail & Related papers (2021-01-19T09:30:58Z) - Learning Manifold Implicitly via Explicit Heat-Kernel Learning [63.354671267760516]
We propose the concept of implicit manifold learning, where manifold information is implicitly obtained by learning the associated heat kernel.
The learned heat kernel can be applied to various kernel-based machine learning models, including deep generative models (DGM) for data generation and Stein Variational Gradient Descent for Bayesian inference.
arXiv Detail & Related papers (2020-10-05T03:39:58Z) - Manifold Learning via Manifold Deflation [105.7418091051558]
Dimensionality reduction methods provide a valuable means to visualize and interpret high-dimensional data.
However, many popular methods can fail dramatically, even on simple two-dimensional manifolds.
This paper presents an embedding method based on a novel, incremental tangent-space estimator that incorporates global structure as coordinates.
Empirically, we show that our algorithm recovers novel and interesting embeddings on real-world and synthetic datasets.
arXiv Detail & Related papers (2020-07-07T10:04:28Z)
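
As referenced in the heat-diffusion entry above, the following is a minimal
sketch of a heat-kernel geodesic embedding: build a Gaussian affinity graph,
form the heat kernel e^{-tL} of its Laplacian, convert it to distances via
Varadhan's formula d(x, y)^2 ≈ -4t log h_t(x, y), and embed with classical
MDS. This illustrates the underlying idea only; the graph construction and
parameters are assumptions, not the cited paper's exact algorithm.

```python
import numpy as np
from scipy.linalg import expm
from scipy.spatial.distance import pdist, squareform

def heat_geodesic_embedding(X, t=1.0, sigma=1.0, dim=2):
    """Illustrative heat-kernel embedding (not the cited paper's method)."""
    W = np.exp(-squareform(pdist(X)) ** 2 / (2 * sigma**2))  # Gaussian affinities
    L = np.diag(W.sum(axis=1)) - W                           # graph Laplacian
    H = expm(-t * L)                                         # heat kernel e^{-tL}
    D2 = -4.0 * t * np.log(np.maximum(H, 1e-12))             # Varadhan's formula
    np.fill_diagonal(D2, 0.0)
    J = np.eye(len(X)) - 1.0 / len(X)                        # centering matrix
    B = -0.5 * J @ D2 @ J                                    # classical MDS
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
```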
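
The worked example referenced in the large-learning-rates entry: for gradient
descent on a quadratic started at zero, the iterate's component along an
eigenvector with eigenvalue lambda equals the minimizer's component scaled by
1 - (1 - eta * lambda)^k. With a fixed early-stopping budget k, the learning
rate eta therefore decides which spectral components have converged. The
matrix and spectrum below are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))   # random orthogonal eigenbasis
lam = np.array([10.0, 3.0, 1.0, 0.3, 0.1])         # chosen spectrum
A = Q @ np.diag(lam) @ Q.T
b = A @ rng.standard_normal(5)                     # minimizer of f is A^{-1} b

def gd(eta, k):
    """k steps of gradient descent on f(x) = 0.5 x^T A x - b^T x from x = 0."""
    x = np.zeros(5)
    for _ in range(k):
        x -= eta * (A @ x - b)
    return x

target = Q.T @ np.linalg.solve(A, b)               # minimizer in the eigenbasis
for eta in (0.01, 0.09):                           # both stable: eta < 2 / max(lam)
    frac = (Q.T @ gd(eta, k=50)) / target          # equals 1 - (1 - eta*lam)^50
    print(eta, np.round(frac, 3))                  # larger eta fits small-lam modes
```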
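
Finally, the naive NTK random-feature sketch referenced above: for a
one-hidden-layer ReLU network, each random weight w contributes the features
relu(w.x) and x * 1{w.x > 0}, whose inner products average to the two terms of
the NTK. This is the textbook construction with feature dimension m + m*d,
which is precisely what the cited paper's more compact construction improves
on; it is shown here only to fix ideas.

```python
import numpy as np

def ntk_random_features(X, m=2048, seed=0):
    """Random features whose inner products approximate the one-hidden-layer
    ReLU NTK: Theta(x, y) = E[relu(w.x) relu(w.y)]
                          + (x.y) E[1{w.x > 0} 1{w.y > 0}],  w ~ N(0, I)."""
    n, d = X.shape
    W = np.random.default_rng(seed).standard_normal((d, m))
    Z = X @ W                                      # (n, m) pre-activations
    act = np.maximum(Z, 0.0)                       # relu(w.x) -> first NTK term
    gate = (Z > 0).astype(X.dtype)                 # ReLU derivative 1{w.x > 0}
    grad = (gate[:, :, None] * X[:, None, :]).reshape(n, m * d)
    return np.hstack([act, grad]) / np.sqrt(m)

# Sanity check (hypothetical): the Gram matrix of these features converges to
# the analytic NTK as m grows.
# X = np.random.default_rng(1).standard_normal((8, 3))
# Phi = ntk_random_features(X)
# K_approx = Phi @ Phi.T
```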