A Functional approach for Two Way Dimension Reduction in Time Series
- URL: http://arxiv.org/abs/2301.00357v1
- Date: Sun, 1 Jan 2023 06:09:15 GMT
- Title: A Functional approach for Two Way Dimension Reduction in Time Series
- Authors: Aniruddha Rajendra Rao, Haiyan Wang, Chetan Gupta
- Abstract summary: We propose a non-linear function-on-function approach, which consists of a functional encoder and a functional decoder.
Our approach gives a low-dimensional latent representation by reducing the number of functional features as well as the timepoints at which the functions are observed.
- Score: 13.767812547998735
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rise in data has led to the need for dimension reduction techniques,
especially in the area of non-scalar variables, including time series, natural
language processing, and computer vision. In this paper, we specifically
investigate dimension reduction for time series through functional data
analysis. Current methods for dimension reduction in functional data, namely
functional principal component analysis and functional autoencoders, are
limited to linear mappings or to scalar representations of the time series,
which is inefficient. In real data applications, the nature of the data is much more
complex. We propose a non-linear function-on-function approach, consisting of
a functional encoder and a functional decoder, that uses continuous hidden
layers of continuous neurons to learn the structure inherent in functional
data, thereby addressing the aforementioned concerns with the existing
approaches. Our approach yields a low-dimensional latent representation by
reducing the number of functional features as well as the timepoints at which
the functions are observed. The effectiveness of the proposed model is
demonstrated through multiple simulations and real data examples.
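To make the encoder-decoder idea concrete, here is a minimal sketch of a non-linear function-on-function autoencoder in this spirit, assuming each sample consists of several curves observed on a shared time grid. The class names, layer sizes, activation, and Riemann-sum quadrature are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: each layer approximates the integral operator
#   h_k(s) = tanh( sum_j \int w_{kj}(s, t) x_j(t) dt + b_k(s) )
# by a quadrature-weighted sum over the observed grid.
import torch
import torch.nn as nn

class FunctionOnFunctionLayer(nn.Module):
    """Maps (batch, p_in, t_in) curves to (batch, p_out, t_out) curves."""
    def __init__(self, p_in, t_in, p_out, t_out, dt=1.0):
        super().__init__()
        # Discretised bivariate weight functions w_{kj}(s, t).
        self.weight = nn.Parameter(0.01 * torch.randn(p_out, p_in, t_out, t_in))
        self.bias = nn.Parameter(torch.zeros(p_out, t_out))
        self.dt = dt  # quadrature step for the Riemann-sum integral

    def forward(self, x):
        # x: (batch, p_in, t_in); einsum sums over input functions j and time t.
        out = torch.einsum('bjt,kjst->bks', x, self.weight) * self.dt
        return torch.tanh(out + self.bias)

class FunctionalAutoencoder(nn.Module):
    """Encoder shrinks both the number of curves and the time grid;
    the decoder mirrors it back to the original resolution."""
    def __init__(self, p=5, t=100, p_lat=2, t_lat=20):
        super().__init__()
        self.encoder = FunctionOnFunctionLayer(p, t, p_lat, t_lat)
        self.decoder = FunctionOnFunctionLayer(p_lat, t_lat, p, t)

    def forward(self, x):
        z = self.encoder(x)           # low-dimensional functional representation
        return self.decoder(z), z

x = torch.randn(8, 5, 100)            # 8 samples, 5 curves, 100 time points
model = FunctionalAutoencoder()
x_hat, z = model(x)
loss = ((x - x_hat) ** 2).mean()      # reconstruction loss for training
print(z.shape, x_hat.shape)           # torch.Size([8, 2, 20]) torch.Size([8, 5, 100])
```

The latent z is itself a small set of curves on a coarser grid, which is how the approach reduces both the number of functional features and the number of timepoints.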
Related papers
- Functional Autoencoder for Smoothing and Representation Learning [0.0]
We propose to learn nonlinear representations of functional data using neural network autoencoders designed to process data in the form in which it is usually collected, without the need for preprocessing.
We design the encoder to employ a projection layer that computes the weighted inner product of the functional data and functional weights over the observed timestamps, and the decoder to apply a recovery layer that maps the finite-dimensional vector extracted from the functional data back to functional space.
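A small numerical sketch of this projection/recovery pattern, assuming a single curve observed at known timestamps; the Fourier-type weight and recovery functions below are placeholders for the learned ones.

```python
# Projection layer: c_k = \int w_k(t) x(t) dt over the observed timestamps;
# recovery layer: x_hat(t) = sum_k c_k phi_k(t).
import numpy as np

ts = np.linspace(0, 1, 200)                      # observed timestamps
x = np.sin(2 * np.pi * ts) + 0.1 * np.random.randn(ts.size)

K = 4                                            # latent dimension
weights = np.stack([np.cos(np.pi * k * ts) for k in range(1, K + 1)])
recovery = np.stack([np.sin(np.pi * k * ts) for k in range(1, K + 1)])

# Weighted inner products via trapezoidal quadrature over the timestamps.
codes = np.array([np.trapz(w * x, ts) for w in weights])

# Recovery layer: map the code vector back to a function on the same grid.
x_hat = codes @ recovery
print(codes.shape, x_hat.shape)                  # (4,) (200,)
```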
arXiv Detail & Related papers (2024-01-17T08:33:25Z)
- Nonlinear functional regression by functional deep neural network with kernel embedding [20.306390874610635]
We propose a functional deep neural network with an efficient and fully data-dependent dimension reduction method.
The architecture of our functional net consists of a kernel embedding step, a projection step, and a deep ReLU neural network for the prediction.
The utilization of smooth kernel embedding enables our functional net to be discretization invariant, efficient, and robust to noisy observations.
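A sketch of this three-step pipeline under stated assumptions: a Gaussian kernel embedding of a noisy curve, a projection onto a few leading eigenvectors of the kernel matrix (one possible data-dependent choice), and an untrained ReLU network included only to show the shapes.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 120)
x = np.sin(4 * np.pi * t) + 0.2 * rng.standard_normal(t.size)  # noisy curve

# 1) Kernel embedding: (Kx)(s) = \int k(s, u) x(u) du with a Gaussian kernel,
#    which smooths the observations and decouples them from the grid.
K = np.exp(-(t[:, None] - t[None, :]) ** 2 / (2 * 0.05 ** 2))
emb = (K @ x) * (t[1] - t[0])

# 2) Projection step: coefficients on the leading eigenvectors of the
#    kernel matrix (an assumed, fully data-dependent choice of directions).
eigvals, eigvecs = np.linalg.eigh(K)
coef = eigvecs[:, -8:].T @ emb            # 8-dimensional representation

# 3) Deep ReLU network (random weights here, just to illustrate the shapes).
W1, W2 = rng.standard_normal((16, 8)), rng.standard_normal((1, 16))
pred = W2 @ np.maximum(W1 @ coef, 0.0)
print(coef.shape, pred.shape)             # (8,) (1,)
```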
arXiv Detail & Related papers (2024-01-05T16:43:39Z)
- Offline Reinforcement Learning with Differentiable Function Approximation is Provably Efficient [65.08966446962845]
Offline reinforcement learning, which aims at optimizing decision-making strategies with historical data, has been extensively applied in real-life applications.
We take a step forward by considering offline reinforcement learning with differentiable function class approximation (DFA).
Most importantly, we show that offline differentiable function approximation is provably efficient by analyzing the pessimistic fitted Q-learning algorithm.
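A minimal sketch of pessimistic fitted Q-iteration with a differentiable function class (a small MLP), assuming synthetic logged transitions and a constant pessimism penalty beta in place of the paper's data-dependent uncertainty quantifier.

```python
import torch
import torch.nn as nn

S, A, gamma, beta = 4, 2, 0.9, 0.1        # state dim, actions, discount, pessimism
q = nn.Sequential(nn.Linear(S + 1, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(q.parameters(), lr=1e-3)

# Synthetic offline transitions (s, a, r, s'); in practice these are logged data.
N = 256
s, a = torch.randn(N, S), torch.randint(0, A, (N, 1)).float()
r, s2 = torch.randn(N, 1), torch.randn(N, S)

for _ in range(50):                        # fitted Q-iteration sweeps
    with torch.no_grad():
        # Greedy backup over actions at s', minus the pessimism penalty beta.
        q_next = torch.cat([q(torch.cat([s2, torch.full((N, 1), float(b))], dim=1))
                            for b in range(A)], dim=1)
        target = r + gamma * (q_next.max(dim=1, keepdim=True).values - beta)
    loss = ((q(torch.cat([s, a], dim=1)) - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print(float(loss))
```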
arXiv Detail & Related papers (2022-10-03T07:59:42Z)
- Functional Nonlinear Learning [0.0]
We propose a functional nonlinear learning (FunNoL) method to represent multivariate functional data in a lower-dimensional feature space.
We show that FunNoL provides satisfactory curve classification and reconstruction regardless of data sparsity.
arXiv Detail & Related papers (2022-06-22T23:47:45Z)
- Stabilizing Q-learning with Linear Architectures for Provably Efficient Learning [53.17258888552998]
This work proposes an exploration variant of the basic $Q$-learning protocol with linear function approximation.
We show that the performance of the algorithm degrades very gracefully under a novel and more permissive notion of approximation error.
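As a toy illustration of linear function approximation in $Q$-learning (the paper's exploration scheme is not modelled here), with block one-hot features as an assumed feature map:

```python
import numpy as np

rng = np.random.default_rng(0)
d_s, n_actions, alpha, gamma = 4, 3, 0.05, 0.95
w = np.zeros(d_s * n_actions)              # Q(s, a) = phi(s, a) @ w

def phi(s, a):
    """Block one-hot features: the state vector placed in action a's slot."""
    f = np.zeros(d_s * n_actions)
    f[a * d_s:(a + 1) * d_s] = s
    return f

# One Q-learning update on a synthetic transition (s, a, r, s').
s, a, r, s2 = rng.standard_normal(d_s), 1, 0.5, rng.standard_normal(d_s)
q_next = max(phi(s2, b) @ w for b in range(n_actions))
td_error = r + gamma * q_next - phi(s, a) @ w
w += alpha * td_error * phi(s, a)          # semi-gradient TD update
print(td_error)
```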
arXiv Detail & Related papers (2022-06-01T23:26:51Z)
- Self-Attention Neural Bag-of-Features [103.70855797025689]
We build on the recently introduced 2D-Attention and reformulate the attention learning methodology.
We propose a joint feature-temporal attention mechanism that learns a joint 2D attention mask highlighting relevant information.
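A rough sketch of a joint feature-temporal mask under assumed shapes (batch, T, D): a single softmax over all time-feature cells yields one 2D mask. This illustrates the idea, not the paper's exact 2D-Attention formulation.

```python
import torch
import torch.nn as nn

class JointFeatureTemporalAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, dim)   # per-timestep scores for each feature

    def forward(self, x):                  # x: (batch, T, D)
        logits = self.score(x)
        # One softmax over all (t, d) cells gives a joint 2D attention mask.
        mask = torch.softmax(logits.flatten(1), dim=1).view_as(x)
        return x * mask                    # (batch, T, D), jointly re-weighted

x = torch.randn(2, 10, 8)
out = JointFeatureTemporalAttention(8)(x)
print(out.shape)                           # torch.Size([2, 10, 8])
```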
arXiv Detail & Related papers (2022-01-26T17:54:14Z)
- Multiway Non-rigid Point Cloud Registration via Learned Functional Map Synchronization [105.14877281665011]
We present SyNoRiM, a novel way to register multiple non-rigid shapes by synchronizing the maps relating learned functions defined on the point clouds.
We demonstrate via extensive experiments that our method achieves a state-of-the-art performance in registration accuracy.
arXiv Detail & Related papers (2021-11-25T02:37:59Z)
- Efficient Multidimensional Functional Data Analysis Using Marginal Product Basis Systems [2.4554686192257424]
We propose a framework for learning continuous representations from a sample of multidimensional functional data.
We show that the resulting estimation problem can be solved efficiently by tensor decomposition.
We conclude with a real data application in neuroimaging.
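For the two-dimensional case, a marginal product representation f(s, t) ≈ Σ_r σ_r a_r(s) b_r(t) can be estimated by a truncated SVD of the discretised function, a simple stand-in for the general tensor decomposition:

```python
import numpy as np

s = np.linspace(0, 1, 50)
t = np.linspace(0, 1, 60)
# A noisy discretised function f(s, t) on a 50 x 60 grid.
F = np.outer(np.sin(2 * np.pi * s), np.cos(2 * np.pi * t)) \
    + 0.05 * np.random.randn(50, 60)

# Truncated SVD: the rank-R factors play the role of marginal basis products.
U, sv, Vt = np.linalg.svd(F, full_matrices=False)
R = 2
F_hat = (U[:, :R] * sv[:R]) @ Vt[:R]
print(np.linalg.norm(F - F_hat) / np.linalg.norm(F))   # relative error
```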
arXiv Detail & Related papers (2021-07-30T16:02:15Z)
- Amplitude Mean of Functional Data on $\mathbb{S}^2$ [5.584060970507506]
Manifold-valued functional data analysis (FDA) has recently become an active area of research, motivated by the rising availability of trajectories or longitudinal data.
In this paper, we study the amplitude part of manifold-valued functions on $\mathbb{S}^2$, which is invariant to random time warping.
We develop a set of efficient and accurate tools for temporal alignment of functions, geodesic computation, and sample mean calculation.
arXiv Detail & Related papers (2021-07-29T03:11:26Z)
- UNIPoint: Universally Approximating Point Processes Intensities [125.08205865536577]
We provide a proof that a class of learnable functions can universally approximate any valid intensity function.
We implement UNIPoint, a novel neural point process model, using recurrent neural networks to parameterise sums of basis functions at each event.
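A rough sketch of this parameterisation, assuming exponential basis functions and small illustrative dimensions; after each event a GRU state supplies the basis coefficients for the intensity until the next event.

```python
import torch
import torch.nn as nn

K, H = 8, 16                                # number of basis functions, state size
rnn = nn.GRUCell(1, H)
head = nn.Linear(H, 2 * K)                  # per-basis scales and decay rates

def intensity(h, dt):
    a, b = head(h).chunk(2, dim=-1)         # (1, K) scales, (1, K) rates
    lam = nn.functional.softplus(a) * torch.exp(-nn.functional.softplus(b) * dt)
    return lam.sum(-1).clamp_min(1e-6)      # positive intensity at lag dt

h = torch.zeros(1, H)
for step in [0.5, 1.2, 0.3]:                # inter-event times
    dt = torch.tensor([[step]])
    print(float(intensity(h, dt)))          # intensity just before the event
    h = rnn(dt, h)                          # update the state with the event
```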
arXiv Detail & Related papers (2020-07-28T09:31:56Z)
- Invariant Feature Coding using Tensor Product Representation [75.62232699377877]
We prove that the group-invariant feature vector contains sufficient discriminative information when learning a linear classifier.
A novel feature model that explicitly considers group actions is proposed for principal component analysis and k-means clustering.
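The group-averaging intuition behind such invariant features can be sketched as follows, using the finite group of 90-degree rotations as an assumed example; the paper's tensor product construction is considerably richer.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))

def features(x):
    return x.flatten()                     # stand-in feature extractor

# Averaging features over the group orbit yields a rotation-invariant vector.
group = [np.rot90(img, k) for k in range(4)]
invariant = np.mean([features(g) for g in group], axis=0)

# Any rotated copy of the image produces the same invariant vector:
group2 = [np.rot90(np.rot90(img), k) for k in range(4)]
invariant2 = np.mean([features(g) for g in group2], axis=0)
print(np.allclose(invariant, invariant2))  # True
```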
arXiv Detail & Related papers (2019-06-05T07:15:17Z)