Preserving Seasonal and Trend Information: A Variational Autoencoder-Latent Space Arithmetic Based Approach for Non-stationary Learning
- URL: http://arxiv.org/abs/2504.18819v1
- Date: Sat, 26 Apr 2025 06:29:06 GMT
- Title: Preserving Seasonal and Trend Information: A Variational Autoencoder-Latent Space Arithmetic Based Approach for Non-stationary Learning
- Authors: Hassan Wasswa, Aziida Nanyonga, Timothy Lynar
- Abstract summary: This research proposes a method for enforcing stationary behaviour within the latent space while preserving trend and seasonal information. The approach's ability to preserve trend and seasonal information was evaluated on two time-series non-stationary datasets.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: AI models have attracted significant research attention for automating predictive tasks. However, most models assume a stationary training environment, and they fail on non-stationary data because the relationship they learn is stationary. Existing solutions make the data stationary prior to model training and evaluation, which discards the trend and seasonal patterns that are vital for learning the temporal dependencies of the system under study. This research addresses that limitation by proposing a method that enforces stationary behaviour within the latent space while preserving trend and seasonal information. The method combines differencing, time-series decomposition, and Latent Space Arithmetic (LSA) to learn efficient approximations of trend and seasonal information, which are then stored as embeddings within the latent space of a Variational Autoencoder (VAE). The approach's ability to preserve trend and seasonal information was evaluated on two non-stationary time-series datasets. For predictive performance, four deep learning models were trained on the latent vector representations produced by the proposed method; all of them achieved results competitive with state-of-the-art techniques under the RMSE metric.
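As a rough illustration of the pipeline the abstract describes, the sketch below wires differencing and seasonal decomposition into a small PyTorch VAE; the network shapes, the decomposition period, and the final recombination rule are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch (not the authors' code): differencing and decomposition
# produce a stationary signal plus trend/seasonal components; a VAE encodes
# them, and latent space arithmetic (LSA) recombines the embeddings.
import numpy as np
import torch
import torch.nn as nn
from statsmodels.tsa.seasonal import seasonal_decompose

class VAE(nn.Module):
    def __init__(self, dim_in, dim_z=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim_in, 32), nn.ReLU())
        self.mu = nn.Linear(32, dim_z)
        self.logvar = nn.Linear(32, dim_z)
        self.dec = nn.Sequential(nn.Linear(dim_z, 32), nn.ReLU(),
                                 nn.Linear(32, dim_in))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar

# Toy non-stationary series: linear trend plus seasonality.
t = np.linspace(0, 20, 400)
series = 0.1 * t + np.sin(2 * np.pi * t)
parts = seasonal_decompose(series, model="additive", period=20)
trend, seasonal = parts.trend, parts.seasonal   # NaN-padded at the edges
stationary = np.diff(series)                    # differencing removes trend

# After training a VAE on windows of each component, LSA would recombine the
# stored trend/seasonal embeddings with the stationary code, e.g. (assumed):
#   z_full = z_stationary + z_trend + z_seasonal
```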
Related papers
- An Efficient Continual Learning Framework for Multivariate Time Series Prediction Tasks with Application to Vehicle State Estimation [11.197937792525684]
We present EM-ReSeleCT, an enhanced approach to handle continual learning in time series tasks. Our approach strategically selects representative subsets from old and historical data. We also develop a sequence-to-sequence transformer model (autoregressive model) specifically designed for vehicle state estimation.
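The blurb does not specify the selection criterion; as one hedged illustration, a greedy k-center rule is a standard way to pick a representative replay subset from historical data.

```python
# Hedged illustration: greedy k-center selection of a representative replay
# subset from historical feature windows X (shape: n_samples x n_features).
import numpy as np

def k_center_subset(X, k):
    chosen = [0]                                   # seed with the first point
    dists = np.linalg.norm(X - X[0], axis=1)       # distance to the subset
    for _ in range(k - 1):
        nxt = int(dists.argmax())                  # farthest point so far
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(X - X[nxt], axis=1))
    return chosen
```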
arXiv Detail & Related papers (2025-03-03T15:42:06Z)
- Capturing the Temporal Dependence of Training Data Influence [100.91355498124527]
We formalize the concept of trajectory-specific leave-one-out influence, which quantifies the impact of removing a data point during training. We propose data value embedding, a novel technique enabling efficient approximation of trajectory-specific LOO. As data value embedding captures training data ordering, it offers valuable insights into model training dynamics.
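A hedged, brute-force sketch of the quantity involved: a first-order trajectory-specific influence estimate can be accumulated as learning-rate-weighted gradient inner products along saved checkpoints; the paper's data value embedding approximates this efficiently, while the version below is only for intuition.

```python
# Hedged sketch: accumulate lr * <grad(loss on z_i), grad(loss on z_test)>
# over checkpoints saved along the training trajectory.
import torch

def trajectory_influence(model, loss_fn, checkpoints, lrs, z_i, z_test):
    score = 0.0
    for state, lr in zip(checkpoints, lrs):
        model.load_state_dict(state)
        params = tuple(model.parameters())
        g_i = torch.autograd.grad(loss_fn(model, z_i), params)
        g_t = torch.autograd.grad(loss_fn(model, z_test), params)
        score += lr * sum((a * b).sum() for a, b in zip(g_i, g_t)).item()
    return score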
arXiv Detail & Related papers (2024-12-12T18:28:55Z)
- Topological Approach for Data Assimilation [0.4972323953932129]
We introduce a new data assimilation algorithm with a foundation in topological data analysis. By leveraging the differentiability of functions of persistence, gradient descent optimization is used to minimize topological differences between measurements and forecast predictions.
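As a hedged illustration of the topological ingredient (not the paper's differentiable pipeline), the mismatch between forecast and measurement point clouds can be scored with persistence diagrams, e.g. via gudhi:

```python
# Hedged sketch: topological mismatch between two point clouds. The paper
# goes further, using differentiable persistence so such a mismatch can be
# minimized by gradient descent.
import gudhi
import numpy as np

def h1_diagram(points):
    rips = gudhi.RipsComplex(points=points, max_edge_length=2.0)
    st = rips.create_simplex_tree(max_dimension=2)
    st.persistence()
    return st.persistence_intervals_in_dimension(1)  # loop-scale features

rng = np.random.default_rng(0)
forecast, measured = rng.random((100, 2)), rng.random((100, 2))
mismatch = gudhi.bottleneck_distance(h1_diagram(forecast), h1_diagram(measured))
```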
arXiv Detail & Related papers (2024-11-12T20:24:46Z)
- On conditional diffusion models for PDE simulations [53.01911265639582]
We study score-based diffusion models for forecasting and assimilation of sparse observations.
We propose an autoregressive sampling approach that significantly improves performance in forecasting.
We also propose a new training strategy for conditional score-based models that achieves stable performance over a range of history lengths.
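A minimal, hedged sketch of autoregressive sampling for forecasting, assuming a generic conditional sampler `sample` rather than the paper's actual interface:

```python
# Hedged sketch of autoregressive rollout: repeatedly sample the next state
# conditioned on a sliding window of recent states. `sample` is assumed to be
# a conditional (e.g. diffusion) sampler, not the paper's actual API.
def autoregressive_forecast(sample, history, n_steps, window=4):
    states = list(history)
    for _ in range(n_steps):
        states.append(sample(states[-window:]))  # condition on recent window
    return states[len(history):]
```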
arXiv Detail & Related papers (2024-10-21T18:31:04Z)
- Data Shapley in One Training Run [88.59484417202454]
Data Shapley provides a principled framework for attributing data's contribution within machine learning contexts.
Existing approaches require re-training models on different data subsets, which is computationally intensive.
This paper introduces In-Run Data Shapley, which addresses these limitations by offering scalable data attribution for a target model of interest.
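For contrast, the retraining-based Monte Carlo estimator that In-Run Data Shapley is designed to avoid looks roughly like this (hedged sketch; `train_and_score` is a hypothetical utility function that trains on a subset and returns a validation score):

```python
# Hedged sketch of the classic retraining-based Monte Carlo Shapley estimator.
import random

def mc_data_shapley(points, train_and_score, n_perms=50):
    values = [0.0] * len(points)
    for _ in range(n_perms):
        order = random.sample(range(len(points)), len(points))
        prev, subset = train_and_score([]), []    # utility of the empty set
        for i in order:
            subset.append(points[i])
            cur = train_and_score(subset)
            values[i] += (cur - prev) / n_perms   # marginal contribution
            prev = cur
    return values
```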
arXiv Detail & Related papers (2024-06-16T17:09:24Z)
- Distilled Datamodel with Reverse Gradient Matching [74.75248610868685]
We introduce an efficient framework for assessing data impact, comprising offline training and online evaluation stages.
Our proposed method matches the model-behavior evaluation of direct retraining while significantly speeding up the process.
arXiv Detail & Related papers (2024-04-22T09:16:14Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling this data heterogeneity issue.
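A hedged sketch of the surrounding FedAvg-style round; FedPTR's projected trajectory regularizer is not detailed in the blurb, so it appears only as a commented placeholder in the local objective.

```python
# Hedged sketch of one FedAvg-style round (floating-point weights assumed).
import copy
import torch
import torch.nn.functional as F

def fed_round(global_model, client_loaders, local_steps=5, lr=0.01):
    updates = []
    for loader in client_loaders:
        model = copy.deepcopy(global_model)
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        for _, (x, y) in zip(range(local_steps), loader):
            loss = F.cross_entropy(model(x), y)
            # loss += lam * trajectory_projection_penalty(model)  # hypothetical
            opt.zero_grad()
            loss.backward()
            opt.step()
        updates.append(model.state_dict())
    # average the client weights
    avg = {k: sum(u[k] for u in updates) / len(updates) for k in updates[0]}
    global_model.load_state_dict(avg)
    return global_model
```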
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset. We adopt a projected-gradient based learning method, named Projected-Gradient Unlearning (PGU). We provide empirical evidence demonstrating that our unlearning method produces models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
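A hedged sketch of the gradient-projection idea (not necessarily PGU's exact rule): strip from the forget-set gradient its component along the retained-data gradient, so that updates leave remaining knowledge untouched to first order.

```python
# Hedged sketch: project the forget-set gradient onto the subspace orthogonal
# to the retained-data gradient direction.
import torch

def project_out(g_forget, g_retain, eps=1e-12):
    f = torch.cat([g.flatten() for g in g_forget])
    r = torch.cat([g.flatten() for g in g_retain])
    coeff = f.dot(r) / (r.dot(r) + eps)
    return f - coeff * r            # orthogonal to the retain direction
```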
arXiv Detail & Related papers (2023-12-07T07:17:24Z)
- Survey of Federated Learning Models for Spatial-Temporal Mobility Applications [9.896508514316812]
Federated learning (FL) can serve as an ideal candidate for training spatial-temporal models.
There are unique challenges involved in transitioning existing spatial-temporal models to decentralized learning.
arXiv Detail & Related papers (2023-05-09T08:26:48Z)
- iSAGE: An Incremental Version of SAGE for Online Explanation on Data Streams [8.49072000414555]
iSAGE is a time- and memory-efficient incrementalization of SAGE.
We show that iSAGE adheres to similar theoretical properties as SAGE.
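A hedged sketch of how an incremental SAGE-style estimator might update per-feature importances on a data stream, using an exponential moving average of marginal contributions; the masking scheme and the `model`/`loss` interfaces are illustrative assumptions.

```python
# Hedged sketch: on each arriving sample, reveal features in random order and
# fold the marginal loss reductions into an exponential moving average.
import numpy as np

def isage_step(phi, model, loss, x, y, baseline, alpha=0.01):
    masked = baseline.copy()                      # all features masked
    prev = loss(model(masked[None])[0], y)
    for j in np.random.permutation(len(x)):
        masked[j] = x[j]                          # reveal feature j
        cur = loss(model(masked[None])[0], y)
        phi[j] = (1 - alpha) * phi[j] + alpha * (prev - cur)
        prev = cur
    return phi
```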
arXiv Detail & Related papers (2023-03-02T11:51:54Z)
- A Meta-learning Approach to Reservoir Computing: Time Series Prediction with Limited Data [0.0]
We present a data-driven approach to automatically extract an appropriate model structure from experimentally observed processes.
We demonstrate our approach on a simple benchmark problem, where it beats state-of-the-art meta-learning techniques.
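For context, a minimal echo state network of the kind such meta-learning approaches tune (hedged sketch; only the linear readout is trained, here by ridge regression):

```python
# Hedged sketch: minimal echo state network; reservoir weights stay fixed
# and only the linear readout W_out is fit.
import numpy as np

def train_esn(u, y, n_res=200, rho=0.9, ridge=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-0.5, 0.5, n_res)
    W = rng.normal(size=(n_res, n_res))
    W *= rho / np.abs(np.linalg.eigvals(W)).max()   # set spectral radius
    states, x = np.zeros((len(u), n_res)), np.zeros(n_res)
    for t, ut in enumerate(u):
        x = np.tanh(W_in * ut + W @ x)              # reservoir update
        states[t] = x
    W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res),
                            states.T @ y)           # ridge readout
    return W_in, W, W_out                           # predict: states @ W_out
```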
arXiv Detail & Related papers (2021-10-07T18:23:14Z)
- Learning Neural Models for Natural Language Processing in the Face of Distributional Shift [10.990447273771592]
The dominant NLP paradigm of training a strong neural predictor to perform one task on a specific dataset has led to state-of-the-art performance in a variety of applications.
It builds upon the assumption that the data distribution is stationary, i.e., that the data is sampled from a fixed distribution both at training and test time.
This way of training is inconsistent with how we as humans are able to learn from and operate within a constantly changing stream of information.
It is ill-adapted to real-world use cases where the data distribution is expected to shift over the course of a model's lifetime.
arXiv Detail & Related papers (2021-09-03T14:29:20Z)