Investigating a Model-Agnostic and Imputation-Free Approach for Irregularly-Sampled Multivariate Time-Series Modeling
- URL: http://arxiv.org/abs/2502.15785v2
- Date: Wed, 03 Sep 2025 03:26:43 GMT
- Title: Investigating a Model-Agnostic and Imputation-Free Approach for Irregularly-Sampled Multivariate Time-Series Modeling
- Authors: Abhilash Neog, Arka Daw, Sepideh Fatemi Khorasgani, Medha Sawhney, Aanish Pradhan, Mary E. Lofton, Bennett J. McAfee, Adrienne Breef-Pilz, Heather L. Wander, Dexter W Howard, Cayelan C. Carey, Paul Hanson, Anuj Karpatne,
- Abstract summary: Missing Feature-aware Time Series Modeling (MissTSM) is a novel model-agnostic and imputation-free approach for IMTS modeling. We show that MissTSM achieves competitive performance compared to other IMTS approaches.
- Score: 6.76884948948117
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modeling Irregularly-sampled and Multivariate Time Series (IMTS) is crucial across a variety of applications where different sets of variates may be missing at different time-steps due to sensor malfunctions or high data acquisition costs. Existing approaches for IMTS either consider a two-stage impute-then-model framework or involve specialized architectures specific to a particular model and task. We perform a series of experiments to derive novel insights about the performance of IMTS methods on a variety of semi-synthetic and real-world datasets for both classification and forecasting. We also introduce Missing Feature-aware Time Series Modeling (MissTSM), a novel model-agnostic and imputation-free approach for IMTS modeling. We show that MissTSM achieves competitive performance compared to other IMTS approaches, especially when the amount of missing values is large and the data lacks simplistic periodic structures - conditions common to real-world IMTS applications.
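The abstract does not detail MissTSM's architecture, but the general idea of imputation-free modeling - letting attention skip missing observations rather than filling them in first - can be sketched in a few lines of numpy. Everything below (the function name, shapes, and the choice of scaled dot-product attention) is an illustrative assumption, not the paper's actual method:

```python
import numpy as np

def imputation_free_attention(q, k, v, observed):
    """Scaled dot-product attention that ignores missing observations.

    q, k, v:  (T, d) query/key/value arrays built from the raw inputs
    observed: (T,) boolean mask; False marks time-steps whose values are
              missing and must not contribute to the attention average.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                          # (T, T) similarities
    scores = np.where(observed[None, :], scores, -np.inf)  # mask missing keys
    scores = scores - scores.max(axis=-1, keepdims=True)   # stable softmax
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ v  # each output is a weighted average of observed values
```

Because masked keys receive exactly zero weight after the softmax, the output is independent of whatever placeholder values sit in the missing slots, so no imputation step is ever needed.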
Related papers
- Multi-Scale Finetuning for Encoder-based Time Series Foundation Models [67.95907033226585]
Time series foundation models (TSFMs) demonstrate impressive zero-shot performance for time series forecasting. While naive finetuning can yield performance gains, we argue that it falls short of fully leveraging TSFMs' capabilities. We propose Multiscale finetuning (MSFT), a simple yet general framework that explicitly integrates multi-scale modeling into the finetuning process.
arXiv Detail & Related papers (2025-06-17T01:06:01Z) - IMTS is Worth Time $\times$ Channel Patches: Visual Masked Autoencoders for Irregular Multivariate Time Series Prediction [9.007111482874135]
We propose VIMTS, a framework adapting Visual MAE for IMTS forecasting. To mitigate the effect of missing values, VIMTS first processes IMTS along the timeline into feature patches at equal intervals. It then leverages visual MAE's capability in handling sparse multichannel data for patch reconstruction, followed by a coarse-to-fine technique to generate precise predictions.
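The "feature patches at equal intervals" step can be pictured as bucketing an irregular series into fixed-width time bins. The helper below is a simplified univariate sketch of that kind of preprocessing; the per-patch mean aggregation and all names are illustrative assumptions, not VIMTS's actual code:

```python
import numpy as np

def patchify_irregular(times, values, t_start, t_end, n_patches):
    """Bucket an irregularly-sampled univariate series into equal-width
    time patches, returning per-patch means plus an observation mask.

    times, values: 1-D arrays of observation timestamps and readings
    Returns (patch_means, patch_mask), each of length n_patches; mask is
    False for patches that received no observations at all.
    """
    edges = np.linspace(t_start, t_end, n_patches + 1)
    # map each timestamp to its bin index (clip keeps t_end in the last bin)
    idx = np.clip(np.searchsorted(edges, times, side="right") - 1, 0, n_patches - 1)
    means = np.zeros(n_patches)
    mask = np.zeros(n_patches, dtype=bool)
    for p in range(n_patches):
        hit = idx == p
        if hit.any():
            means[p] = values[hit].mean()
            mask[p] = True
    return means, mask
```

A downstream model can then treat empty patches (mask `False`) as tokens to reconstruct, which is the kind of sparse input a masked autoencoder is well suited to.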
arXiv Detail & Related papers (2025-05-28T19:44:03Z) - VISTA: Unsupervised 2D Temporal Dependency Representations for Time Series Anomaly Detection [42.694234312755285]
Time Series Anomaly Detection (TSAD) is essential for uncovering rare and potentially harmful events in unlabeled time series data.
We introduce VISTA, a training-free, unsupervised TSAD algorithm designed to overcome these challenges.
arXiv Detail & Related papers (2025-04-03T11:20:49Z) - Mixing It Up: Exploring Mixer Networks for Irregular Multivariate Time Series Forecasting [9.642976236410833]
We introduce IMTS-Mixer, a novel forecasting architecture designed specifically for IMTS. Our approach retains the core principles of TS mixer models while introducing innovative methods to transform IMTS into fixed-size matrix representations. Our results demonstrate that IMTS-Mixer establishes a new state-of-the-art in forecasting accuracy while also improving computational efficiency.
arXiv Detail & Related papers (2025-02-17T14:06:36Z) - General Time-series Model for Universal Knowledge Representation of Multivariate Time-Series data [61.163542597764796]
We show that time series with different time granularities (or corresponding frequency resolutions) exhibit distinct joint distributions in the frequency domain. A novel Fourier knowledge attention mechanism is proposed to enable learning time-aware representations from both the temporal and frequency domains. An autoregressive blank infilling pre-training framework is incorporated into time series analysis for the first time, leading to a task-agnostic generative pre-training strategy.
arXiv Detail & Related papers (2025-02-05T15:20:04Z) - BRATI: Bidirectional Recurrent Attention for Time-Series Imputation [0.14999444543328289]
Missing data in time-series analysis poses significant challenges, affecting the reliability of downstream applications. This paper introduces BRATI, a novel deep-learning model designed to address multivariate time-series imputation. BRATI processes temporal dependencies and feature correlations across long and short time horizons, utilizing two imputation blocks that operate in opposite temporal directions.
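BRATI's two opposite-direction blocks are learned attention/recurrence modules; as a deliberately tiny, non-learned analogue of the same bidirectional idea, one can average a forward fill and a backward fill. This is purely illustrative and not the paper's model:

```python
import numpy as np

def bidirectional_impute(values, observed):
    """Average a forward-fill pass and a backward-fill pass.

    values:   1-D float array (missing slots may hold anything)
    observed: 1-D boolean mask, True where a value was actually measured
    """
    def ffill(vals, obs):
        out = vals.astype(float)
        last = np.nan
        for i in range(len(vals)):
            if obs[i]:
                last = out[i]
            else:
                out[i] = last  # carry the most recent observation forward
        return out

    fwd = ffill(values, observed)
    bwd = ffill(values[::-1], observed[::-1])[::-1]
    # nanmean lets a one-sided estimate survive at the sequence edges
    return np.nanmean(np.stack([fwd, bwd]), axis=0)
```

The point of the two directions is visible even here: an interior gap gets information from both its past and its future neighbor instead of only one side.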
arXiv Detail & Related papers (2025-01-09T17:50:56Z) - DiffImp: Efficient Diffusion Model for Probabilistic Time Series Imputation with Bidirectional Mamba Backbone [6.428451261614519]
Current DDPM-based probabilistic time series imputation methodologies are confronted with two types of challenges.
We integrate the computationally efficient state space model Mamba as the backbone denoising module for DDPMs.
Our approach achieves state-of-the-art time series imputation results across multiple datasets, missing scenarios, and missing ratios.
arXiv Detail & Related papers (2024-10-17T08:48:52Z) - LTSM-Bundle: A Toolbox and Benchmark on Large Language Models for Time Series Forecasting [69.33802286580786]
We introduce LTSM-Bundle, a comprehensive toolbox and benchmark for training LTSMs. It modularizes and benchmarks LTSMs along multiple dimensions, encompassing prompting strategies, tokenization approaches, base model selection, data quantity, and dataset diversity. Empirical results demonstrate that this combination achieves superior zero-shot and few-shot performance compared to state-of-the-art LTSMs and traditional TSF methods.
arXiv Detail & Related papers (2024-06-20T07:09:19Z) - UniTST: Effectively Modeling Inter-Series and Intra-Series Dependencies for Multivariate Time Series Forecasting [98.12558945781693]
We propose a transformer-based model UniTST containing a unified attention mechanism on the flattened patch tokens.
Although our proposed model employs a simple architecture, it offers compelling performance as shown in our experiments on several datasets for time series forecasting.
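The "flattened patch tokens" idea - putting every (variate, patch) pair into one token sequence so a single attention layer can mix inter- and intra-series dependencies - can be sketched as a reshape. Sizes and names here are illustrative assumptions, not UniTST's implementation:

```python
import numpy as np

def flatten_patch_tokens(x, patch_len):
    """Split a (T, C) multivariate series into per-variate patches and
    flatten them into one token sequence of shape (C * T//patch_len, patch_len).
    A single attention layer over this sequence sees both inter-series and
    intra-series token pairs, with no separate channel-mixing stage needed.
    """
    T, C = x.shape
    n = T // patch_len  # number of patches per variate (remainder dropped)
    # (C, n, patch_len): per-variate patching, then merge the variate and
    # patch axes into one flat token axis
    patches = x[: n * patch_len].T.reshape(C, n, patch_len)
    return patches.reshape(C * n, patch_len)
```

With 7 variates and 96 time-steps at `patch_len=16`, this yields a 42-token sequence, so attention cost grows with variates times patches rather than with raw sequence length.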
arXiv Detail & Related papers (2024-06-07T14:39:28Z) - Scalable Numerical Embeddings for Multivariate Time Series: Enhancing Healthcare Data Representation Learning [6.635084843592727]
We propose SCAlable Numerical Embedding (SCANE), a novel framework that treats each feature value as an independent token.
SCANE regularizes the traits of distinct feature embeddings and enhances representational learning through a scalable embedding mechanism.
We develop the nUMerical eMbeddIng Transformer (SUMMIT), which is engineered to deliver precise predictive outputs for MTS characterized by prevalent missing entries.
arXiv Detail & Related papers (2024-05-26T13:06:45Z) - Time-SSM: Simplifying and Unifying State Space Models for Time Series Forecasting [22.84798547604491]
State Space Models (SSMs) approximate continuous systems using a set of basis functions and discretize them to handle input data.
This paper proposes a novel theoretical framework termed Dynamic Spectral Operator, offering more intuitive and general guidance on applying SSMs to time series data.
We introduce Time-SSM, a novel SSM-based foundation model with only one-seventh of the parameters compared to Mamba.
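Time-SSM's spectral-operator framing is beyond a short snippet, but the basic SSM recipe it builds on - discretize a continuous system x' = a*x + b*u, then scan the resulting recurrence - is standard for S4/Mamba-style models and can be shown for the diagonal case with zero-order hold. The code is a generic sketch of that shared step, not Time-SSM itself:

```python
import numpy as np

def discretize_zoh_diag(a, b, dt):
    """Zero-order-hold discretization of a diagonal continuous SSM
    x' = a*x + b*u (a, b are elementwise vectors, a != 0)."""
    a_d = np.exp(a * dt)              # state transition over one step
    b_d = (a_d - 1.0) / a * b         # exact ZOH input matrix for diagonal a
    return a_d, b_d

def ssm_scan(a_d, b_d, c, u):
    """Run the recurrence x_k = a_d*x_{k-1} + b_d*u_k, emitting y_k = c.x_k."""
    x = np.zeros_like(a_d)
    ys = []
    for u_k in u:
        x = a_d * x + b_d * u_k
        ys.append(float(c @ x))
    return np.array(ys)
```

The diagonal structure is what keeps the parameter count small: each state dimension evolves independently, so the scan is elementwise rather than a full matrix recurrence.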
arXiv Detail & Related papers (2024-05-25T17:42:40Z) - EMR-Merging: Tuning-Free High-Performance Model Merging [55.03509900949149]
We show that Elect, Mask & Rescale-Merging (EMR-Merging) achieves outstanding performance compared to existing merging methods.
EMR-Merging is tuning-free, thus requiring no data availability or any additional training while showing impressive performance.
arXiv Detail & Related papers (2024-05-23T05:25:45Z) - UniTS: A Unified Multi-Task Time Series Model [31.675845788410246]
UniTS is a unified multi-task time series model that integrates predictive and generative tasks into a single framework.
UniTS is tested on 38 datasets across human activity sensors, healthcare, engineering, and finance.
arXiv Detail & Related papers (2024-02-29T21:25:58Z) - Graph Spatiotemporal Process for Multivariate Time Series Anomaly Detection with Missing Values [67.76168547245237]
We introduce a novel framework called GST-Pro, which utilizes a graph spatiotemporal process and anomaly scorer to detect anomalies.
Our experimental results show that the GST-Pro method can effectively detect anomalies in time series data and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2024-01-11T10:10:16Z) - Gait Recognition in the Wild with Multi-hop Temporal Switch [81.35245014397759]
Gait recognition in the wild is a more practical problem that has attracted attention from the multimedia and computer vision communities.
This paper presents a novel multi-hop temporal switch method to achieve effective temporal modeling of gait patterns in real-world scenes.
arXiv Detail & Related papers (2022-09-01T10:46:09Z) - Hidden Parameter Recurrent State Space Models For Changing Dynamics Scenarios [18.08665164701404]
Recurrent State-space models assume that the dynamics are fixed and unchanging, which is rarely the case in real-world scenarios.
We introduce Hidden Parameter Recurrent State Space Models (HiP-RSSMs), a framework that parametrizes a family of related dynamical systems with a low-dimensional set of latent factors.
We show that HiP-RSSMs outperform RSSMs and competing multi-task models on several challenging robotic benchmarks, both on real-world systems and in simulation.
arXiv Detail & Related papers (2022-06-29T14:54:49Z) - Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z) - SAITS: Self-Attention-based Imputation for Time Series [6.321652307514677]
SAITS is a novel method based on the self-attention mechanism for missing value imputation in time series.
It learns missing values from a weighted combination of two diagonally-masked self-attention blocks.
Tests show SAITS outperforms state-of-the-art methods on the time-series imputation task efficiently.
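The diagonally-masked self-attention trick is concrete enough to demo: blanking the diagonal of the score matrix forces every time-step to be reconstructed from other steps, which is what turns plain self-attention into an imputation learner. Below is an untrained toy version using the raw inputs as queries, keys, and values - a sketch of the masking idea, not the full SAITS block:

```python
import numpy as np

def diag_masked_attention(x, observed):
    """Self-attention over (T, d) inputs with the diagonal masked out,
    so each time-step is rebuilt purely from *other* time-steps.

    observed: (T,) boolean mask; missing steps are also excluded as keys.
    """
    T, d = x.shape
    scores = x @ x.T / np.sqrt(d)
    np.fill_diagonal(scores, -np.inf)                      # no self-attention
    scores = np.where(observed[None, :], scores, -np.inf)  # skip missing keys
    scores = scores - scores.max(axis=-1, keepdims=True)   # stable softmax
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ x
```

In the degenerate two-step case the effect is easy to see: with the diagonal removed, each step's reconstruction is exactly the other step, so the model cannot succeed by copying its own input.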
arXiv Detail & Related papers (2022-02-17T08:40:42Z) - LIFE: Learning Individual Features for Multivariate Time Series Prediction with Missing Values [71.52335136040664]
We propose a Learning Individual Features (LIFE) framework, which provides a new paradigm for MTS prediction with missing values.
LIFE generates reliable features for prediction by using the correlated dimensions as auxiliary information and suppressing the interference from uncorrelated dimensions with missing values.
Experiments on three real-world data sets verify the superiority of LIFE to existing state-of-the-art models.
arXiv Detail & Related papers (2021-09-30T04:53:24Z) - Convolutional Tensor-Train LSTM for Spatio-temporal Learning [116.24172387469994]
We propose a higher-order LSTM model that can efficiently learn long-term correlations in the video sequence.
This is accomplished through a novel tensor train module that performs prediction by combining convolutional features across time.
Our results achieve state-of-the-art performance across a wide range of applications and datasets.
arXiv Detail & Related papers (2020-02-21T05:00:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.