TimeAutoML: Autonomous Representation Learning for Multivariate
Irregularly Sampled Time Series
- URL: http://arxiv.org/abs/2010.01596v1
- Date: Sun, 4 Oct 2020 15:01:46 GMT
- Authors: Yang Jiao, Kai Yang, Shaoyu Dou, Pan Luo, Sijia Liu, Dongjin Song
- Abstract summary: We propose an autonomous representation learning approach for multivariate time series (TimeAutoML) with irregular sampling rates and variable lengths.
Extensive empirical studies on real-world datasets demonstrate that the proposed TimeAutoML outperforms competing approaches on various tasks by a large margin.
- Score: 27.0506649441212
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multivariate time series (MTS) data are becoming increasingly ubiquitous in
diverse domains, e.g., IoT systems, health informatics, and 5G networks. To
obtain an effective representation of MTS data, it is not only essential to
consider unpredictable dynamics and highly variable lengths of these data but
also important to address the irregularities in the sampling rates of MTS.
Existing parametric approaches rely on manual hyperparameter tuning, which can
require substantial labor. Therefore, it is desirable to learn the
representation automatically and efficiently. To this end, we propose an
representation automatically and efficiently. To this end, we propose an
autonomous representation learning approach for multivariate time series
(TimeAutoML) with irregular sampling rates and variable lengths. As opposed to
previous works, we first present a representation learning pipeline in which
the configuration and hyperparameter optimization are fully automatic and can
be tailored for various tasks, e.g., anomaly detection, clustering, etc. Next,
a negative sample generation approach and an auxiliary classification task are
developed and integrated within TimeAutoML to enhance its representation
capability. Extensive empirical studies on real-world datasets demonstrate that
the proposed TimeAutoML outperforms competing approaches on various tasks by a
large margin. In fact, it achieves the best anomaly detection performance among
all comparison algorithms on 78 of the 85 UCR datasets, with up to a 20%
improvement in AUC score.
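Two of the ingredients the abstract mentions, handling variable-length, irregularly observed series via padding and masking, and automatically searching a representation hyperparameter, can be illustrated with a minimal generic sketch. This is not the TimeAutoML implementation; the masked-PCA "reconstruction score" merely stands in for an autoencoder's validation loss, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def pad_and_mask(series_list, max_len):
    """Pad variable-length multivariate series to a common length and
    return a mask marking the observed (possibly irregular) time steps."""
    n_feat = series_list[0].shape[1]
    X = np.zeros((len(series_list), max_len, n_feat))
    mask = np.zeros((len(series_list), max_len), dtype=bool)
    for i, s in enumerate(series_list):
        L = min(len(s), max_len)
        X[i, :L] = s[:L]
        mask[i, :L] = True
    return X, mask

def reconstruction_score(X, mask, n_components):
    """Masked PCA-style reconstruction error, a cheap stand-in for an
    autoencoder's validation loss during hyperparameter search."""
    flat = X[mask]                      # observed time steps only
    centered = flat - flat.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    W = Vt[:n_components]               # top principal directions
    recon = centered @ W.T @ W
    return float(np.mean((centered - recon) ** 2))

# toy irregular-length 3-feature series
data = [rng.normal(size=(rng.integers(20, 60), 3)) for _ in range(8)]
X, mask = pad_and_mask(data, max_len=60)

# tiny automatic search over one hyperparameter (latent size)
best = min((reconstruction_score(X, mask, k), k) for k in (1, 2, 3))
print("best latent size:", best[1])
```

A real AutoML pipeline would search many more configuration choices (encoder type, depth, learning rate) with a smarter strategy than enumeration, but the loop structure is the same: score each candidate configuration on held-out data and keep the best.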
Related papers
- SONNET: Enhancing Time Delay Estimation by Leveraging Simulated Audio [17.811771707446926]
We show that learning-based methods can, even when trained on synthetic data, significantly outperform GCC-PHAT on novel real-world data.
We provide our trained model, SONNET, which is runnable in real-time and works on novel data out of the box for many real data applications.
arXiv Detail & Related papers (2024-11-20T10:23:21Z)
- UniTS: A Unified Multi-Task Time Series Model [31.675845788410246]
UniTS is a unified multi-task time series model that integrates predictive and generative tasks into a single framework.
UniTS is tested on 38 datasets across human activity sensors, healthcare, engineering, and finance.
arXiv Detail & Related papers (2024-02-29T21:25:58Z)
- SMORE: Similarity-based Hyperdimensional Domain Adaptation for Multi-Sensor Time Series Classification [17.052624039805856]
We propose SMORE, a novel resource-efficient domain adaptation (DA) algorithm for multi-sensor time series classification.
SMORE achieves on average 1.98% higher accuracy than state-of-the-art (SOTA) DNN-based DA algorithms with 18.81x faster training and 4.63x faster inference.
arXiv Detail & Related papers (2024-02-20T18:48:49Z)
- AdaMerging: Adaptive Model Merging for Multi-Task Learning [68.75885518081357]
This paper introduces an innovative technique called Adaptive Model Merging (AdaMerging)
It aims to autonomously learn the coefficients for model merging, either in a task-wise or layer-wise manner, without relying on the original training data.
Compared to the current state-of-the-art task arithmetic merging scheme, AdaMerging showcases a remarkable 11% improvement in performance.
arXiv Detail & Related papers (2023-10-04T04:26:33Z)
- AutoML-GPT: Automatic Machine Learning with GPT [74.30699827690596]
We propose developing task-oriented prompts and automatically utilizing large language models (LLMs) to automate the training pipeline.
We present AutoML-GPT, which employs GPT as the bridge to diverse AI models and dynamically trains models with optimized hyperparameters.
This approach achieves remarkable results in computer vision, natural language processing, and other challenging areas.
arXiv Detail & Related papers (2023-05-04T02:09:43Z)
- Self-Supervised Representation Learning from Temporal Ordering of Automated Driving Sequences [49.91741677556553]
We propose TempO, a temporal ordering pretext task for pre-training region-level feature representations for perception tasks.
We embed each frame by an unordered set of proposal feature vectors, a representation that is natural for object detection or tracking systems.
Extensive evaluations on the BDD100K, nuImages, and MOT17 datasets show that our TempO pre-training approach outperforms single-frame self-supervised learning methods.
arXiv Detail & Related papers (2023-02-17T18:18:27Z)
- Enhancing Transformer Efficiency for Multivariate Time Series Classification [12.128991867050487]
We propose a methodology to investigate the relationship between model efficiency, accuracy, and complexity.
Comprehensive experiments on benchmark MTS datasets illustrate the effectiveness of our method.
arXiv Detail & Related papers (2022-03-28T03:25:19Z)
- Automated Machine Learning Techniques for Data Streams [91.3755431537592]
This paper surveys the state-of-the-art open-source AutoML tools, applies them to data collected from streams, and measures how their performance changes over time.
The results show that off-the-shelf AutoML tools can provide satisfactory results but in the presence of concept drift, detection or adaptation techniques have to be applied to maintain the predictive accuracy over time.
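A minimal sketch of the kind of drift detection this survey refers to, assuming a simple two-window error-rate comparison rather than any specific detector from the paper (names and thresholds are illustrative):

```python
from collections import deque

class WindowDriftDetector:
    """Flag concept drift when the recent error rate rises well above
    the error rate of an earlier, frozen reference window."""
    def __init__(self, window=50, threshold=0.15):
        self.ref = deque(maxlen=window)      # frozen once full
        self.recent = deque(maxlen=window)   # slides over new errors
        self.threshold = threshold

    def update(self, error):
        # fill the reference window first, then track recent errors
        if len(self.ref) < self.ref.maxlen:
            self.ref.append(error)
            return False
        self.recent.append(error)
        if len(self.recent) < self.recent.maxlen:
            return False
        recent_rate = sum(self.recent) / len(self.recent)
        ref_rate = sum(self.ref) / len(self.ref)
        return recent_rate - ref_rate > self.threshold

det = WindowDriftDetector(window=20, threshold=0.2)
# stable phase: ~10% errors, then a drift phase: ~60% errors
stream = [1 if i % 10 == 0 else 0 for i in range(40)] + \
         [1 if i % 10 < 6 else 0 for i in range(40)]
alarms = [i for i, e in enumerate(stream) if det.update(e)]
print("first drift alarm at step:", alarms[0] if alarms else None)
```

On a drift signal, an AutoML pipeline would typically re-trigger model selection or retraining on recent data, which is exactly the adaptation the survey finds necessary to keep accuracy stable over time.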
arXiv Detail & Related papers (2021-06-14T11:42:46Z)
- PSEUDo: Interactive Pattern Search in Multivariate Time Series with Locality-Sensitive Hashing and Relevance Feedback [3.347485580830609]
PSEUDo is an adaptive feature learning technique for exploring visual patterns in multi-track sequential data.
Our algorithm features sub-linear training and inference time.
We demonstrate superiority of PSEUDo in terms of efficiency, accuracy, and steerability.
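Locality-sensitive hashing over time-series windows, the technique in PSEUDo's title, can be sketched with random hyperplane projections; this is a generic illustration of the idea, not PSEUDo's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

def lsh_hash(windows, planes):
    """Sign of random projections -> one compact binary code per window."""
    return (windows @ planes.T > 0).astype(np.uint8)

def bucket(hashes):
    """Group window indices by their hash bit-string."""
    table = {}
    for i, h in enumerate(hashes):
        table.setdefault(h.tobytes(), []).append(i)
    return table

# toy univariate series containing a repeated motif
motif = np.sin(np.linspace(0, 2 * np.pi, 16))
series = np.concatenate([rng.normal(size=30), motif,
                         rng.normal(size=30), motif])
W = 16
windows = np.stack([series[i:i + W] for i in range(len(series) - W + 1)])
planes = rng.normal(size=(8, W))          # 8 random hyperplanes
table = bucket(lsh_hash(windows, planes))

# query with the motif: matching occurrences share its hash bucket
q = lsh_hash(motif[None, :], planes)[0]
candidates = table.get(q.tobytes(), [])
print("candidate window starts:", candidates)
```

Because bucket lookup is a hash probe rather than a scan over all windows, query time is sub-linear in the number of windows, which is the efficiency property the summary highlights.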
arXiv Detail & Related papers (2021-04-30T13:00:44Z)
- Deep Cellular Recurrent Network for Efficient Analysis of Time-Series Data with Spatial Information [52.635997570873194]
This work proposes a novel deep cellular recurrent neural network (DCRNN) architecture to process complex multi-dimensional time series data with spatial information.
The proposed architecture achieves state-of-the-art performance while using substantially fewer trainable parameters than comparable methods in the literature.
arXiv Detail & Related papers (2021-01-12T20:08:18Z)
- Learning summary features of time series for likelihood free inference [93.08098361687722]
We present a data-driven strategy for automatically learning summary features from time series data.
Our results indicate that learning summary features from data can compete and even outperform LFI methods based on hand-crafted values.
arXiv Detail & Related papers (2020-12-04T19:21:37Z)
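The hand-crafted summary values that such learned features are compared against can be as simple as a mean, a standard deviation, and a lag-1 autocorrelation. A minimal sketch (not the paper's learned approach), using an AR(1) process whose autocorrelation the features should recover:

```python
import numpy as np

def summary_features(x):
    """A few hand-crafted summary statistics of the kind often used as
    the baseline that learned summaries are compared against."""
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()
    ac1 = float(xc[:-1] @ xc[1:] / (xc @ xc))   # lag-1 autocorrelation
    return np.array([x.mean(), x.std(), ac1])

rng = np.random.default_rng(2)

def ar1(phi, n=500):
    """Simulate an AR(1) process; higher phi -> higher autocorrelation."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

f_low = summary_features(ar1(0.1))
f_high = summary_features(ar1(0.9))
print("lag-1 autocorr (phi=0.1 vs 0.9):", f_low[2], f_high[2])
```

In likelihood-free inference, such feature vectors replace the full series when comparing simulations to observations; the paper's point is that a learned mapping can outperform fixed statistics like these.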
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.