Multivariate Time Series Early Classification Across Channel and Time Dimensions
- URL: http://arxiv.org/abs/2306.14606v1
- Date: Mon, 26 Jun 2023 11:30:33 GMT
- Title: Multivariate Time Series Early Classification Across Channel and Time Dimensions
- Authors: Leonardos Pantiskas, Kees Verstoep, Mark Hoogendoorn, Henri Bal
- Abstract summary: We propose a more flexible early classification pipeline that offers a more granular consideration of input channels.
Our method can enhance the early classification paradigm by achieving improved accuracy for equal input utilization.
- Score: 3.5786621294068373
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nowadays, the deployment of deep learning models on edge devices for
addressing real-world classification problems is becoming more prevalent.
Moreover, there is a growing popularity in the approach of early
classification, a technique that involves classifying the input data after
observing only an early portion of it, aiming to achieve reduced communication
and computation requirements, which are crucial parameters in edge intelligence
environments. While early classification in the field of time series analysis
has been broadly researched, existing solutions for multivariate time series
problems primarily focus on early classification along the temporal dimension,
treating the multiple input channels in a collective manner. In this study, we
propose a more flexible early classification pipeline that offers a more
granular consideration of input channels and extends the early classification
paradigm to the channel dimension. To implement this method, we utilize
reinforcement learning techniques and introduce constraints to ensure the
feasibility and practicality of our objective. To validate its effectiveness,
we conduct experiments using synthetic data and we also evaluate its
performance on real datasets. The comprehensive results from our experiments
demonstrate that, for multiple datasets, our method can enhance the early
classification paradigm by achieving improved accuracy for equal input
utilization.
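The paper's exact pipeline is not reproduced in this listing, but the core idea of extending early classification to the channel dimension can be illustrated with a minimal sketch: a per-channel policy (a stand-in for the learned reinforcement-learning agent) decides at each timestep which channels to keep observing, and classification halts once the classifier is confident. All names, the zero-marking of unobserved values, and the confidence threshold below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def channelwise_early_classify(x, classify, keep_channel, threshold=0.9):
    """Early classification across both time and channel dimensions.

    x:            array of shape (channels, timesteps).
    classify:     maps the partially observed input to class probabilities.
    keep_channel: per-channel policy (stand-in for the RL agent); returns
                  True while a channel should continue being observed.
    Returns (predicted_class, fraction_of_input_used).
    """
    n_channels, n_steps = x.shape
    observed = np.zeros_like(x)            # zeros mark unobserved values
    active = np.ones(n_channels, dtype=bool)
    used = 0
    for t in range(n_steps):
        for c in range(n_channels):
            if active[c]:
                observed[c, t] = x[c, t]
                used += 1
                # the policy may drop a channel it deems uninformative
                if not keep_channel(c, t, observed):
                    active[c] = False
        probs = classify(observed)
        # halt early once confident, or when no channel is still observed
        if probs.max() >= threshold or not active.any():
            return int(probs.argmax()), used / x.size
    return int(classify(observed).argmax()), used / x.size

# Toy usage: 3 channels, 4 timesteps; a stand-in classifier that becomes
# confident once it has seen the second value of channel 0, and a policy
# that drops channel 2 immediately.
x = np.arange(1.0, 13.0).reshape(3, 4)
def classify(obs):
    return np.array([0.95, 0.05]) if obs[0, 1] != 0 else np.array([0.5, 0.5])
def keep_channel(c, t, obs):
    return c != 2
label, frac = channelwise_early_classify(x, classify, keep_channel)
# frac < 1.0: the prediction was made from a partial view of the input
```

The key difference from purely temporal early classification is that input savings come from two sources at once: stopping in time and pruning channels, which is what the abstract means by "equal input utilization" comparisons.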
Related papers
- Granularity Matters in Long-Tail Learning [62.30734737735273]
We offer a novel perspective on long-tail learning, inspired by an observation: datasets with finer granularity tend to be less affected by data imbalance.
We introduce open-set auxiliary classes that are visually similar to existing ones, aiming to enhance representation learning for both head and tail classes.
To prevent the overwhelming presence of auxiliary classes from disrupting training, we introduce a neighbor-silencing loss.
arXiv Detail & Related papers (2024-10-21T13:06:21Z)
- An End-to-End Model for Time Series Classification In the Presence of Missing Values [25.129396459385873]
Time series classification with missing data is a prevalent issue in time series analysis.
This study proposes an end-to-end neural network that unifies data imputation and representation learning within a single framework.
arXiv Detail & Related papers (2024-08-11T19:39:12Z)
- Adaptive End-to-End Metric Learning for Zero-Shot Cross-Domain Slot Filling [2.6056468338837457]
Slot filling poses a critical challenge when handling a novel domain whose samples are never seen during training.
Most prior works deal with this problem in a two-pass pipeline manner based on metric learning.
We propose a new adaptive end-to-end metric learning scheme for the challenging zero-shot slot filling.
arXiv Detail & Related papers (2023-10-23T19:01:16Z)
- Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization [51.34904967046097]
Continual learning seeks to overcome the challenge of catastrophic forgetting, where a model forgets previously learnt information.
We introduce a novel prior-based method that better constrains parameter growth, reducing catastrophic forgetting.
Results show that BAdam achieves state-of-the-art performance for prior-based methods on challenging single-headed class-incremental experiments.
arXiv Detail & Related papers (2023-09-15T17:10:51Z)
- Time Series Contrastive Learning with Information-Aware Augmentations [57.45139904366001]
A key component of contrastive learning is to select appropriate augmentations imposing some priors to construct feasible positive samples.
How to find the desired augmentations of time series data that are meaningful for given contrastive learning tasks and datasets remains an open question.
We propose a new contrastive learning approach with information-aware augmentations, InfoTS, that adaptively selects optimal augmentations for time series representation learning.
arXiv Detail & Related papers (2023-03-21T15:02:50Z)
- A Deep Dive into Deep Cluster [0.2578242050187029]
DeepCluster is a simple and scalable method for unsupervised pretraining of visual representations.
We show that DeepCluster's convergence and performance depend on the interplay between the quality of the randomly initialized filters of the convolutional layer and the selected number of clusters.
arXiv Detail & Related papers (2022-07-24T22:55:09Z) - Data Augmentation techniques in time series domain: A survey and
taxonomy [0.20971479389679332]
Deep neural networks applied to time series depend heavily on the size and consistency of the datasets used in training.
This work systematically reviews the current state-of-the-art in the area to provide an overview of all available algorithms.
The ultimate aim of this study is to provide a summary of the evolution and performance of areas that produce better results to guide future researchers in this field.
arXiv Detail & Related papers (2022-06-25T17:09:00Z) - Early Time-Series Classification Algorithms: An Empirical Comparison [59.82930053437851]
Early Time-Series Classification (ETSC) is the task of predicting the class of incoming time-series by observing as few measurements as possible.
We evaluate six existing ETSC algorithms on publicly available data, as well as on two newly introduced datasets.
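ETSC methods are typically judged on two competing quantities: how accurate the prediction is and how small a prefix of the series it was made from. A minimal sketch of that trade-off follows; the harmonic-mean combination is one commonly used score in the ETSC literature, and the function names here are illustrative, not taken from the paper.

```python
def earliness(t_stop, series_length):
    """Fraction of the series observed before the classifier committed."""
    return t_stop / series_length

def etsc_score(accuracy, earl):
    """Harmonic mean of accuracy and (1 - earliness): rewards predictions
    that are both correct and made from a short prefix of the series."""
    return 2 * accuracy * (1 - earl) / (accuracy + (1 - earl))

# A classifier that stops after 25 of 100 steps with 80% accuracy:
score = etsc_score(0.8, earliness(25, 100))
```

Because the harmonic mean collapses to zero when either term does, a method cannot game the score by stopping instantly with poor accuracy, or by waiting for the full series.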
arXiv Detail & Related papers (2022-03-03T10:43:56Z)
- PatchX: Explaining Deep Models by Intelligible Pattern Patches for Time-series Classification [6.820831423843006]
We propose a novel hybrid approach that utilizes deep neural networks and traditional machine learning algorithms.
Our method first performs a fine-grained classification of the patches, followed by sample-level classification.
arXiv Detail & Related papers (2021-02-11T10:08:09Z)
- Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity [57.275753974812666]
Federated learning involves learning from data samples distributed across a network of clients while the data remains local.
In this paper, we propose a novel straggler-resilient federated learning method that incorporates statistical characteristics of the clients' data to adaptively select the clients in order to speed up the learning procedure.
arXiv Detail & Related papers (2020-12-28T19:21:14Z)
- Fine-Grain Few-Shot Vision via Domain Knowledge as Hyperspherical Priors [79.22051549519989]
Prototypical networks have been shown to perform well at few-shot learning tasks in computer vision.
We show how we can achieve few-shot fine-grain classification by maximally separating the classes while incorporating domain knowledge as informative priors.
arXiv Detail & Related papers (2020-05-23T02:10:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.