Generalized Representations Learning for Time Series Classification
- URL: http://arxiv.org/abs/2209.07027v1
- Date: Thu, 15 Sep 2022 03:36:31 GMT
- Title: Generalized Representations Learning for Time Series Classification
- Authors: Wang Lu, Jindong Wang, Xinwei Sun, Yiqiang Chen, Xing Xie
- Abstract summary: We argue that the temporal complexity of time series is attributable to unknown latent distributions within the data.
We present experiments on gesture recognition, speech commands recognition, wearable stress and affect detection, and sensor-based human activity recognition.
- Score: 28.230863650758447
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Time series classification is an important problem in the real world. Because time series are non-stationary, i.e., their distribution changes over time, it remains challenging to build models that generalize to unseen distributions. In this paper, we propose to view the time series classification problem from a distribution perspective. We argue that the temporal complexity is attributable to unknown latent distributions within the data. To this end, we propose DIVERSIFY to learn generalized representations for time series classification. DIVERSIFY takes an iterative process: it first obtains the worst-case distribution scenario via adversarial training, then matches the distributions of the obtained sub-domains. We also present theoretical insights. We conduct experiments on gesture recognition, speech commands recognition, wearable stress and affect detection, and sensor-based human activity recognition with a total of seven datasets in different settings. Results demonstrate that DIVERSIFY significantly outperforms other baselines and effectively characterizes the latent distributions, as shown by qualitative and quantitative analysis.
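The abstract describes an iterative two-step loop: characterize a worst-case split of the data into latent sub-domains, then adversarially match those sub-domain distributions. Below is a minimal, illustrative sketch of such a loop. The number of sub-domains K, the pseudo-labeling by the domain head's own predictions, and the gradient-reversal trick are all assumptions made for illustration; none of these choices are claimed to match the authors' actual implementation.

```python
# A hedged sketch of a DIVERSIFY-style training step, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer, a common way to implement adversarial
    distribution matching (an assumption here; the paper may differ)."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lamb * grad, None

def grad_reverse(x, lamb=1.0):
    return GradReverse.apply(x, lamb)

K = 3  # assumed number of latent sub-domains (a hyperparameter)
featurizer = nn.Sequential(nn.Flatten(), nn.Linear(128, 64), nn.ReLU())
class_head = nn.Linear(64, 10)   # task classifier (10 classes, invented)
domain_head = nn.Linear(64, K)   # latent sub-domain discriminator

params = (list(featurizer.parameters()) + list(class_head.parameters())
          + list(domain_head.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)

def train_step(x, y):
    z = featurizer(x)

    # Step 1 (worst-case latent distributions): assign each sample a pseudo
    # sub-domain label; here, simply the domain head's current prediction.
    with torch.no_grad():
        pseudo_d = domain_head(z).argmax(dim=1)

    # Step 2 (distribution matching): the domain head tries to tell the
    # latent sub-domains apart, while the reversed gradient pushes the
    # featurizer to make them indistinguishable.
    cls_loss = F.cross_entropy(class_head(z), y)
    dom_loss = F.cross_entropy(domain_head(grad_reverse(z)), pseudo_d)

    loss = cls_loss + dom_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with random features standing in for flattened time series.
x = torch.randn(32, 128)
y = torch.randint(0, 10, (32,))
print(train_step(x, y))
```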
Related papers
- Generalizing to any diverse distribution: uniformity, gentle finetuning and rebalancing [55.791818510796645]
We aim to develop models that generalize well to any diverse test distribution, even if the latter deviates significantly from the training data.
Various approaches like domain adaptation, domain generalization, and robust optimization attempt to address the out-of-distribution challenge.
We adopt a more conservative perspective by accounting for the worst-case error across all sufficiently diverse test distributions within a known domain.
arXiv Detail & Related papers (2024-10-08T12:26:48Z) - Continual Learning of Nonlinear Independent Representations [17.65617189829692]
We show that model identifiability progresses from a subspace level to a component-wise level as the number of distributions increases.
Our method achieves performance comparable to nonlinear ICA methods trained jointly on multiple offline distributions.
arXiv Detail & Related papers (2024-08-11T14:33:37Z) - Continuous Invariance Learning [37.5006565403112]
We show that existing invariance learning methods can fail for continuous domain problems.
We propose Continuous Invariance Learning (CIL), which extracts invariant features across continuously indexed domains.
CIL consistently outperforms strong baselines across all tasks.
arXiv Detail & Related papers (2023-10-09T02:18:45Z) - DIVERSIFY: A General Framework for Time Series Out-of-distribution Detection and Generalization [58.704753031608625]
Time series is one of the most challenging modalities in machine learning research.
OOD detection and generalization on time series tend to suffer due to their non-stationary nature.
We propose DIVERSIFY, a framework for OOD detection and generalization on dynamic distributions of time series.
arXiv Detail & Related papers (2023-08-04T12:27:11Z) - Concept Drift and Long-Tailed Distribution in Fine-Grained Visual Categorization: Benchmark and Method [84.68818879525568]
We present a Concept Drift and Long-Tailed Distribution dataset.
The characteristics of instances tend to vary with time and exhibit a long-tailed distribution.
We propose a feature recombination framework to address the learning challenges associated with CDLT.
arXiv Detail & Related papers (2023-06-04T12:42:45Z) - Wild-Time: A Benchmark of in-the-Wild Distribution Shift over Time [69.77704012415845]
Temporal shifts can considerably degrade the performance of machine learning models deployed in the real world.
We benchmark 13 prior approaches, including methods in domain generalization, continual learning, self-supervised learning, and ensemble learning.
Under both evaluation strategies, we observe an average performance drop of 20% from in-distribution to out-of-distribution data.
arXiv Detail & Related papers (2022-11-25T17:07:53Z) - Decentralized Local Stochastic Extra-Gradient for Variational Inequalities [125.62877849447729]
We consider distributed variational inequalities (VIs) on domains whose problem data is heterogeneous (non-IID) and distributed across many devices.
We make a very general assumption on the computational network that covers fully decentralized computation settings.
We theoretically analyze its convergence rate in the strongly-monotone, monotone, and non-monotone settings.
arXiv Detail & Related papers (2021-06-15T17:45:51Z) - Pareto GAN: Extending the Representational Power of GANs to Heavy-Tailed Distributions [6.356866333887868]
We show that existing GAN architectures do a poor job of matching the behavior of heavy-tailed distributions.
We use extreme value theory and the functional properties of neural networks to learn a distribution that matches the behavior of the adversarial features.
arXiv Detail & Related papers (2021-01-22T14:06:02Z) - Explainable Multivariate Time Series Classification: A Deep Neural Network Which Learns To Attend To Important Variables As Well As Informative Time Intervals [32.30627405832656]
Time series data is prevalent in a wide variety of real-world applications.
A key criterion for understanding such predictive models is elucidating and quantifying the contribution of time-varying input variables to the classification.
We introduce a novel, modular, convolution-based feature extraction and attention mechanism that simultaneously identifies the variables and the time intervals that determine the classification output (see the first code sketch after this list).
arXiv Detail & Related papers (2020-11-23T19:16:46Z) - When Relation Networks meet GANs: Relation GANs with Triplet Loss [110.7572918636599]
Training stability is still a lingering concern for generative adversarial networks (GANs).
In this paper, we explore a relation network architecture for the discriminator and design a triplet loss that improves generalization and stability (see the second code sketch after this list).
Experiments on benchmark datasets show that the proposed relation discriminator and new loss provide significant improvements on various vision tasks.
arXiv Detail & Related papers (2020-02-24T11:35:28Z)
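First sketch: the Explainable Multivariate Time Series Classification entry describes a convolution-based feature extractor with attention over both input variables and time intervals. Here is a minimal sketch of that general idea, under assumptions throughout: the layer sizes, the class `AttnMTSClassifier`, and the specific attention layout are invented for illustration and are not the paper's architecture.

```python
# A hedged sketch of variable- and time-attention for multivariate series.
import torch
import torch.nn as nn

class AttnMTSClassifier(nn.Module):
    def __init__(self, n_vars=6, n_steps=50, n_classes=4):
        super().__init__()
        self.conv = nn.Conv1d(n_vars, 32, kernel_size=5, padding=2)
        self.var_attn = nn.Linear(n_steps, 1)   # one score per variable
        self.time_attn = nn.Linear(32, 1)       # one score per time step
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):  # x: (batch, n_vars, n_steps)
        # Variable attention: softmax-normalized weight per input channel.
        v_scores = torch.softmax(self.var_attn(x).squeeze(-1), dim=1)
        x = x * v_scores.unsqueeze(-1)
        h = self.conv(x)  # (batch, 32, n_steps)
        # Time attention: softmax-normalized weight per time step.
        t_scores = torch.softmax(
            self.time_attn(h.transpose(1, 2)).squeeze(-1), dim=1)
        pooled = (h * t_scores.unsqueeze(1)).sum(dim=2)  # (batch, 32)
        # The attention scores double as per-variable and per-interval
        # explanations of the prediction.
        return self.head(pooled), v_scores, t_scores

model = AttnMTSClassifier()
logits, var_w, time_w = model(torch.randn(8, 6, 50))
print(logits.shape, var_w.shape, time_w.shape)  # (8,4) (8,6) (8,50)
```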
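Second sketch: the Relation GANs entry mentions a triplet loss for the discriminator. A common reading is that embeddings of two real samples should be closer to each other than to an embedding of a generated sample, by a margin. The embedding network shape and margin below are illustrative assumptions, not the paper's actual design.

```python
# A hedged sketch of a triplet-style discriminator objective.
import torch
import torch.nn as nn

embed = nn.Sequential(nn.Flatten(), nn.Linear(784, 128))  # assumed embedder
triplet = nn.TripletMarginLoss(margin=1.0)

real_a = torch.randn(16, 784)  # anchor: a real sample
real_p = torch.randn(16, 784)  # positive: another real sample
fake_n = torch.randn(16, 784)  # negative: generator output (stand-in)

# Discriminator loss: real pairs should relate more closely than
# real-fake pairs, which in turn shapes the generator's gradients.
d_loss = triplet(embed(real_a), embed(real_p), embed(fake_n))
print(d_loss.item())
```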