TCGAN: Convolutional Generative Adversarial Network for Time Series
Classification and Clustering
- URL: http://arxiv.org/abs/2309.04732v1
- Date: Sat, 9 Sep 2023 09:33:25 GMT
- Title: TCGAN: Convolutional Generative Adversarial Network for Time Series
Classification and Clustering
- Authors: Fanling Huang and Yangdong Deng
- Abstract summary: Time-series Convolutional GAN (TCGAN) learns by playing an adversarial game between two one-dimensional CNNs.
The results demonstrate that TCGAN is faster and more accurate than existing time-series GANs.
- Score: 3.2474405288441544
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recent works have demonstrated the superiority of supervised Convolutional
Neural Networks (CNNs) in learning hierarchical representations from time
series data for successful classification. These methods require sufficiently
large labeled data for stable learning; however, acquiring high-quality labeled
time series data can be costly and potentially infeasible. Generative
Adversarial Networks (GANs) have achieved great success in enhancing
unsupervised and semi-supervised learning. Nonetheless, to the best of our knowledge,
it remains unclear how effectively GANs can serve as a general-purpose solution
to learn representations for time series recognition, i.e., classification and
clustering. The above considerations inspire us to introduce a Time-series
Convolutional GAN (TCGAN). TCGAN learns by playing an adversarial game between
two one-dimensional CNNs (i.e., a generator and a discriminator) in the absence
of label information. Parts of the trained TCGAN are then reused to construct a
representation encoder to empower linear recognition methods. We conducted
comprehensive experiments on synthetic and real-world datasets. The results
demonstrate that TCGAN is faster and more accurate than existing time-series
GANs. The learned representations enable simple classification and clustering
methods to achieve superior and stable performance. Furthermore, TCGAN retains
high efficacy in scenarios with few-labeled and imbalanced-labeled data. Our
work provides a promising path to effectively utilize abundant unlabeled time
series data.
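The pipeline the abstract describes, adversarial training of two one-dimensional CNNs followed by reusing part of the trained discriminator as a representation encoder for simple linear recognizers, can be sketched roughly as follows. All shapes, layer sizes, and the single-conv-layer encoder here are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels, stride=2):
    """Valid 1-D convolution of one series with a bank of kernels, plus ReLU."""
    k = kernels.shape[1]
    windows = np.stack([x[i:i + k] for i in range(0, len(x) - k + 1, stride)])
    return np.maximum(windows @ kernels.T, 0.0)   # (positions, n_kernels)

# Hypothetical sizes: series length 64, 8 kernels of width 5, 16-dim noise.
W_enc = rng.standard_normal((8, 5)) * 0.1    # discriminator conv filters
w_head = rng.standard_normal(8) * 0.1        # real/fake output weights
W_gen = rng.standard_normal((16, 64)) * 0.1  # generator: noise -> series

def encode(x):
    """Reusable part of the discriminator: conv + global average pooling."""
    return conv1d(x, W_enc).mean(axis=0)      # 8-dim representation

def discriminate(x):
    return float(encode(x) @ w_head)          # real/fake logit

def generate(z):
    return np.tanh(z @ W_gen)                 # synthetic series of length 64

# One illustrative step of the adversarial game (real training would update
# both networks by gradient descent on this loss):
real = np.sin(np.linspace(0, 6 * np.pi, 64))
fake = generate(rng.standard_normal(16))
d_loss = (np.log1p(np.exp(-discriminate(real)))
          + np.log1p(np.exp(discriminate(fake))))

# After training, the encoder feeds a simple linear recognizer:
series = [np.sin(np.linspace(0, (i + 1) * np.pi, 64)) for i in range(10)]
labels = np.array([i % 2 for i in range(10)], dtype=float)
feats = np.stack([encode(s) for s in series])        # (10, 8) features
w, *_ = np.linalg.lstsq(feats, labels, rcond=None)   # least-squares classifier
preds = (feats @ w > 0.5).astype(float)
```

The key design point is that only the discriminator's convolutional trunk (here `encode`) is kept after the adversarial game; the real/fake head is discarded, and recognition is delegated to a linear model on the learned features.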
Related papers
- Time Series Representation Learning with Supervised Contrastive Temporal Transformer [8.223940676615857]
We develop a simple, yet novel fusion model, called: Supervised COntrastive Temporal Transformer (SCOTT).
We first investigate suitable augmentation methods for various types of time series data to assist with learning change-invariant representations.
arXiv Detail & Related papers (2024-03-16T03:37:19Z)
- Multi-class Temporal Logic Neural Networks [8.20828081284034]
Time-series data can represent the behaviors of autonomous systems, such as drones and self-driving cars.
We propose a method that combines neural networks that represent STL specifications for multi-class classification of time-series data.
We evaluate our method on two datasets and compare it with state-of-the-art baselines.
arXiv Detail & Related papers (2024-02-17T00:22:29Z)
- Investigating Power laws in Deep Representation Learning [4.996066540156903]
We propose a framework to evaluate the quality of representations in unlabelled datasets.
We estimate the coefficient of the power law, $\alpha$, across three key attributes which influence representation learning.
Notably, $\alpha$ is computable from the representations without knowledge of any labels, thereby offering a framework to evaluate the quality of representations in unlabelled datasets.
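The summary does not specify how the coefficient is estimated. One common way to obtain a power-law coefficient from unlabeled representations is a log-log linear fit of the representation covariance eigenspectrum; the sketch below assumes that reading (the function name and the synthetic data are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

def powerlaw_alpha(reps):
    """Fit eigenvalue_i ~ i^(-alpha): slope of log-eigenvalue vs. log-rank.

    `reps` is an (n_samples, dim) matrix of representations; no labels needed.
    """
    cov = np.cov(reps, rowvar=False)
    eig = np.sort(np.linalg.eigvalsh(cov))[::-1]   # descending spectrum
    eig = eig[eig > 1e-12]                         # drop numerically zero modes
    ranks = np.arange(1, len(eig) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(eig), 1)
    return -slope                                  # alpha > 0 for a decaying spectrum

# Synthetic representations whose variance decays roughly as i^(-1),
# so the estimate should come out near 1:
dim, n = 32, 2000
scales = np.arange(1, dim + 1) ** -0.5             # std ~ i^(-1/2) -> var ~ i^(-1)
reps = rng.standard_normal((n, dim)) * scales
alpha = powerlaw_alpha(reps)
```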
arXiv Detail & Related papers (2022-02-11T18:11:32Z)
- Self-Ensembling GAN for Cross-Domain Semantic Segmentation [107.27377745720243]
This paper proposes a self-ensembling generative adversarial network (SE-GAN) exploiting cross-domain data for semantic segmentation.
In SE-GAN, a teacher network and a student network constitute a self-ensembling model for generating semantic segmentation maps, which together with a discriminator, forms a GAN.
Despite its simplicity, we find SE-GAN can significantly boost the performance of adversarial training and enhance the stability of the model.
arXiv Detail & Related papers (2021-12-15T09:50:25Z)
- Calibrating Class Activation Maps for Long-Tailed Visual Recognition [60.77124328049557]
We present two effective modifications of CNNs to improve network learning from long-tailed distribution.
First, we present a Class Activation Map (CAMC) module to improve the learning and prediction of network classifiers.
Second, we investigate the use of normalized classifiers for representation learning in long-tailed problems.
arXiv Detail & Related papers (2021-08-29T05:45:03Z)
- Class Balancing GAN with a Classifier in the Loop [58.29090045399214]
We introduce a novel theoretically motivated Class Balancing regularizer for training GANs.
Our regularizer makes use of the knowledge from a pre-trained classifier to ensure balanced learning of all the classes in the dataset.
We demonstrate the utility of our regularizer in learning representations for long-tailed distributions via achieving better performance than existing approaches over multiple datasets.
arXiv Detail & Related papers (2021-06-17T11:41:30Z)
- ORDisCo: Effective and Efficient Usage of Incremental Unlabeled Data for Semi-supervised Continual Learning [52.831894583501395]
Continual learning assumes the incoming data are fully labeled, which might not be applicable in real applications.
We propose deep Online Replay with Discriminator Consistency (ORDisCo) to interdependently learn a classifier with a conditional generative adversarial network (GAN)
We show ORDisCo achieves significant performance improvement on various semi-supervised learning benchmark datasets for SSCL.
arXiv Detail & Related papers (2021-01-02T09:04:14Z)
- Improving Generative Adversarial Networks with Local Coordinate Coding [150.24880482480455]
Generative adversarial networks (GANs) have shown remarkable success in generating realistic data from some predefined prior distribution.
In practice, semantic information might be represented by some latent distribution learned from data.
We propose an LCCGAN model with local coordinate coding (LCC) to improve the performance of generating data.
arXiv Detail & Related papers (2020-07-28T09:17:50Z)
- Temporal Calibrated Regularization for Robust Noisy Label Learning [60.90967240168525]
Deep neural networks (DNNs) exhibit great success on many tasks with the help of large-scale well annotated datasets.
However, labeling large-scale data can be very costly and error-prone so that it is difficult to guarantee the annotation quality.
We propose a Temporal Calibrated Regularization (TCR) in which we utilize the original labels and the predictions in the previous epoch together.
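The summary does not give TCR's exact formula for combining the original labels with the previous epoch's predictions. One plausible reading, a convex blend controlled by a mixing weight, can be sketched as follows (the function name, the blending form, and the weight `beta` are all assumptions):

```python
import numpy as np

def tcr_targets(onehot_labels, prev_epoch_probs, beta=0.7):
    """Hypothetical sketch: calibrate training targets by blending the
    (possibly noisy) original one-hot labels with the model's softmax
    predictions from the previous epoch."""
    return beta * onehot_labels + (1.0 - beta) * prev_epoch_probs

y = np.array([[1.0, 0.0], [0.0, 1.0]])        # original (noisy) labels
p_prev = np.array([[0.6, 0.4], [0.9, 0.1]])   # previous-epoch predictions
t = tcr_targets(y, p_prev)                    # softened training targets
```

Because both inputs are probability distributions, the blended targets still sum to one per sample, so they can be used directly in a cross-entropy loss.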
arXiv Detail & Related papers (2020-07-01T04:48:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.