Plotting time: On the usage of CNNs for time series classification
- URL: http://arxiv.org/abs/2102.04179v1
- Date: Mon, 8 Feb 2021 13:23:01 GMT
- Title: Plotting time: On the usage of CNNs for time series classification
- Authors: Nuno M. Rodrigues, João E. Batista, Leonardo Trujillo, Bernardo
Duarte, Mario Giacobini, Leonardo Vanneschi, Sara Silva
- Abstract summary: We present a novel approach for time series classification where we represent time series data as plot images and feed them to a simple CNN.
Our approach is very promising, achieving the best results on both real-world datasets and matching or beating the best state-of-the-art methods on six UCR datasets.
- Score: 1.0390583509657398
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a novel approach for time series classification where we represent
time series data as plot images and feed them to a simple CNN, outperforming
several state-of-the-art methods. We propose a simple and highly replicable way
of plotting the time series, and feed these images as input to a non-optimized
shallow CNN, without any normalization or residual connections. These
representations are no more than default line plots using the time series data,
where the only pre-processing applied is to reduce the number of white pixels
in the image. We compare our method with different state-of-the-art methods
specialized in time series classification on two real-world, non-public
datasets, as well as 98 datasets from the UCR dataset collection. The results
show that our approach is very promising, achieving the best results on both
real-world datasets and matching or beating the best state-of-the-art methods on
six UCR datasets. We argue that, if a simple, naive design like ours can obtain
such good results, it is worth further exploring the capabilities of using
image representations of time series data, along with more powerful CNNs, for
classification and other related tasks.
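The core idea above, turning a raw series into a line-plot image that a CNN can consume, can be sketched in plain Python. This is a hypothetical illustration, not the authors' actual plotting pipeline: the image size, nearest-neighbour sampling, and gap-filling rule are all assumptions.

```python
# Hypothetical sketch: rasterise a time series into a small binary
# "line plot" image of the kind the paper feeds to a shallow CNN.
# Sizes and sampling choices are illustrative, not the authors' pipeline.

def rasterize_series(series, height=32, width=64):
    """Scale a 1-D series into a height x width binary image (1 = ink)."""
    lo, hi = min(series), max(series)
    span = (hi - lo) or 1.0
    img = [[0] * width for _ in range(height)]
    n = len(series)
    for x in range(width):
        # nearest-neighbour sample of the series along the x axis
        t = series[min(n - 1, x * n // width)]
        y = int((t - lo) / span * (height - 1))
        img[height - 1 - y][x] = 1  # flip so larger values sit higher
    # connect vertical gaps between neighbouring columns to mimic a line plot
    prev = None
    for x in range(width):
        y = next(r for r in range(height) if img[r][x])
        if prev is not None:
            for r in range(min(prev, y), max(prev, y) + 1):
                img[r][x] = 1
        prev = y
    return img

image = rasterize_series([0, 1, 3, 2, 5, 4, 6, 2, 1, 0])
```

The resulting 32x64 grid can be treated as a single-channel input image; cropping tight to the curve, as the abstract suggests, would further reduce the number of white pixels.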
Related papers
- Image Clustering via the Principle of Rate Reduction in the Age of Pretrained Models [37.574691902971296]
We propose a novel image clustering pipeline that leverages the powerful feature representation of large pre-trained models.
We show that our pipeline works well on standard datasets such as CIFAR-10, CIFAR-100, and ImageNet-1k.
arXiv Detail & Related papers (2023-06-08T15:20:27Z) - LB-SimTSC: An Efficient Similarity-Aware Graph Neural Network for
Semi-Supervised Time Series Classification [4.7828959446344275]
We propose a new efficient semi-supervised time series classification technique, LB-SimTSC, with a new graph construction module.
We construct the pairwise distance matrix using LB_Keogh and build a graph for the graph neural network.
Results demonstrate that this approach can be up to 104x faster than SimTSC when constructing the graph on large datasets.
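The speed-up above comes from replacing exact DTW with the much cheaper LB_Keogh lower bound when filling the pairwise distance matrix. A minimal sketch of LB_Keogh, with illustrative names and a hypothetical warping window, might look like this:

```python
# Hypothetical sketch of the LB_Keogh lower bound on DTW distance,
# the quantity LB-SimTSC uses when building its pairwise distance
# matrix. Variable names and the default window are illustrative.

def lb_keogh(query, candidate, r=3):
    """Lower-bound the DTW distance between two equal-length series.

    An upper/lower envelope is built around `candidate` with warping
    window `r`; only the parts of `query` falling outside the envelope
    contribute to the bound.
    """
    total = 0.0
    n = len(candidate)
    for i, q in enumerate(query):
        window = candidate[max(0, i - r): min(n, i + r + 1)]
        upper, lower = max(window), min(window)
        if q > upper:
            total += (q - upper) ** 2
        elif q < lower:
            total += (q - lower) ** 2
    return total ** 0.5
```

Because the bound never exceeds the true DTW distance, a graph built from it keeps genuinely similar pairs close while avoiding the quadratic-time DTW computation for every pair.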
arXiv Detail & Related papers (2023-01-12T06:49:55Z) - Decoupled Mixup for Generalized Visual Recognition [71.13734761715472]
We propose a novel "Decoupled-Mixup" method to train CNN models for visual recognition.
Our method decouples each image into discriminative and noise-prone regions, and then heterogeneously combines these regions to train CNN models.
Experiment results show the high generalization performance of our method on testing data that are composed of unseen contexts.
arXiv Detail & Related papers (2022-10-26T15:21:39Z) - Scrape, Cut, Paste and Learn: Automated Dataset Generation Applied to
Parcel Logistics [58.720142291102135]
We present a fully automated pipeline to generate a synthetic dataset for instance segmentation in four steps.
We first scrape images for the objects of interest from popular image search engines.
We compare three different methods for image selection: Object-agnostic pre-processing, manual image selection and CNN-based image selection.
arXiv Detail & Related papers (2022-10-18T12:49:04Z) - STING: Self-attention based Time-series Imputation Networks using GAN [4.052758394413726]
We propose STING (Self-attention based Time-series Imputation Networks using GAN).
We take advantage of generative adversarial networks and bidirectional recurrent neural networks to learn latent representations of the time series.
Experimental results on three real-world datasets demonstrate that STING outperforms the existing state-of-the-art methods in terms of imputation accuracy.
arXiv Detail & Related papers (2022-09-22T06:06:56Z) - HyperTime: Implicit Neural Representation for Time Series [131.57172578210256]
Implicit neural representations (INRs) have recently emerged as a powerful tool that provides an accurate and resolution-independent encoding of data.
In this paper, we analyze the representation of time series using INRs, comparing different activation functions in terms of reconstruction accuracy and training convergence speed.
We propose a hypernetwork architecture that leverages INRs to learn a compressed latent representation of an entire time series dataset.
arXiv Detail & Related papers (2022-08-11T14:05:51Z) - Feature transforms for image data augmentation [74.12025519234153]
In image classification, many augmentation approaches utilize simple image manipulation algorithms.
In this work, we build ensembles on the data level by adding images generated by combining fourteen augmentation approaches.
Pretrained ResNet50 networks are finetuned on training sets that include images derived from each augmentation method.
arXiv Detail & Related papers (2022-01-24T14:12:29Z) - Tune It or Don't Use It: Benchmarking Data-Efficient Image
Classification [9.017660524497389]
We design a benchmark for data-efficient image classification consisting of six diverse datasets spanning various domains.
We re-evaluate the standard cross-entropy baseline and eight methods for data-efficient deep learning published between 2017 and 2021 at renowned venues.
Tuning the learning rate, weight decay, and batch size on a separate validation split yields a highly competitive baseline.
arXiv Detail & Related papers (2021-08-30T11:24:51Z) - Multi-dataset Pretraining: A Unified Model for Semantic Segmentation [97.61605021985062]
We propose a unified framework, termed as Multi-Dataset Pretraining, to take full advantage of the fragmented annotations of different datasets.
This is achieved by first pretraining the network via the proposed pixel-to-prototype contrastive loss over multiple datasets.
In order to better model the relationship among images and classes from different datasets, we extend the pixel level embeddings via cross dataset mixing.
arXiv Detail & Related papers (2021-06-08T06:13:11Z) - Visualising Deep Network's Time-Series Representations [93.73198973454944]
Despite the popularisation of machine learning models, more often than not they still operate as black boxes with no insight into what is happening inside the model.
In this paper, a method that addresses that issue is proposed, with a focus on visualising multi-dimensional time-series data.
Experiments on a high-frequency stock market dataset show that the method provides fast and discernible visualisations.
arXiv Detail & Related papers (2021-03-12T09:53:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.