Unsupervised Visual Time-Series Representation Learning and Clustering
- URL: http://arxiv.org/abs/2111.10309v1
- Date: Fri, 19 Nov 2021 16:44:33 GMT
- Title: Unsupervised Visual Time-Series Representation Learning and Clustering
- Authors: Gaurangi Anand and Richi Nayak
- Abstract summary: Time-series data is generated ubiquitously from Internet-of-Things infrastructure, connected and wearable devices, remote sensing, autonomous driving research, and audio-video communications.
This paper investigates the potential of unsupervised representation learning for these time-series.
- Score: 2.610470075814367
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Time-series data is generated ubiquitously and in enormous volumes from
Internet-of-Things (IoT) infrastructure, connected and wearable devices, remote
sensing, autonomous driving research, and audio-video communications. This
paper investigates the potential of unsupervised representation learning for
these time-series. We use a novel data transformation along with a novel
unsupervised learning regime to transfer learning to time-series from other
domains, where extensive models have been trained on very large labelled
datasets.
large labelled datasets. We conduct extensive experiments to demonstrate the
potential of the proposed approach through time-series clustering.
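The abstract does not specify the data transformation itself. One common way to render a time series as an image suitable for models pretrained on large labelled vision datasets is a Gramian Angular Field; the sketch below is an illustration of that general idea, not necessarily the transformation used in the paper.

```python
import numpy as np

def gramian_angular_field(x: np.ndarray) -> np.ndarray:
    """Encode a 1-D time series as a Gramian Angular Summation Field image."""
    # Rescale to [-1, 1] so arccos is defined.
    x_min, x_max = x.min(), x.max()
    x_scaled = 2.0 * (x - x_min) / (x_max - x_min) - 1.0
    # Clip for numerical safety, then map values to polar angles.
    phi = np.arccos(np.clip(x_scaled, -1.0, 1.0))
    # GASF[i, j] = cos(phi_i + phi_j): a symmetric N x N "image".
    return np.cos(phi[:, None] + phi[None, :])

# Example: a noisy sine of length 100 becomes a 100 x 100 image.
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 4 * np.pi, 100)) + 0.1 * rng.standard_normal(100)
image = gramian_angular_field(series)
print(image.shape)
```

Such images could then be embedded by a pretrained CNN and the embeddings clustered (e.g. with k-means); that pipeline is an assumption for illustration, not a description of the paper's exact learning regime.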
Related papers
- Tackling Data Heterogeneity in Federated Time Series Forecasting [61.021413959988216]
Time series forecasting plays a critical role in various real-world applications, including energy consumption prediction, disease transmission monitoring, and weather forecasting.
Most existing methods rely on a centralized training paradigm, where large amounts of data are collected from distributed devices to a central cloud server.
We propose a novel framework, Fed-TREND, to address data heterogeneity by generating informative synthetic data as auxiliary knowledge carriers.
arXiv Detail & Related papers (2024-11-24T04:56:45Z)
- On the Resurgence of Recurrent Models for Long Sequences -- Survey and Research Opportunities in the Transformer Era [59.279784235147254]
This survey is aimed at providing an overview of these trends framed under the unifying umbrella of Recurrence.
It emphasizes novel research opportunities that become prominent when abandoning the idea of processing long sequences.
arXiv Detail & Related papers (2024-02-12T23:55:55Z)
- Universal Time-Series Representation Learning: A Survey [14.340399848964662]
Time-series data exists in every corner of real-world systems and services.
Deep learning has demonstrated remarkable performance in extracting hidden patterns and features from time-series data.
arXiv Detail & Related papers (2024-01-08T08:00:04Z)
- Multi-Granularity Framework for Unsupervised Representation Learning of Time Series [1.003058966910087]
This paper proposes an unsupervised framework to realize multi-granularity representation learning for time series.
Specifically, we employed a cross-granularity transformer to develop an association between fine- and coarse-grained representations.
In addition, we introduced a retrieval task as an unsupervised training task to learn the multi-granularity representation of time series.
arXiv Detail & Related papers (2023-12-12T13:25:32Z)
- Unsupervised Representation Learning for Time Series: A Review [20.00853543048447]
Unsupervised representation learning approaches aim to learn discriminative feature representations from unlabeled data, without the requirement of annotating every sample.
We conduct a literature review of existing rapidly evolving unsupervised representation learning approaches for time series.
We empirically evaluate state-of-the-art approaches, especially the rapidly evolving contrastive learning methods, on 9 diverse real-world datasets.
arXiv Detail & Related papers (2023-08-03T07:28:06Z)
- HyperTime: Implicit Neural Representation for Time Series [131.57172578210256]
Implicit neural representations (INRs) have recently emerged as a powerful tool that provides an accurate and resolution-independent encoding of data.
In this paper, we analyze the representation of time series using INRs, comparing different activation functions in terms of reconstruction accuracy and training convergence speed.
We propose a hypernetwork architecture that leverages INRs to learn a compressed latent representation of an entire time series dataset.
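As a minimal illustration of the INR idea (not HyperTime's hypernetwork architecture), the NumPy sketch below fits a small sine-activated MLP, in the spirit of SIREN, that maps a time coordinate t to the series value; the network sizes and learning rate are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target series: a sine wave sampled on [-1, 1].
N = 200
t = np.linspace(-1.0, 1.0, N).reshape(1, N)   # coordinates, shape (1, N)
y = np.sin(3.0 * np.pi * t)                   # values to encode, shape (1, N)

# One hidden layer; a large first-layer scale gives the sine units high frequency.
H, omega = 64, 10.0
W1 = rng.standard_normal((H, 1)) * omega
b1 = np.zeros((H, 1))
W2 = rng.standard_normal((1, H)) * 0.1
b2 = 0.0

lr = 0.02
losses = []
for step in range(5000):
    z = W1 @ t + b1            # (H, N)
    h = np.sin(z)              # sine activation
    y_hat = W2 @ h + b2        # reconstructed series, (1, N)
    e = y_hat - y
    losses.append(float(np.mean(e ** 2)))
    # Backpropagate the mean-squared reconstruction error.
    gW2 = e @ h.T / N
    gb2 = e.mean()
    dz = (W2.T @ e) * np.cos(z)
    gW1 = dz @ t.T / N
    gb1 = dz.mean(axis=1, keepdims=True)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Because the network maps a continuous coordinate to a value, the fitted INR can be queried at any resolution, which is the resolution-independence property the abstract refers to.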
arXiv Detail & Related papers (2022-08-11T14:05:51Z)
- CHALLENGER: Training with Attribution Maps [63.736435657236505]
We show that utilizing attribution maps for training neural networks can improve regularization of models and thus increase performance.
In particular, we show that our generic domain-independent approach yields state-of-the-art results in vision, natural language processing and on time series tasks.
arXiv Detail & Related papers (2022-05-30T13:34:46Z)
- PIETS: Parallelised Irregularity Encoders for Forecasting with Heterogeneous Time-Series [5.911865723926626]
Heterogeneity and irregularity of multi-source data sets present a significant challenge to time-series analysis.
In this work, we design a novel architecture, PIETS, to model heterogeneous time-series.
We show that PIETS is able to effectively model heterogeneous temporal data and outperforms other state-of-the-art approaches in the prediction task.
arXiv Detail & Related papers (2021-09-30T20:01:19Z)
- Multimodal Clustering Networks for Self-supervised Learning from Unlabeled Videos [69.61522804742427]
This paper proposes a self-supervised training framework that learns a common multimodal embedding space.
We extend the concept of instance-level contrastive learning with a multimodal clustering step to capture semantic similarities across modalities.
The resulting embedding space enables retrieval of samples across all modalities, even from unseen datasets and different domains.
arXiv Detail & Related papers (2021-04-26T15:55:01Z)
- Interpretable Deep Representation Learning from Temporal Multi-view Data [4.2179426073904995]
We propose a generative model based on variational autoencoder and a recurrent neural network to infer the latent dynamics for multi-view temporal data.
We invoke our proposed model for analyzing three datasets on which we demonstrate the effectiveness and the interpretability of the model.
arXiv Detail & Related papers (2020-05-11T15:59:06Z)
- Laplacian Denoising Autoencoder [114.21219514831343]
We propose to learn data representations with a novel type of denoising autoencoder.
The noisy input data is generated by corrupting latent clean data in the gradient domain.
Experiments on several visual benchmarks demonstrate that better representations can be learned with the proposed approach.
arXiv Detail & Related papers (2020-03-30T16:52:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.