TiVy: Time Series Visual Summary for Scalable Visualization
- URL: http://arxiv.org/abs/2507.18972v2
- Date: Mon, 28 Jul 2025 23:00:54 GMT
- Title: TiVy: Time Series Visual Summary for Scalable Visualization
- Authors: Gromit Yeuk-Yin Chan, Luis Gustavo Nonato, Themis Palpanas, Cláudio T. Silva, Juliana Freire
- Abstract summary: We propose TiVy, a new algorithm that summarizes time series using sequential patterns. We also present an interactive time series visualization that renders large-scale time series in real-time.
- Score: 32.33793043326047
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Visualizing multiple time series presents fundamental tradeoffs between scalability and visual clarity. Time series capture the behavior of many large-scale real-world processes, from stock market trends to urban activities. Users often gain insights by visualizing them as line charts, juxtaposing or superposing multiple time series to compare them and identify trends and patterns. However, existing representations struggle with scalability: when covering long time spans, they lead to visual clutter from too many small multiples or overlapping lines. We propose TiVy, a new algorithm that summarizes time series using sequential patterns. It transforms the series into a set of symbolic sequences based on subsequence visual similarity using Dynamic Time Warping (DTW), then constructs a disjoint grouping of similar subsequences based on frequent sequential patterns. The grouping result, a visual summary of the time series, provides uncluttered superposition with fewer small multiples. Unlike common clustering techniques, TiVy extracts similar subsequences (of varying lengths) that are aligned in time. We also present an interactive time series visualization that renders large-scale time series in real-time. Our experimental evaluation shows that our algorithm (1) extracts clear and accurate patterns when visualizing time series data, and (2) achieves a significant speed-up (1000X) over straightforward DTW clustering. We also demonstrate the efficiency of our approach for exploring hidden structures in massive time series data in two usage scenarios.
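To make the pipeline concrete, here is a minimal Python sketch of the two stages the abstract describes: windowed subsequences are mapped to symbols by DTW similarity, and frequent symbol patterns are then counted across series. This is an illustration under simplifying assumptions, not the authors' implementation: the fixed window size, the greedy first-fit prototype assignment, and the bigram counting (a stand-in for full sequential pattern mining over varying-length subsequences) are all choices made here for brevity.

```python
# Minimal sketch of a TiVy-style pipeline (illustration only, not the
# authors' code). Window size, DTW radius, greedy prototype assignment,
# and bigram mining are all simplifying assumptions made for brevity.
import numpy as np


def dtw(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(n*m) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])


def symbolize(series: np.ndarray, window: int = 32, radius: float = 5.0) -> list:
    """Stage 1: map non-overlapping windows to symbols. Each window is
    assigned to the first prototype within `radius` under DTW; otherwise
    it becomes a new prototype (a new symbol)."""
    prototypes, symbols = [], []
    for start in range(0, len(series) - window + 1, window):
        sub = series[start:start + window]
        for k, proto in enumerate(prototypes):
            if dtw(sub, proto) <= radius:
                symbols.append(k)
                break
        else:
            prototypes.append(sub)
            symbols.append(len(prototypes) - 1)
    return symbols


def frequent_bigrams(symbol_seqs: list, min_support: int = 2) -> dict:
    """Stage 2 (toy version): count symbol bigrams across all series and
    keep those meeting the support threshold -- a stand-in for frequent
    sequential pattern mining over varying-length subsequences."""
    counts = {}
    for seq in symbol_seqs:
        for pair in zip(seq, seq[1:]):
            counts[pair] = counts.get(pair, 0) + 1
    return {p: c for p, c in counts.items() if c >= min_support}


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(0, 8 * np.pi, 256)
    # Two noisy series sharing a repeated sine motif.
    series = [np.sin(t) + 0.1 * rng.standard_normal(t.size) for _ in range(2)]
    seqs = [symbolize(s) for s in series]
    print("symbol sequences:", seqs)
    print("frequent patterns:", frequent_bigrams(seqs))
```

Note that the quadratic `dtw` above is the straightforward baseline; the abstract's reported 1000X speed-up is measured against this kind of exhaustive DTW clustering.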
Related papers
- Time Series Representations for Classification Lie Hidden in Pretrained Vision Transformers [49.07665715422702]
We propose Time Vision Transformer (TiViT), a framework that converts time series into images. We show that TiViT achieves state-of-the-art performance on standard time series classification benchmarks. Our findings reveal a new direction for reusing vision representations in a non-visual domain.
arXiv Detail & Related papers (2025-06-10T09:54:51Z)
- $k$-Graph: A Graph Embedding for Interpretable Time Series Clustering [21.763409747687348]
$k$-Graph is an unsupervised method crafted to augment interpretability in time series clustering. Our experimental results reveal that $k$-Graph outperforms current state-of-the-art time series clustering algorithms in accuracy.
arXiv Detail & Related papers (2025-02-18T16:59:51Z)
- Language in the Flow of Time: Time-Series-Paired Texts Weaved into a Unified Temporal Narrative [65.84249211767921]
Texts as Time Series (TaTS) can be plugged into any existing numerical-only time series model. We show that TaTS can enhance predictive performance without modifying model architectures.
arXiv Detail & Related papers (2025-02-13T03:43:27Z)
- Leveraging 2D Information for Long-term Time Series Forecasting with Vanilla Transformers [55.475142494272724]
Time series prediction is crucial for understanding and forecasting complex dynamics in various domains.
We introduce GridTST, a model that combines the benefits of two approaches using innovative multi-directional attention.
The model consistently delivers state-of-the-art performance across various real-world datasets.
arXiv Detail & Related papers (2024-05-22T16:41:21Z)
- TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam as a simple but effective self-supervised pre-training framework for time series based on Siamese networks.
arXiv Detail & Related papers (2024-02-04T13:10:51Z)
- Series2Vec: Similarity-based Self-supervised Representation Learning for Time Series Classification [13.775977945756415]
We introduce a novel approach called Series2Vec for self-supervised representation learning.
Series2Vec is trained to predict the similarity between two series in both temporal and spectral domains.
We show that Series2Vec performs comparably with fully supervised training and offers high efficiency in datasets with limited-labeled data.
arXiv Detail & Related papers (2023-12-07T02:30:40Z)
- Time-to-Pattern: Information-Theoretic Unsupervised Learning for Scalable Time Series Summarization [7.294418916091012]
We introduce an approach to time series summarization called Time-to-Pattern (T2P).
T2P aims to find a set of diverse patterns that together encode the most salient information, following the notion of minimum description length.
Our synthetic and real-world experiments reveal that T2P discovers informative patterns, even in noisy and complex settings.
arXiv Detail & Related papers (2023-08-26T01:15:32Z)
- Expressing Multivariate Time Series as Graphs with Time Series Attention Transformer [14.172091921813065]
We propose the Time Series Attention Transformer (TSAT) for multivariate time series representation learning.
Using TSAT, we represent both temporal information and inter-dependencies of time series in terms of edge-enhanced dynamic graphs.
We show that TSAT clearly outperforms six state-of-the-art baseline methods across various forecasting horizons.
arXiv Detail & Related papers (2022-08-19T12:25:56Z)
- HyperTime: Implicit Neural Representation for Time Series [131.57172578210256]
Implicit neural representations (INRs) have recently emerged as a powerful tool that provides an accurate and resolution-independent encoding of data.
In this paper, we analyze the representation of time series using INRs, comparing different activation functions in terms of reconstruction accuracy and training convergence speed.
We propose a hypernetwork architecture that leverages INRs to learn a compressed latent representation of an entire time series dataset.
arXiv Detail & Related papers (2022-08-11T14:05:51Z)
- Novel Features for Time Series Analysis: A Complex Networks Approach [62.997667081978825]
Time series data are ubiquitous in domains such as climate, economics, and health care.
A recent conceptual approach relies on mapping time series to complex networks.
Network analysis can be used to characterize different types of time series.
arXiv Detail & Related papers (2021-10-11T13:46:28Z)