Latent Structural Similarity Networks for Unsupervised Discovery in Multivariate Time Series
- URL: http://arxiv.org/abs/2601.18803v1
- Date: Thu, 15 Jan 2026 03:05:17 GMT
- Title: Latent Structural Similarity Networks for Unsupervised Discovery in Multivariate Time Series
- Authors: Olusegun Owoeye
- Abstract summary: The method learns window-level sequence representations using an unsupervised sequence-to-sequence autoencoder. It induces a sparse similarity network by thresholding a latent-space similarity measure. This network is intended as an analyzable abstraction that compresses the pairwise search space.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a task-agnostic discovery layer for multivariate time series that constructs a relational hypothesis graph over entities without assuming linearity, stationarity, or a downstream objective. The method learns window-level sequence representations using an unsupervised sequence-to-sequence autoencoder, aggregates these representations into entity-level embeddings, and induces a sparse similarity network by thresholding a latent-space similarity measure. This network is intended as an analyzable abstraction that compresses the pairwise search space and exposes candidate relationships for further investigation, rather than as a model optimized for prediction, trading, or any decision rule. The framework is demonstrated on a challenging real-world dataset of hourly cryptocurrency returns, illustrating how latent similarity induces coherent network structure; a classical econometric relation is also reported as an external diagnostic lens to contextualize discovered edges.
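The final stage of the pipeline described in the abstract — entity-level embeddings thresholded into a sparse similarity network — can be sketched as follows. The cosine measure, the threshold value, and the toy embeddings are illustrative assumptions for this sketch; the abstract does not fix these choices, and the window-level autoencoder that would produce the embeddings is omitted.

```python
import numpy as np

def similarity_network(entity_embeddings, threshold=0.8):
    """Induce a sparse similarity graph from entity-level embeddings.

    entity_embeddings: (n_entities, d) array, e.g. pooled window
    embeddings from a sequence-to-sequence autoencoder (assumed).
    threshold: cosine-similarity cutoff (illustrative value).
    Returns a boolean adjacency matrix with no self-loops.
    """
    X = np.asarray(entity_embeddings, dtype=float)
    # Normalize rows so the Gram matrix equals cosine similarity.
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    sim = X @ X.T
    adj = sim >= threshold
    np.fill_diagonal(adj, False)  # the graph has no self-edges
    return adj

# Toy usage: three entities, two of which share latent structure.
emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
A = similarity_network(emb, threshold=0.8)
```

Thresholding keeps the network sparse, which is what allows it to compress the pairwise search space rather than reproduce it.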
Related papers
- On the Exact Algorithmic Extraction of Finite Tesselations Through Prime Extraction of Minimal Representative Forms [0.764671395172401]
This paper employs a hierarchical algorithm that discovers exact tessellations in finite planar grids. We evaluate scalability on grid sizes from 2x2 to 32x32, showing that overlap detection on simple repeating tiles completes in under 1 ms. The algorithm provides deterministic behavior for exact, axis-aligned, rectangular tessellations.
arXiv Detail & Related papers (2026-03-01T04:20:06Z) - Adjacency Spectral Embeddings of Correlation Networks [2.538209532048867]
In many applications, weighted networks are constructed based on time series data. We show that when time series are expressible in terms of a small number of Fourier basis elements, correlation networks correspond to latent space networks with dependent edge noise.
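The construction this summary refers to — a network whose edges come from pairwise correlations of time series — can be illustrated with a minimal sketch. The threshold and the toy signals are assumptions for the example, not taken from the cited paper.

```python
import numpy as np

def correlation_network(series, threshold=0.5):
    """Build a correlation network from multivariate time series.

    series: (n_nodes, T) array, one time series per node.
    Edges connect node pairs whose Pearson correlation exceeds
    the (illustrative) threshold in absolute value.
    """
    corr = np.corrcoef(series)  # rows are variables by default
    adj = np.abs(corr) >= threshold
    np.fill_diagonal(adj, False)
    return adj

# Toy example: two series built from the same Fourier basis element,
# plus one independent noise series.
t = np.linspace(0, 2 * np.pi, 200)
rng = np.random.default_rng(0)
series = np.stack([
    np.sin(3 * t),
    np.sin(3 * t) + 0.1 * rng.standard_normal(t.size),
    rng.standard_normal(t.size),
])
A = correlation_network(series, threshold=0.5)
```

The two sinusoidal nodes end up connected while the noise node stays isolated, which is the kind of latent structure the cited paper analyzes spectrally.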
arXiv Detail & Related papers (2026-02-24T16:13:33Z) - Topology Identification and Inference over Graphs [61.06365536861156]
Topology identification and inference of processes evolving over graphs arise in timely applications involving brain, transportation, financial, and power networks, as well as social and information networks. This chapter provides an overview of graph topology identification and statistical inference methods for multidimensional data.
arXiv Detail & Related papers (2025-12-11T00:47:09Z) - Graph Neural Network and Transformer Integration for Unsupervised System Anomaly Discovery [14.982273490507986]
This study proposes an unsupervised anomaly detection method for distributed backend service systems. It addresses practical challenges such as complex structural dependencies, diverse behavioral evolution, and the absence of labeled data. Results show that the proposed method outperforms existing models on several key metrics.
arXiv Detail & Related papers (2025-08-13T00:35:58Z) - Learning Representations of Event Time Series with Sparse Autoencoders for Anomaly Detection, Similarity Search, and Unsupervised Classification [0.005439329219803859]
Event time series are sequences of discrete events occurring at irregular time intervals. They are common in domains such as high-energy astrophysics, computational social science, cybersecurity, finance, healthcare, neuroscience, and seismology. We propose novel two- and three-dimensional tensor representations for event time series, coupled with sparse autoencoders that learn physically meaningful latent representations.
arXiv Detail & Related papers (2025-07-15T18:01:03Z) - Robust Detection of Lead-Lag Relationships in Lagged Multi-Factor Models [61.10851158749843]
Key insights can be obtained by discovering lead-lag relationships inherent in the data.
We develop a clustering-driven methodology for robust detection of lead-lag relationships in lagged multi-factor models.
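A minimal baseline for lead-lag detection — not the clustering methodology of the cited paper — scans for the lag that maximizes the cross-correlation between two series. The toy signals and the lag window are assumptions for the example.

```python
import numpy as np

def estimate_lag(x, y, max_lag=10):
    """Estimate the lag at which x leads y via a cross-correlation scan.

    Returns the lag l maximizing corr(x[t], y[t + l]); a positive
    result means x leads y by l steps. This is a simple baseline;
    the cited clustering-driven approach is more robust.
    """
    best_lag, best_corr = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[: len(x) - lag], y[lag:]
        else:
            a, b = x[-lag:], y[: len(y) + lag]
        c = np.corrcoef(a, b)[0, 1]
        if c > best_corr:
            best_lag, best_corr = lag, c
    return best_lag

# Toy example: y reproduces x with a 3-step delay.
t = np.arange(300)
x = np.sin(0.2 * t)
y = np.sin(0.2 * (t - 3))
lag = estimate_lag(x, y)
```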
arXiv Detail & Related papers (2023-05-11T10:30:35Z) - Deep Equilibrium Assisted Block Sparse Coding of Inter-dependent Signals: Application to Hyperspectral Imaging [71.57324258813675]
A dataset of inter-dependent signals is defined as a matrix whose columns demonstrate strong dependencies.
A neural network is employed to act as structure prior and reveal the underlying signal interdependencies.
Deep unrolling and Deep equilibrium based algorithms are developed, forming highly interpretable and concise deep-learning-based architectures.
arXiv Detail & Related papers (2022-03-29T21:00:39Z) - NOTMAD: Estimating Bayesian Networks with Sample-Specific Structures and Parameters [70.55488722439239]
We present NOTMAD, which learns to mix archetypal networks according to sample context.
We demonstrate the utility of NOTMAD and sample-specific network inference through analysis and experiments, including patient-specific gene expression networks.
arXiv Detail & Related papers (2021-11-01T17:17:34Z) - Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z) - Layer-stacked Attention for Heterogeneous Network Embedding [0.0]
Layer-stacked ATTention Embedding (LATTE) is an architecture that automatically decomposes higher-order meta relations at each layer.
LATTE offers a more interpretable aggregation scheme for nodes of different types at different neighborhood ranges.
In both transductive and inductive node classification tasks, LATTE can achieve state-of-the-art performance compared to existing approaches.
arXiv Detail & Related papers (2020-09-17T05:13:41Z) - Moment-Matching Graph-Networks for Causal Inference [0.0]
This note explores a fully unsupervised deep-learning framework for simulating non-linear structural equation models from observational training data.
The main contribution of this note is an architecture for applying moment-matching loss functions to the edges of a causal Bayesian graph, resulting in a generative conditional-moment-matching graph-neural-network.
arXiv Detail & Related papers (2020-07-20T22:07:43Z) - Supporting Optimal Phase Space Reconstructions Using Neural Network Architecture for Time Series Modeling [68.8204255655161]
We propose an artificial neural network with a mechanism to implicitly learn phase-space properties.
Our approach is as competitive as, or better than, most state-of-the-art strategies.
arXiv Detail & Related papers (2020-06-19T21:04:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.