Intensity Profile Projection: A Framework for Continuous-Time
Representation Learning for Dynamic Networks
- URL: http://arxiv.org/abs/2306.06155v3
- Date: Wed, 17 Jan 2024 17:13:59 GMT
- Title: Intensity Profile Projection: A Framework for Continuous-Time
Representation Learning for Dynamic Networks
- Authors: Alexander Modell, Ian Gallagher, Emma Ceccherini, Nick Whiteley and
Patrick Rubin-Delanchy
- Abstract summary: We present a representation learning framework, Intensity Profile Projection, for continuous-time dynamic network data.
The framework consists of three stages: estimating pairwise intensity functions; learning a projection which minimises a notion of intensity reconstruction error; and constructing evolving node representations via the learned projection.
Moreover, we develop estimation theory providing tight control on the error of any estimated trajectory, indicating that the representations could even be used in quite noise-sensitive follow-on analyses.
- Score: 50.2033914945157
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a new representation learning framework, Intensity Profile
Projection, for continuous-time dynamic network data. Given triples $(i,j,t)$,
each representing a time-stamped ($t$) interaction between two entities
($i,j$), our procedure returns a continuous-time trajectory for each node,
representing its behaviour over time. The framework consists of three stages:
estimating pairwise intensity functions, e.g. via kernel smoothing; learning a
projection which minimises a notion of intensity reconstruction error; and
constructing evolving node representations via the learned projection. The
trajectories satisfy two properties, known as structural and temporal
coherence, which we see as fundamental for reliable inference. Moreover, we
develop estimation theory providing tight control on the error of any estimated
trajectory, indicating that the representations could even be used in quite
noise-sensitive follow-on analyses. The theory also elucidates the role of
smoothing as a bias-variance trade-off, and shows how we can reduce the level
of smoothing as the signal-to-noise ratio increases on account of the algorithm
'borrowing strength' across the network.
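The three-stage pipeline described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the kernel choice (Gaussian), the bandwidth, and the use of an SVD of the time-unfolded intensity array as the reconstruction-error-minimising projection are all illustrative assumptions.

```python
import numpy as np

def ipp_trajectories(events, n_nodes, t_grid, bandwidth=0.1, d=2):
    """Hedged sketch of the three stages of Intensity Profile Projection.

    events:  list of (i, j, t) interaction triples.
    t_grid:  1-D array of evaluation times.
    Returns an array of shape (len(t_grid), n_nodes, d): one d-dimensional
    trajectory point per node per time on the grid.
    """
    # Stage 1: estimate pairwise intensity functions by Gaussian kernel
    # smoothing of the event times; lam[k] is the n x n intensity estimate
    # at time t_grid[k].
    lam = np.zeros((len(t_grid), n_nodes, n_nodes))
    for i, j, t in events:
        w = np.exp(-0.5 * ((t_grid - t) / bandwidth) ** 2)
        w /= bandwidth * np.sqrt(2.0 * np.pi)
        lam[:, i, j] += w
        lam[:, j, i] += w  # assumes undirected interactions

    # Stage 2: learn a rank-d projection minimising a squared intensity
    # reconstruction error; here, via SVD of the time-unfolded array.
    unfolded = lam.reshape(-1, n_nodes)           # stack time slices row-wise
    _, _, vt = np.linalg.svd(unfolded, full_matrices=False)
    basis = vt[:d].T                              # (n_nodes, d) projection basis

    # Stage 3: project each time slice of the intensity estimate to obtain
    # continuously evolving node representations.
    return lam @ basis                            # (len(t_grid), n_nodes, d)
```

The bandwidth parameter makes the bias-variance trade-off from the abstract concrete: a smaller bandwidth tracks fast changes (low bias, high variance), and can be reduced as the signal-to-noise ratio grows.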
Related papers
- Improving Network Interpretability via Explanation Consistency Evaluation [56.14036428778861]
We propose a framework that acquires more explainable activation heatmaps and simultaneously increases model performance.
Specifically, our framework introduces a new metric, i.e., explanation consistency, to reweight the training samples adaptively in model learning.
Our framework then promotes the model learning by paying closer attention to those training samples with a high difference in explanations.
arXiv Detail & Related papers (2024-08-08T17:20:08Z) - CTRL: Continuous-Time Representation Learning on Temporal Heterogeneous Information Network [32.42051167404171]
We propose a Continuous-Time Representation Learning model on temporal HINs.
We train the model with a future event (a subgraph) prediction task to capture the evolution of the high-order network structure.
The results demonstrate that our model significantly boosts performance and outperforms various state-of-the-art approaches.
arXiv Detail & Related papers (2024-05-11T03:39:22Z) - Distillation Enhanced Time Series Forecasting Network with Momentum Contrastive Learning [7.4106801792345705]
We propose DE-TSMCL, an innovative distillation enhanced framework for long sequence time series forecasting.
Specifically, we design a learnable data augmentation mechanism which adaptively learns whether to mask a timestamp.
Then, we propose a contrastive learning task with momentum update to explore inter-sample and intra-temporal correlations of time series.
By developing model loss from multiple tasks, we can learn effective representations for downstream forecasting task.
arXiv Detail & Related papers (2024-01-31T12:52:10Z) - Understanding and Constructing Latent Modality Structures in Multi-modal
Representation Learning [53.68371566336254]
We argue that the key to better performance lies in meaningful latent modality structures instead of perfect modality alignment.
Specifically, we design 1) a deep feature separation loss for intra-modality regularization; 2) a Brownian-bridge loss for inter-modality regularization; and 3) a geometric consistency loss for both intra- and inter-modality regularization.
arXiv Detail & Related papers (2023-03-10T14:38:49Z) - Self-Supervised Temporal Graph learning with Temporal and Structural Intensity Alignment [53.72873672076391]
Temporal graph learning aims to generate high-quality representations for graph-based tasks with dynamic information.
We propose a self-supervised method called S2T for temporal graph learning, which extracts both temporal and structural information.
S2T achieves up to 10.13% performance improvement over state-of-the-art competitors on several datasets.
arXiv Detail & Related papers (2023-02-15T06:36:04Z) - DyTed: Disentangled Representation Learning for Discrete-time Dynamic
Graph [59.583555454424]
We propose a novel disenTangled representation learning framework for discrete-time Dynamic graphs, namely DyTed.
We specially design a temporal-clips contrastive learning task together with a structure contrastive learning to effectively identify the time-invariant and time-varying representations respectively.
arXiv Detail & Related papers (2022-10-19T14:34:12Z) - Representation Learning via Global Temporal Alignment and
Cycle-Consistency [20.715813546383178]
We introduce a weakly supervised method for representation learning based on aligning temporal sequences.
We report significant performance increases over previous methods.
In addition, we report two applications of our temporal alignment framework, namely 3D pose reconstruction and fine-grained audio/visual retrieval.
arXiv Detail & Related papers (2021-05-11T17:34:04Z) - Multiple Object Tracking with Correlation Learning [16.959379957515974]
We propose to exploit the local correlation module to model the topological relationship between targets and their surrounding environment.
Specifically, we establish dense correspondences of each spatial location and its context, and explicitly constrain the correlation volumes through self-supervised learning.
Our approach demonstrates the effectiveness of correlation learning with the superior performance and obtains state-of-the-art MOTA of 76.5% and IDF1 of 73.6% on MOT17.
arXiv Detail & Related papers (2021-04-08T06:48:02Z) - Consistency Guided Scene Flow Estimation [159.24395181068218]
CGSF is a self-supervised framework for the joint reconstruction of 3D scene structure and motion from stereo video.
We show that the proposed model can reliably predict disparity and scene flow in challenging imagery.
It achieves better generalization than the state-of-the-art, and adapts quickly and robustly to unseen domains.
arXiv Detail & Related papers (2020-06-19T17:28:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences of its use.