Towards Unifying Diffusion Models for Probabilistic Spatio-Temporal
Graph Learning
- URL: http://arxiv.org/abs/2310.17360v1
- Date: Thu, 26 Oct 2023 12:48:43 GMT
- Title: Towards Unifying Diffusion Models for Probabilistic Spatio-Temporal
Graph Learning
- Authors: Junfeng Hu, Xu Liu, Zhencheng Fan, Yuxuan Liang, Roger Zimmermann
- Abstract summary: Existing approaches tackle different learning tasks independently, tailoring their models to unique task characteristics and falling short of modeling intrinsic uncertainties in the spatio-temporal data.
We introduce Unified Spatio-Temporal Diffusion Models (USTD) to address the tasks uniformly within an uncertainty-aware diffusion framework.
USTD is holistically designed, comprising a shared spatio-temporal encoder and task-specific attention-based denoising networks.
- Score: 28.50648620744963
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spatio-temporal graph learning is a fundamental problem in the Web of Things
era, which enables a plethora of Web applications such as smart cities, human
mobility and climate analysis. Existing approaches tackle different learning
tasks independently, tailoring their models to unique task characteristics.
These methods, however, fall short of modeling intrinsic uncertainties in the
spatio-temporal data. Meanwhile, their specialized designs limit their
universality as general spatio-temporal learning solutions. In this paper, we
propose to model the learning tasks in a unified perspective, viewing them as
predictions based on conditional information with shared spatio-temporal
patterns. Based on this proposal, we introduce Unified Spatio-Temporal
Diffusion Models (USTD) to address the tasks uniformly within the
uncertainty-aware diffusion framework. USTD is holistically designed,
comprising a shared spatio-temporal encoder and attention-based denoising
networks that are task-specific. The shared encoder, optimized by a
pre-training strategy, effectively captures conditional spatio-temporal
patterns. The denoising networks, utilizing both cross- and self-attention,
integrate conditional dependencies and generate predictions. Opting for
forecasting and kriging as downstream tasks, we design Spatial Gated Attention (SGA)
and Temporal Gated Attention (TGA) for each task, with different emphases on
the spatial and temporal dimensions, respectively. By combining the advantages
of deterministic encoders and probabilistic diffusion models, USTD achieves
state-of-the-art performances compared to deterministic and probabilistic
baselines in both tasks, while also providing valuable uncertainty estimates.
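The abstract describes a conditional diffusion framework: a shared encoder captures spatio-temporal context, and a denoising network predicts noise at each reverse step. As a hedged illustration only, the sketch below implements a generic DDPM-style forward noising and one reverse denoising step; the linear noise schedule and the oracle noise predictor are assumptions for clarity, not USTD's actual attention-based denoising networks.

```python
import numpy as np

# Illustrative sketch (NOT the paper's model): closed-form forward
# diffusion q(x_t | x_0) and one DDPM reverse-mean step. In USTD the
# predicted noise would come from a task-specific denoiser conditioned
# on the shared encoder's spatio-temporal representations.

rng = np.random.default_rng(0)
T = 50                                    # number of diffusion steps
betas = np.linspace(1e-4, 0.05, T)        # assumed linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def forward_noise(x0, t):
    """Sample x_t ~ q(x_t | x_0) in closed form; also return the noise."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

def reverse_step(xt, t, eps_hat):
    """One reverse step: posterior mean given predicted noise eps_hat."""
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    x_prev = (xt - coef * eps_hat) / np.sqrt(alphas[t])
    if t > 0:                             # add noise except at the last step
        x_prev += np.sqrt(betas[t]) * rng.standard_normal(xt.shape)
    return x_prev

x0 = rng.standard_normal((4, 3))          # toy signal: 4 nodes x 3 time steps
xt, eps = forward_noise(x0, T - 1)
# With an oracle denoiser (eps_hat = eps), iterating reverse_step from
# pure noise would recover samples from the data distribution.
x_prev = reverse_step(xt, T - 1, eps)
print(x_prev.shape)
```

Running many such reverse chains yields a sample distribution rather than a point estimate, which is where the uncertainty estimates mentioned above come from.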
Related papers
- A Survey on Diffusion Models for Time Series and Spatio-Temporal Data [92.1255811066468]
We review the use of diffusion models in time series and spatio-temporal data, categorizing them by model type, task type, data modality, and practical application domain.
We categorize diffusion models into unconditioned and conditioned types, and discuss time series and spatio-temporal data separately.
Our survey covers their application extensively in various fields including healthcare, recommendation, climate, energy, audio, and transportation.
arXiv Detail & Related papers (2024-04-29T17:19:40Z) - Spatial-temporal Memories Enhanced Graph Autoencoder for Anomaly Detection in Dynamic Graphs [52.956235109354175]
Anomaly detection in dynamic graphs presents a significant challenge due to the temporal evolution of graph structures and attributes.
We introduce a novel Spatial-Temporal memories-enhanced graph autoencoder (STRIPE)
STRIPE has demonstrated a superior capability to discern anomalies by effectively leveraging the distinct spatial and temporal dynamics of dynamic graphs.
arXiv Detail & Related papers (2024-03-14T02:26:10Z) - UniST: A Prompt-Empowered Universal Model for Urban Spatio-Temporal Prediction [26.69233687863233]
Urban spatio-temporal prediction is crucial for informed decision-making, such as traffic management, resource optimization, and emergency response.
We introduce UniST, a universal model designed for general urban spatio-temporal prediction across a wide range of scenarios, empowered by large language models.
arXiv Detail & Related papers (2024-02-19T05:04:11Z) - A Temporally Disentangled Contrastive Diffusion Model for Spatiotemporal Imputation [35.46631415365955]
We introduce a conditional diffusion framework called C$2$TSD, which incorporates disentangled temporal (trend and seasonality) representations as conditional information.
Our experiments on three real-world datasets demonstrate the superior performance of our approach compared to a number of state-of-the-art baselines.
arXiv Detail & Related papers (2024-02-18T11:59:04Z) - GATGPT: A Pre-trained Large Language Model with Graph Attention Network
for Spatiotemporal Imputation [19.371155159744934]
In real-world settings, such data often contain missing elements due to issues like sensor malfunctions and data transmission errors.
The objective of spatiotemporal imputation is to estimate these missing values by understanding the inherent spatial and temporal relationships in the observed time series.
Traditionally, spatiotemporal imputation has relied on intricate, task-specific architectures, which suffer from limited applicability and high computational complexity.
In contrast, our approach integrates pre-trained large language models (LLMs) into spatiotemporal imputation, introducing a groundbreaking framework, GATGPT.
arXiv Detail & Related papers (2023-11-24T08:15:11Z) - GPT-ST: Generative Pre-Training of Spatio-Temporal Graph Neural Networks [24.323017830938394]
This work aims to address challenges by introducing a pre-training framework that seamlessly integrates with baselines and enhances their performance.
The framework is built upon two key designs: (i) we propose a spatio-temporal mask autoencoder as a pre-training model for learning spatio-temporal dependencies.
These modules are specifically designed to capture customized spatio-temporal representations and intra- and inter-cluster semantic relationships.
arXiv Detail & Related papers (2023-11-07T02:36:24Z) - OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive
Learning [67.07363529640784]
We propose OpenSTL, which categorizes prevalent approaches into recurrent-based and recurrent-free models.
We conduct standard evaluations on datasets across various domains, including synthetic moving object trajectories, human motion, driving scenes, traffic flow, and weather forecasting.
We find that recurrent-free models achieve a better balance between efficiency and performance than recurrent models.
arXiv Detail & Related papers (2023-06-20T03:02:14Z) - Taming Local Effects in Graph-based Spatiotemporal Forecasting [28.30604130617646]
Spatiotemporal graph neural networks have been shown to be effective in time series forecasting applications.
This paper aims to understand the interplay between globality and locality in graph-based spatiotemporal forecasting.
We propose a methodological framework to rationalize the practice of including trainable node embeddings in such architectures.
arXiv Detail & Related papers (2023-02-08T14:18:56Z) - TempSAL -- Uncovering Temporal Information for Deep Saliency Prediction [56.22339016797785]
We introduce a novel saliency prediction model that learns to output saliency maps in sequential time intervals.
Our approach locally modulates the saliency predictions by combining the learned temporal maps.
Our code will be publicly available on GitHub.
arXiv Detail & Related papers (2023-01-05T22:10:16Z) - Multivariate Time Series Forecasting with Dynamic Graph Neural ODEs [65.18780403244178]
We propose a continuous model to forecast Multivariate Time series with dynamic Graph neural Ordinary Differential Equations (MTGODE)
Specifically, we first abstract multivariate time series into dynamic graphs with time-evolving node features and unknown graph structures.
Then, we design and solve a neural ODE to complement missing graph topologies and unify both spatial and temporal message passing.
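The MTGODE summary above describes unifying spatial and temporal message passing by solving a neural ODE over node states. As a hedged sketch under simplifying assumptions, the snippet below integrates a toy graph-propagation ODE with explicit Euler steps; the fixed adjacency matrix and the dynamics function are stand-ins for illustration, not the paper's learned components.

```python
import numpy as np

# Illustrative sketch (NOT the paper's network): node states evolve
# under dh/dt = f(h), where f mixes neighbor information through a
# row-normalized adjacency matrix. A real implementation would learn
# the graph structure and use an adaptive ODE solver.

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
A = A / A.sum(axis=1, keepdims=True)      # row-normalized propagation

def f(h):
    # graph convolution + nonlinearity; the -h term keeps dynamics stable
    return np.tanh(A @ h) - h

def integrate(h0, t1=1.0, steps=20):
    """Explicit Euler integration of dh/dt = f(h) from t=0 to t=t1."""
    h, dt = h0, t1 / steps
    for _ in range(steps):
        h = h + dt * f(h)
    return h

h0 = np.array([[1.0], [0.0], [-1.0]])     # initial node features
h1 = integrate(h0)
print(h1.shape)
```

Because integration time is continuous, the same mechanism can be evaluated at arbitrary horizons, which is what lets such models unify spatial and temporal message passing.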
arXiv Detail & Related papers (2022-02-17T02:17:31Z) - Predicting Temporal Sets with Deep Neural Networks [50.53727580527024]
We propose an integrated solution based on the deep neural networks for temporal sets prediction.
A unique perspective is to learn element relationship by constructing set-level co-occurrence graph.
We design an attention-based module to adaptively learn the temporal dependency of elements and sets.
arXiv Detail & Related papers (2020-06-20T03:29:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.