Dynamic Virtual Graph Significance Networks for Predicting Influenza
- URL: http://arxiv.org/abs/2102.08122v1
- Date: Tue, 16 Feb 2021 12:38:23 GMT
- Title: Dynamic Virtual Graph Significance Networks for Predicting Influenza
- Authors: Jie Zhang, Pengfei Zhou, Hongyan Wu
- Abstract summary: We develop a novel method, Dynamic Virtual Graph Significance Networks (DVGSN), which can dynamically learn from similar "infection situations" at historical timepoints.
Experiments on real-world influenza data demonstrate that DVGSN significantly outperforms the current state-of-the-art methods.
- Score: 6.144775057306887
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph-structured data and the algorithms that operate on them have attracted significant
attention in many fields, such as influenza prediction in public health.
However, variable influenza seasonality, occasional pandemics, and the domain
knowledge required pose great challenges to constructing an appropriate graph, which can
limit the effectiveness of the currently popular graph-based algorithms.
In this study, we develop a novel method, Dynamic Virtual Graph
Significance Networks (DVGSN), which can learn, in a supervised and dynamic manner,
from similar "infection situations" at historical timepoints. Representation
learning on the dynamic virtual graph can handle the varied seasonality and
pandemics, and therefore improves performance. Extensive experiments on
real-world influenza data demonstrate that DVGSN significantly outperforms the
current state-of-the-art methods. To the best of our knowledge, this is the
first attempt to learn a dynamic virtual graph in a supervised manner for time-series
prediction tasks. Moreover, the proposed method needs less domain knowledge to
build a graph in advance and has rich interpretability, which makes the method
more acceptable in fields such as public health and the life sciences.
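As a rough, hypothetical illustration of the core idea (not the authors' implementation), the sketch below builds weighted "virtual edges" from the current infection situation to similar historical windows and aggregates the values that followed them; DVGSN learns these edge significances in a supervised way, whereas this toy uses a fixed cosine similarity, and the window size, top-k cutoff, and synthetic data are assumptions.

```python
# Minimal sketch of the "dynamic virtual graph" idea: connect the current
# infection situation to similar historical windows and aggregate what
# happened next. Similarity is a fixed cosine here; the paper learns the
# edge significances in a supervised way.
import numpy as np

def virtual_graph_predict(history, window=4, top_k=5):
    """Predict the next value from the outcomes that followed the
    historical windows most similar to the current one."""
    history = np.asarray(history, dtype=float)
    query = history[-window:]                      # current infection situation
    sims, outcomes = [], []
    for t in range(window, len(history) - 1):
        past = history[t - window:t]               # a historical situation
        denom = np.linalg.norm(past) * np.linalg.norm(query) + 1e-8
        sims.append(float(past @ query) / denom)   # virtual edge weight
        outcomes.append(history[t])                # the value that followed it
    sims, outcomes = np.array(sims), np.array(outcomes)
    idx = np.argsort(sims)[-top_k:]                # keep the top-k virtual edges
    weights = sims[idx] / (sims[idx].sum() + 1e-8)
    return float(weights @ outcomes[idx])          # weighted aggregation

# toy usage: synthetic weekly influenza-like-illness counts
weeks = 60 + 50 * np.sin(np.linspace(0, 12 * np.pi, 300))
print(virtual_graph_predict(weeks))
```

Replacing the fixed similarity with a trainable scoring function, trained against the prediction target, is what would make the virtual graph "supervised" and "dynamic" in the paper's sense.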
Related papers
- Graph Learning [16.916717864896007]
Graph learning has rapidly evolved into a critical subfield of machine learning and artificial intelligence (AI). This survey focuses on key dimensions including scalable, temporal, multimodal, generative, explainable, and responsible graph learning. We also explore ethical considerations, such as privacy and fairness, to ensure responsible deployment of graph learning models.
arXiv Detail & Related papers (2025-07-08T03:29:27Z)
- AugWard: Augmentation-Aware Representation Learning for Accurate Graph Classification [16.7104207718009]
AugWard is a graph representation learning framework that considers the diversity introduced by graph augmentation.
AugWard applies augmentation-aware training, predicting the graph distance between an augmented graph and its original.
Results show that AugWard achieves state-of-the-art performance in supervised and semi-supervised graph classification as well as in transfer learning.
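A hedged sketch of what an augmentation-aware auxiliary objective of this kind could look like is below; the encoder, distance head, loss weight `alpha`, and graph-distance targets are illustrative placeholders rather than AugWard's actual components.

```python
# Classification loss plus an auxiliary loss that regresses the graph
# distance introduced by the augmentation (illustrative sketch only).
import torch
import torch.nn.functional as F

def augmentation_aware_loss(encoder, dist_head, classifier,
                            g_orig, g_aug, labels, true_dist, alpha=0.5):
    h_orig = encoder(g_orig)                         # embedding of the original graph
    h_aug = encoder(g_aug)                           # embedding of the augmented graph
    cls_loss = F.cross_entropy(classifier(h_aug), labels)
    pred_dist = dist_head(torch.cat([h_orig, h_aug], dim=-1)).squeeze(-1)
    dist_loss = F.mse_loss(pred_dist, true_dist)     # augmentation-aware term
    return cls_loss + alpha * dist_loss

# toy usage with vector inputs standing in for a real graph encoder
enc = torch.nn.Linear(16, 8)
head = torch.nn.Linear(16, 1)
clf = torch.nn.Linear(8, 3)
g1, g2 = torch.randn(4, 16), torch.randn(4, 16)
labels, d = torch.randint(0, 3, (4,)), torch.rand(4)
loss = augmentation_aware_loss(enc, head, clf, g1, g2, labels, d)
```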
arXiv Detail & Related papers (2025-03-27T02:58:28Z)
- A Survey of Deep Graph Learning under Distribution Shifts: from Graph Out-of-Distribution Generalization to Adaptation [59.14165404728197]
We provide an up-to-date and forward-looking review of deep graph learning under distribution shifts.
Specifically, we cover three primary scenarios: graph OOD generalization, training-time graph OOD adaptation, and test-time graph OOD adaptation.
To provide a better understanding of the literature, we systematically categorize the existing models based on our proposed taxonomy.
arXiv Detail & Related papers (2024-10-25T02:39:56Z)
- Informative Subgraphs Aware Masked Auto-Encoder in Dynamic Graphs [1.3571543090749625]
We introduce a constrained probabilistic generative model to generate informative subgraphs that guide the evolution of dynamic graphs.
The informative subgraphs identified by DyGIS serve as the input to a dynamic graph masked autoencoder (DGMAE).
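The toy below sketches only the masked-autoencoder side of this setup, under loose assumptions: entries of one snapshot's adjacency are hidden and then reconstructed, with a rank-3 SVD standing in for a GNN encoder/decoder; the informative-subgraph selection itself is not reproduced.

```python
# Masked graph autoencoding in miniature: hide part of the adjacency,
# reconstruct it, and score the reconstruction on the hidden entries.
import numpy as np

rng = np.random.default_rng(2)
A = (rng.random((8, 8)) < 0.3).astype(float)       # one dynamic-graph snapshot
mask = rng.random(A.shape) < 0.2                   # adjacency entries hidden from the "encoder"
A_visible = A * ~mask

# a rank-3 SVD factorization stands in for the GNN encoder/decoder pair
U, S, Vt = np.linalg.svd(A_visible)
A_recon = U[:, :3] @ np.diag(S[:3]) @ Vt[:3, :]

recon_error = np.abs(A_recon - A)[mask].mean()     # reconstruction loss on masked entries
print(recon_error)
```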
arXiv Detail & Related papers (2024-09-14T02:16:00Z)
- Gradient Transformation: Towards Efficient and Model-Agnostic Unlearning for Dynamic Graph Neural Networks [66.70786325911124]
Graph unlearning has emerged as an essential tool for safeguarding user privacy and mitigating the negative impacts of undesirable data.
With the increasing prevalence of dynamic graph neural networks (DGNNs), it becomes imperative to investigate the implementation of dynamic graph unlearning.
We propose an effective, efficient, model-agnostic, and post-processing method to implement DGNN unlearning.
arXiv Detail & Related papers (2024-05-23T10:26:18Z)
- Deep learning for dynamic graphs: models and benchmarks [16.851689741256912]
Recent progress in research on Deep Graph Networks (DGNs) has led to a maturation of the domain of learning on graphs.
Despite the growth of this research field, there are still important challenges that are yet unsolved.
arXiv Detail & Related papers (2023-07-12T12:02:36Z)
- EasyDGL: Encode, Train and Interpret for Continuous-time Dynamic Graph Learning [92.71579608528907]
This paper aims to design an easy-to-use pipeline (termed EasyDGL) composed of three key modules with both strong fitting ability and interpretability.
EasyDGL can effectively quantify the predictive power of the frequency content that a model learns from evolving graph data.
arXiv Detail & Related papers (2023-03-22T06:35:08Z)
- Time-aware Random Walk Diffusion to Improve Dynamic Graph Learning [3.4012007729454816]
TiaRa is a novel diffusion-based method for augmenting a dynamic graph represented as a discrete-time sequence of graph snapshots.
We show that TiaRa effectively augments a given dynamic graph, and leads to significant improvements in dynamic GNN models for various graph datasets and tasks.
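A loose sketch of diffusion-based snapshot augmentation is given below, assuming a truncated personalized-PageRank operator and a simple blend with the previous snapshot rather than TiaRa's exact time-aware formulation.

```python
# Densify each snapshot with a truncated random-walk diffusion, then
# blend it with its temporal predecessor (toy augmentation).
import numpy as np

def normalize(A):
    deg = A.sum(axis=1, keepdims=True)
    return A / np.clip(deg, 1e-8, None)            # row-stochastic transition matrix

def diffuse(A, alpha=0.15, steps=10):
    """Truncated PPR-style diffusion of a single snapshot."""
    P = normalize(A)
    S, walk = np.zeros_like(P), np.eye(P.shape[0])
    for k in range(steps):
        S += alpha * (1 - alpha) ** k * walk       # weight k-step walks
        walk = walk @ P
    return S

def augment_snapshots(snapshots, beta=0.3):
    """Augment each snapshot and blend it with the previous one."""
    out, prev = [], None
    for A in snapshots:
        S = diffuse(A)
        S = S if prev is None else (1 - beta) * S + beta * prev
        out.append(S)
        prev = S
    return out

# toy usage: three random snapshots over 6 nodes
rng = np.random.default_rng(0)
snapshots = [(rng.random((6, 6)) < 0.4).astype(float) for _ in range(3)]
augmented = augment_snapshots(snapshots)
```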
arXiv Detail & Related papers (2022-11-02T15:55:46Z)
- Graph Lifelong Learning: A Survey [6.545297572977323]
This paper focuses on the motivations, potentials, state-of-the-art approaches, and open issues of graph lifelong learning.
We expect extensive research and development interest in this emerging field.
arXiv Detail & Related papers (2022-02-22T06:14:07Z)
- Data Augmentation for Deep Graph Learning: A Survey [66.04015540536027]
We first propose a taxonomy for graph data augmentation and then provide a structured review by categorizing the related work based on the augmented information modalities.
Focusing on the two challenging problems in DGL (i.e., optimal graph learning and low-resource graph learning), we also discuss and review the existing learning paradigms which are based on graph data augmentation.
arXiv Detail & Related papers (2022-02-16T18:30:33Z)
- Iterative Graph Self-Distillation [161.04351580382078]
We propose a novel unsupervised graph learning paradigm called Iterative Graph Self-Distillation (IGSD).
IGSD iteratively performs teacher-student distillation with graph augmentations.
We show that we achieve significant and consistent performance gain on various graph datasets in both unsupervised and semi-supervised settings.
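A minimal sketch of one teacher-student self-distillation step in this spirit, assuming a BYOL-style cosine loss and an exponential-moving-average (EMA) teacher update; the encoders and augmented views are placeholders, not IGSD's architecture.

```python
# One self-distillation step: the student matches the EMA teacher's
# embedding of a differently augmented view, then the teacher tracks
# the student via an exponential moving average.
import copy
import torch
import torch.nn.functional as F

def self_distill_step(student, teacher, opt, view_a, view_b, momentum=0.99):
    z_s = F.normalize(student(view_a), dim=-1)          # student embedding
    with torch.no_grad():
        z_t = F.normalize(teacher(view_b), dim=-1)      # teacher embedding
    loss = (2 - 2 * (z_s * z_t).sum(dim=-1)).mean()     # cosine-style alignment loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():                               # EMA teacher update
        for p_t, p_s in zip(teacher.parameters(), student.parameters()):
            p_t.mul_(momentum).add_(p_s, alpha=1 - momentum)
    return loss.item()

# toy usage: vectors stand in for two augmented graph views
student = torch.nn.Linear(32, 16)
teacher = copy.deepcopy(student)
opt = torch.optim.SGD(student.parameters(), lr=0.01)
print(self_distill_step(student, teacher, opt, torch.randn(8, 32), torch.randn(8, 32)))
```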
arXiv Detail & Related papers (2020-10-23T18:37:06Z)
- Transfer Graph Neural Networks for Pandemic Forecasting [32.0506180195988]
We study the impact of population movement on the spread of COVID-19.
We employ graph neural networks to predict the number of future cases.
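As a rough illustration of the general recipe (a mobility-weighted graph over regions, recent case counts as node features, graph convolutions producing a next-step estimate), not the paper's transfer-learning model:

```python
# Toy mobility-graph forecaster: regions are nodes, edges are weighted by
# population movement, and two graph convolutions map recent case counts
# to a next-step estimate. All data and weights are synthetic placeholders.
import numpy as np

def gnn_layer(A, X, W):
    """One graph convolution: aggregate neighbours weighted by mobility."""
    A_hat = A + np.eye(A.shape[0])                     # add self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))
    return np.maximum(D_inv @ A_hat @ X @ W, 0)        # row-normalized aggregation + ReLU

rng = np.random.default_rng(1)
mobility = rng.random((6, 6))                          # movement between 6 regions
cases = rng.random((6, 4))                             # recent case counts per region
W1, W2 = rng.standard_normal((4, 8)), rng.standard_normal((8, 1))
hidden = gnn_layer(mobility, cases, W1)
next_cases = gnn_layer(mobility, hidden, W2)           # toy prediction of next-step cases
```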
arXiv Detail & Related papers (2020-09-10T13:23:52Z)
- Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
Several recent studies attribute the performance deterioration observed when stacking many such layers to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
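The sketch below illustrates the transform-then-propagate idea with an adaptive gate over propagation depths; the layer sizes and gating form are simplifications inferred from the summary above, not the reference implementation.

```python
# Transform node features once, propagate to several depths without extra
# weights, and adaptively weight the depths with a learned gate.
import torch
import torch.nn as nn

class DeepAdaptivePropagation(nn.Module):
    def __init__(self, in_dim, num_classes, hops=10):
        super().__init__()
        self.transform = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                       nn.Linear(64, num_classes))
        self.gate = nn.Linear(num_classes, 1)       # scores each depth per node
        self.hops = hops

    def forward(self, A_norm, X):
        h = self.transform(X)                       # feature transformation, done once
        outs, cur = [h], h
        for _ in range(self.hops):                  # parameter-free propagation
            cur = A_norm @ cur
            outs.append(cur)
        H = torch.stack(outs, dim=1)                # (nodes, hops + 1, classes)
        s = torch.sigmoid(self.gate(H))             # adaptive weight per depth
        return (s * H).sum(dim=1)                   # depth-weighted logits

# toy usage on a random row-normalized adjacency
A = torch.softmax(torch.rand(5, 5), dim=1)
model = DeepAdaptivePropagation(in_dim=8, num_classes=3)
logits = model(A, torch.randn(5, 8))
```

Because the propagation loop adds no extra trainable weights, the receptive field can be enlarged without the parameter growth that usually accompanies stacking graph convolutions.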
arXiv Detail & Related papers (2020-07-18T01:11:14Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
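A stripped-down InfoNCE objective of the kind used for contrastive pre-training is sketched below; GCC itself contrasts subgraph instances and can use a momentum encoder with a negative queue, which this toy omits.

```python
# InfoNCE over two views: matching rows are positives, all other rows in
# the batch act as negatives (stripped-down contrastive objective).
import torch
import torch.nn.functional as F

def info_nce(z_query, z_key, temperature=0.07):
    q = F.normalize(z_query, dim=-1)
    k = F.normalize(z_key, dim=-1)
    logits = q @ k.t() / temperature               # similarity of every query-key pair
    labels = torch.arange(q.size(0))               # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

# toy usage: embeddings of two augmented subgraph views per instance
z1, z2 = torch.randn(16, 32), torch.randn(16, 32)
loss = info_nce(z1, z2)
```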
arXiv Detail & Related papers (2020-06-17T16:18:35Z)