Dynamic Virtual Graph Significance Networks for Predicting Influenza
- URL: http://arxiv.org/abs/2102.08122v1
- Date: Tue, 16 Feb 2021 12:38:23 GMT
- Title: Dynamic Virtual Graph Significance Networks for Predicting Influenza
- Authors: Jie Zhang, Pengfei Zhou, Hongyan Wu
- Abstract summary: We develop a novel method, Dynamic Virtual Graph Significance Networks (DVGSN), which can dynamically learn from similar "infection situations" in historical timepoints.
Experiments on real-world influenza data demonstrate that DVGSN significantly outperforms the current state-of-the-art methods.
- Score: 6.144775057306887
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph-structured data and their related algorithms have attracted significant
attention in many fields, such as influenza prediction in public health.
However, variable influenza seasonality, occasional pandemics, and the required
domain knowledge pose great challenges to constructing an appropriate graph,
which can weaken the current popular graph-based algorithms for data analysis.
In this study, we develop a novel method, Dynamic Virtual Graph Significance
Networks (DVGSN), which learns, in a supervised and dynamic manner, from
similar "infection situations" at historical timepoints. Representation
learning on the dynamic virtual graph can tackle the varied seasonality and
pandemics, and therefore improve the performance. The extensive experiments on
real-world influenza data demonstrate that DVGSN significantly outperforms the
current state-of-the-art methods. To the best of our knowledge, this is the
first attempt to learn a dynamic virtual graph in a supervised manner for
time-series prediction tasks. Moreover, the proposed method requires less
domain knowledge to build a graph in advance and offers rich interpretability,
which makes it more acceptable in fields such as public health and the life
sciences.
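The core idea in the abstract can be sketched as follows: instead of a fixed geographic or contact graph, each current timepoint is connected by weighted virtual edges to historically similar "infection situations," and the prediction aggregates over those neighbors. This is a minimal illustration only; in DVGSN the edge significances are learned end-to-end in a supervised manner, whereas here they come from a fixed similarity, and all function names are hypothetical.

```python
import numpy as np

def virtual_graph_weights(current, history, temperature=1.0):
    """Softmax edge weights of a virtual graph linking the current
    infection situation to each historical timepoint.

    Simplification: weights come from negative Euclidean distance;
    DVGSN instead learns edge significances supervisedly."""
    sims = np.array([-np.linalg.norm(current - h) / temperature for h in history])
    e = np.exp(sims - sims.max())  # numerically stable softmax
    return e / e.sum()

def predict(current, history, targets, temperature=1.0):
    """Predict the next value as a weighted aggregation of the outcomes
    that followed similar historical situations."""
    w = virtual_graph_weights(current, history, temperature)
    return float(w @ targets)
```

A usage example: with weekly case-count feature vectors as the "infection situations," `predict` pulls the forecast toward the outcomes of the most similar past weeks, which is what makes the virtual-graph view robust to shifted seasonality.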
Related papers
- Gradient Transformation: Towards Efficient and Model-Agnostic Unlearning for Dynamic Graph Neural Networks [66.70786325911124]
Graph unlearning has emerged as an essential tool for safeguarding user privacy and mitigating the negative impacts of undesirable data.
With the increasing prevalence of dynamic graph neural networks (DGNNs), it becomes imperative to investigate the implementation of dynamic graph unlearning.
We propose an effective, efficient, model-agnostic, and post-processing method to implement DGNN unlearning.
arXiv Detail & Related papers (2024-05-23T10:26:18Z) - Multimodal Graph Learning for Modeling Emerging Pandemics with Big Data [3.4512624130325786]
We propose a novel framework called MGL4MEP that integrates temporal graph neural networks and multi-modal data for learning and forecasting.
We incorporate big data sources, including social media content, by utilizing specific pre-trained language models.
This integration provides rich indicators of pandemic dynamics through learning with temporal graph neural networks.
arXiv Detail & Related papers (2023-10-23T04:05:19Z) - Deep learning for dynamic graphs: models and benchmarks [16.851689741256912]
Recent progress in research on Deep Graph Networks (DGNs) has led to a maturation of the domain of learning on graphs.
Despite the growth of this research field, there are still important challenges that are yet unsolved.
arXiv Detail & Related papers (2023-07-12T12:02:36Z) - EasyDGL: Encode, Train and Interpret for Continuous-time Dynamic Graph Learning [114.72818205974285]
This paper aims to design an easy-to-use pipeline (termed as EasyDGL) composed of three key modules with both strong ability fitting and interpretability.
EasyDGL can effectively quantify the predictive power of the frequency content that a model learns from the evolving graph data.
arXiv Detail & Related papers (2023-03-22T06:35:08Z) - Time-aware Random Walk Diffusion to Improve Dynamic Graph Learning [3.4012007729454816]
TiaRa is a novel diffusion-based method for augmenting a dynamic graph represented as a discrete-time sequence of graph snapshots.
We show that TiaRa effectively augments a given dynamic graph, and leads to significant improvements in dynamic GNN models for various graph datasets and tasks.
arXiv Detail & Related papers (2022-11-02T15:55:46Z) - Graph Lifelong Learning: A Survey [6.545297572977323]
This paper focuses on the motivations, potentials, state-of-the-art approaches, and open issues of graph lifelong learning.
We expect extensive research and development interest in this emerging field.
arXiv Detail & Related papers (2022-02-22T06:14:07Z) - Data Augmentation for Deep Graph Learning: A Survey [66.04015540536027]
We first propose a taxonomy for graph data augmentation and then provide a structured review by categorizing the related work based on the augmented information modalities.
Focusing on the two challenging problems in DGL (i.e., optimal graph learning and low-resource graph learning), we also discuss and review the existing learning paradigms which are based on graph data augmentation.
arXiv Detail & Related papers (2022-02-16T18:30:33Z) - Iterative Graph Self-Distillation [161.04351580382078]
We propose a novel unsupervised graph learning paradigm called Iterative Graph Self-Distillation (IGSD).
IGSD iteratively performs the teacher-student distillation with graph augmentations.
We show that we achieve significant and consistent performance gain on various graph datasets in both unsupervised and semi-supervised settings.
arXiv Detail & Related papers (2020-10-23T18:37:06Z) - Transfer Graph Neural Networks for Pandemic Forecasting [32.0506180195988]
We study the impact of population movement on the spread of COVID-19.
We employ graph neural networks to predict the number of future cases.
arXiv Detail & Related papers (2020-09-10T13:23:52Z) - Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
Several recent studies attribute this performance deterioration to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
arXiv Detail & Related papers (2020-07-18T01:11:14Z) - GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences arising from its use.