A Heterogeneous Dynamical Graph Neural Networks Approach to Quantify
Scientific Impact
- URL: http://arxiv.org/abs/2003.12042v1
- Date: Thu, 26 Mar 2020 17:15:36 GMT
- Title: A Heterogeneous Dynamical Graph Neural Networks Approach to Quantify
Scientific Impact
- Authors: Fan Zhou, Xovee Xu, Ce Li, Goce Trajcevski, Ting Zhong, Kunpeng Zhang
- Abstract summary: We propose an approach based on Heterogeneous Dynamical Graph Neural Network (HDGNN) to explicitly model and predict the cumulative impact of papers and authors.
Experiments conducted on a real citation dataset demonstrate its superior performance in predicting the impact of both papers and authors.
- Score: 39.9627229543809
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quantifying and predicting the long-term impact of scientific writings or
individual scholars has important implications for many policy decisions, such
as funding proposal evaluation and identifying emerging research fields. In
this work, we propose an approach based on Heterogeneous Dynamical Graph Neural
Network (HDGNN) to explicitly model and predict the cumulative impact of papers
and authors. HDGNN extends heterogeneous GNNs by incorporating temporally
evolving characteristics, capturing both the structural properties of the
attributed graph and the growing sequence of citation behavior. HDGNN differs
significantly from previous models in its ability to model node impact
dynamically while accounting for the complex relations among nodes.
Experiments conducted on a real citation dataset demonstrate its superior
performance in predicting the impact of both papers and authors.
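The relation-specific message passing that heterogeneous GNNs such as HDGNN build on can be sketched as follows. This is a minimal illustrative example, not the paper's architecture: all names, the mean aggregation, and the toy graph are assumptions, and the actual HDGNN additionally models the temporal evolution of the citation sequence.

```python
import numpy as np

def hetero_gnn_layer(feats, edges, weights):
    """One simplified heterogeneous message-passing step.

    feats:   dict node_type -> (num_nodes, dim) feature matrix
    edges:   dict (src_type, relation, dst_type) -> list of (src_idx, dst_idx)
    weights: dict relation -> (dim, dim) relation-specific projection
    Returns updated features: mean of the self feature and all incoming
    relation-projected messages, passed through tanh.
    """
    out = {t: f.copy() for t, f in feats.items()}
    counts = {t: np.ones(f.shape[0]) for t, f in feats.items()}
    for (src_t, rel, dst_t), pairs in edges.items():
        W = weights[rel]
        for s, d in pairs:
            out[dst_t][d] += feats[src_t][s] @ W  # project by relation type
            counts[dst_t][d] += 1
    return {t: np.tanh(out[t] / counts[t][:, None]) for t in out}

# toy academic graph: 2 papers, 2 authors, "writes" and "cites" relations
rng = np.random.default_rng(0)
feats = {"paper": rng.normal(size=(2, 4)), "author": rng.normal(size=(2, 4))}
edges = {
    ("author", "writes", "paper"): [(0, 0), (1, 1)],
    ("paper", "cites", "paper"): [(1, 0)],
}
weights = {"writes": rng.normal(size=(4, 4)), "cites": rng.normal(size=(4, 4))}
new_feats = hetero_gnn_layer(feats, edges, weights)
```

Each relation type gets its own projection matrix, which is what distinguishes a heterogeneous GNN from a homogeneous one; stacking such layers and feeding the per-snapshot outputs to a sequence model is one common way to obtain the dynamic behavior the abstract describes.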
Related papers
- Towards Causal Classification: A Comprehensive Study on Graph Neural
Networks [9.360596957822471]
Graph Neural Networks (GNNs) for processing graph-structured data have expanded their potential for causal analysis.
Our study delves into nine benchmark graph classification models, testing their strength and versatility across seven datasets.
Our findings are instrumental in furthering the understanding and practical application of GNNs in diverse datacentric fields.
arXiv Detail & Related papers (2024-01-27T15:35:05Z)
- Rethinking Causal Relationships Learning in Graph Neural Networks [24.7962807148905]
We introduce a lightweight and adaptable GNN module designed to strengthen GNNs' causal learning capabilities.
We empirically validate the effectiveness of the proposed module.
arXiv Detail & Related papers (2023-12-15T08:54:32Z)
- A Metadata-Driven Approach to Understand Graph Neural Networks [17.240017543449735]
We propose a metadata-driven approach to analyze the sensitivity of GNNs to graph data properties.
Our theoretical findings reveal that datasets with more balanced degree distribution exhibit better linear separability of node representations.
arXiv Detail & Related papers (2023-10-30T04:25:02Z)
- DyExplainer: Explainable Dynamic Graph Neural Networks [37.16783248212211]
We present DyExplainer, a novel approach to explaining dynamic Graph Neural Networks (GNNs) on the fly.
DyExplainer trains a dynamic GNN backbone to extract representations of the graph at each snapshot.
We also augment our approach with contrastive learning techniques to provide prior-guided regularization.
arXiv Detail & Related papers (2023-10-25T05:26:33Z)
- Modeling Dynamic Heterogeneous Graph and Node Importance for Future
Citation Prediction [26.391252682418607]
We propose a Dynamic heterogeneous Graph and Node Importance network (DGNI) learning framework to predict future citation trends of newly published papers.
First, a dynamic heterogeneous network embedding module is provided to capture the dynamic evolutionary trends of the whole academic network.
A node importance embedding module is proposed to capture the global consistency relationship and estimate each paper's node importance.
arXiv Detail & Related papers (2023-05-27T08:53:26Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- Resisting Graph Adversarial Attack via Cooperative Homophilous
Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on the emerging but critical attack, namely, Graph Injection Attack.
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- Relation Embedding based Graph Neural Networks for Handling
Heterogeneous Graph [58.99478502486377]
We propose a simple yet efficient framework to make the homogeneous GNNs have adequate ability to handle heterogeneous graphs.
Specifically, we propose Relation Embedding based Graph Neural Networks (RE-GNNs), which employ only one parameter per relation to embed the importance of edge type relations and self-loop connections.
arXiv Detail & Related papers (2022-09-23T05:24:18Z)
- Graph neural networks for the prediction of molecular structure-property
relationships [59.11160990637615]
Graph neural networks (GNNs) are a novel machine learning method that works directly on the molecular graph.
GNNs learn properties in an end-to-end fashion, thereby avoiding the need for informative descriptors.
We describe the fundamentals of GNNs and demonstrate the application of GNNs via two examples for molecular property prediction.
arXiv Detail & Related papers (2022-07-25T11:30:44Z)
- Discovering the Representation Bottleneck of Graph Neural Networks from
Multi-order Interactions [51.597480162777074]
Graph neural networks (GNNs) rely on the message passing paradigm to propagate node features and build interactions.
Recent works point out that different graph learning tasks require different ranges of interactions between nodes.
We study two common graph construction methods in scientific domains, i.e., K-nearest neighbor (KNN) graphs and fully-connected (FC) graphs.
arXiv Detail & Related papers (2022-05-15T11:38:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.