How Can Graph Neural Networks Help Document Retrieval: A Case Study on
CORD19 with Concept Map Generation
- URL: http://arxiv.org/abs/2201.04672v1
- Date: Wed, 12 Jan 2022 19:52:29 GMT
- Title: How Can Graph Neural Networks Help Document Retrieval: A Case Study on
CORD19 with Concept Map Generation
- Authors: Hejie Cui, Jiaying Lu, Yao Ge, Carl Yang
- Abstract summary: Graph neural networks (GNNs) are powerful tools for representation learning on irregular data.
With unstructured texts represented as concept maps, GNNs can be exploited for tasks like document retrieval.
We conduct an empirical study on CORD-19, a large-scale multi-discipline dataset.
Results show that our proposed semantics-oriented graph functions achieve better and more stable performance based on the BM25 retrieved candidates.
- Score: 14.722791874800617
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs), as a group of powerful tools for representation
learning on irregular data, have manifested superiority in various downstream
tasks. With unstructured texts represented as concept maps, GNNs can be
exploited for tasks like document retrieval. Intrigued by how GNNs can help
document retrieval, we conduct an empirical study on the large-scale
multi-discipline dataset CORD-19. Results show that instead of complex
structure-oriented GNNs such as GINs and GATs, our proposed semantics-oriented
graph functions achieve better and more stable performance based on the BM25
retrieved candidates. Our insights in this case study can serve as a guideline
for future work to develop effective GNNs with appropriate semantics-oriented
inductive biases for textual reasoning tasks like document retrieval and
classification. All code for this case study is available at
https://github.com/HennyJie/GNN-DocRetrieval.
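The contrast the abstract draws can be sketched in code: a semantics-oriented graph function can be as simple as pooling the concept (node) embeddings of a document's concept map and scoring the result against the query, which is then used to re-rank the BM25-retrieved candidates. This is a minimal illustration, not the paper's implementation; all embeddings, names, and the toy documents are made up.

```python
# Minimal sketch: re-ranking BM25 candidates with a semantics-oriented
# graph function (mean-pooling of concept embeddings), as opposed to a
# structure-oriented GNN such as a GIN or GAT.

def mean_pool(node_embeddings):
    """Semantics-oriented graph function: average the concept (node)
    embeddings of a document's concept map into one vector."""
    dim = len(node_embeddings[0])
    return [sum(v[i] for v in node_embeddings) / len(node_embeddings)
            for i in range(dim)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def rerank(query_embedding, bm25_candidates):
    """Re-rank BM25-retrieved candidates by the similarity between the
    query embedding and each candidate's pooled concept map."""
    scored = [(doc_id, dot(query_embedding, mean_pool(nodes)))
              for doc_id, nodes in bm25_candidates]
    return sorted(scored, key=lambda x: x[1], reverse=True)

# Toy example: two candidate documents with 2-d concept embeddings.
query = [1.0, 0.0]
candidates = [
    ("doc_a", [[0.0, 1.0], [0.2, 0.8]]),  # mostly orthogonal to query
    ("doc_b", [[0.9, 0.1], [1.0, 0.0]]),  # aligned with query
]
ranking = rerank(query, candidates)
```

The stability the paper reports for such functions is plausible here: pooling has no trainable message-passing weights, so there is nothing to over-fit on small retrieval signals.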
Related papers
- Graph Classification with GNNs: Optimisation, Representation and Inductive Bias [0.6445605125467572]
We argue that such equivalence ignores the accompanying optimization issues and does not provide a holistic view of the GNN learning process.
We prove theoretically that the message-passing layers tend to search for either discriminative subgraphs or a collection of discriminative nodes dispersed across the graph.
arXiv Detail & Related papers (2024-08-17T18:15:44Z)
- Graph Neural Re-Ranking via Corpus Graph [12.309841763251406]
Graph Neural Re-Ranking (GNRR) is a pipeline based on Graph Neural Networks (GNNs) that enables each query to consider documents distribution during inference.
We demonstrate that GNNs effectively capture cross-document interactions, improving performance on popular ranking metrics.
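The corpus-graph idea can be illustrated with one message-passing step: initial retrieval scores are propagated over a document-similarity graph, so each candidate's score reflects its neighbours (the cross-document interactions the summary mentions). The graph, scores, and mixing weight below are illustrative assumptions, not GNRR's actual formulation.

```python
# Sketch: smoothing retrieval scores over a corpus graph so that
# documents similar to highly scored documents are promoted.

def propagate(scores, neighbors, alpha=0.5):
    """One message-passing step: mix each document's own score with
    the mean score of its neighbours in the corpus graph."""
    new_scores = {}
    for doc, s in scores.items():
        nbrs = neighbors.get(doc, [])
        nbr_mean = sum(scores[n] for n in nbrs) / len(nbrs) if nbrs else s
        new_scores[doc] = (1 - alpha) * s + alpha * nbr_mean
    return new_scores

# d2 is similar to the top-scored d1, so its score rises; the
# isolated d3 keeps its score.
scores = {"d1": 1.0, "d2": 0.2, "d3": 0.1}
neighbors = {"d1": ["d2"], "d2": ["d1"], "d3": []}
smoothed = propagate(scores, neighbors)
```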
arXiv Detail & Related papers (2024-06-17T16:38:19Z)
- MAG-GNN: Reinforcement Learning Boosted Graph Neural Network [68.60884768323739]
A particular line of work proposed subgraph GNNs that use subgraph information to improve GNNs' expressivity and achieved great success.
Such effectiveness, however, comes at the cost of efficiency, since all possible subgraphs must be enumerated.
We propose Magnetic Graph Neural Network (MAG-GNN), a reinforcement learning (RL) boosted GNN, to solve the problem.
arXiv Detail & Related papers (2023-10-29T20:32:21Z)
- Information Flow in Graph Neural Networks: A Clinical Triage Use Case [49.86931948849343]
Graph Neural Networks (GNNs) have gained popularity in healthcare and other domains due to their ability to process multi-modal and multi-relational graphs.
We investigate how the flow of embedding information within GNNs affects the prediction of links in Knowledge Graphs (KGs).
Our results demonstrate that incorporating domain knowledge into the GNN connectivity leads to better performance than using the same connectivity as the KG or allowing unconstrained embedding propagation.
arXiv Detail & Related papers (2023-09-12T09:18:12Z)
- An Empirical Study of Retrieval-enhanced Graph Neural Networks [48.99347386689936]
Graph Neural Networks (GNNs) are effective tools for graph representation learning.
We propose a retrieval-enhanced scheme called GRAPHRETRIEVAL, which is agnostic to the choice of graph neural network models.
We conduct comprehensive experiments over 13 datasets, and we observe that GRAPHRETRIEVAL is able to reach substantial improvements over existing GNNs.
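A retrieval-enhanced scheme of this kind can be sketched as interpolating a base model's prediction with the labels of the most similar training graphs (nearest neighbours in embedding space); since only the prediction is adjusted, it stays agnostic to the underlying GNN. The embeddings, labels, and the weight `lam` below are hypothetical, not GRAPHRETRIEVAL's exact method.

```python
# Sketch: blend a base prediction with the mean label of the k
# training graphs whose embeddings are closest to the query graph.
import math

def cosine(u, v):
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return sum(a * b for a, b in zip(u, v)) / (nu * nv)

def retrieval_enhanced(query_emb, base_pred, store, k=2, lam=0.5):
    """store: list of (graph_embedding, label) pairs from training."""
    ranked = sorted(store, key=lambda item: cosine(query_emb, item[0]),
                    reverse=True)
    nbr_mean = sum(label for _, label in ranked[:k]) / k
    return (1 - lam) * base_pred + lam * nbr_mean

# Toy store: two graphs near the query carry label 1.0.
store = [([1.0, 0.0], 1.0), ([0.9, 0.1], 1.0), ([0.0, 1.0], 0.0)]
pred = retrieval_enhanced([1.0, 0.05], base_pred=0.4, store=store)
```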
arXiv Detail & Related papers (2022-06-01T09:59:09Z)
- Training Free Graph Neural Networks for Graph Matching [103.45755859119035]
TFGM is a framework for boosting the performance of Graph Neural Network (GNN)-based graph matching without training.
Applying TFGM on various GNNs shows promising improvements over baselines.
arXiv Detail & Related papers (2022-01-14T09:04:46Z)
- Measuring and Sampling: A Metric-guided Subgraph Learning Framework for Graph Neural Network [11.017348743924426]
We propose MeGuide, a metric-guided subgraph learning framework for graph neural networks (GNNs).
MeGuide employs two novel metrics, Feature Smoothness and Connection Failure Distance, to guide subgraph sampling and mini-batch based training.
We demonstrate the effectiveness and efficiency of MeGuide in training various GNNs on multiple datasets.
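A smoothness-style metric of the kind that could guide such sampling can be sketched as the average distance between each node's feature and the mean feature of its neighbours; lower values indicate a smoother subgraph. The paper's exact definitions of Feature Smoothness and Connection Failure Distance may differ, so this is only an illustration.

```python
# Sketch: a feature-smoothness-style score over a (sub)graph.

def feature_smoothness(features, adj):
    """features: node -> feature vector; adj: node -> neighbour list.
    Returns the mean L1 distance between each node's feature and the
    average feature of its neighbours (0.0 = perfectly smooth)."""
    total, count = 0.0, 0
    for node, feat in features.items():
        nbrs = adj.get(node, [])
        if not nbrs:
            continue
        mean = [sum(features[n][i] for n in nbrs) / len(nbrs)
                for i in range(len(feat))]
        total += sum(abs(a - b) for a, b in zip(feat, mean))
        count += 1
    return total / count if count else 0.0

# Two toy 2-node graphs: identical features vs opposite features.
smooth = {"a": [1.0], "b": [1.0]}
rough = {"a": [1.0], "b": [-1.0]}
adj = {"a": ["b"], "b": ["a"]}
```

A sampler could prefer subgraphs whose score falls in a target range, which is the flavour of metric-guided sampling this entry describes.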
arXiv Detail & Related papers (2021-12-30T11:00:00Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
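The contrastive objective behind such self-supervised pre-training is typically an InfoNCE-style loss: a query subgraph's embedding should score higher against its own augmented view (the positive key) than against other graphs (negatives). The embeddings and temperature below are made up; this is the generic loss, not necessarily GCC's exact instantiation.

```python
# Sketch: InfoNCE loss over graph instance embeddings.
import math

def info_nce(query, positive, negatives, temperature=0.1):
    """-log softmax of the positive similarity among the positive and
    negative similarities (dot products scaled by temperature)."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    logits = [dot(query, positive) / temperature]
    logits += [dot(query, n) / temperature for n in negatives]
    m = max(logits)  # subtract max for numerical stability
    denom = sum(math.exp(x - m) for x in logits)
    return -(logits[0] - m - math.log(denom))

q = [1.0, 0.0]
# Loss is small when the positive aligns with the query...
loss_easy = info_nce(q, positive=[1.0, 0.0], negatives=[[0.0, 1.0]])
# ...and large when a negative aligns with it instead.
loss_hard = info_nce(q, positive=[0.0, 1.0], negatives=[[1.0, 0.0]])
```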
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
- Node Masking: Making Graph Neural Networks Generalize and Scale Better [71.51292866945471]
Graph Neural Networks (GNNs) have received considerable interest in recent years.
In this paper, we use theoretical tools to better visualize the operations performed by state-of-the-art spatial GNNs.
We introduce a simple concept, Node Masking, that allows them to generalize and scale better.
arXiv Detail & Related papers (2020-01-17T06:26:40Z)
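The masking idea named in the last entry can be sketched simply: before a training step, a random subset of nodes has its features zeroed so the GNN cannot over-rely on any single node. The mask rate and features below are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: zero out the features of roughly mask_rate of the nodes.
import random

def mask_nodes(features, mask_rate, rng):
    """features: node -> feature vector. Returns a copy with each
    node's features zeroed with probability mask_rate."""
    masked = {}
    for node, feat in features.items():
        if rng.random() < mask_rate:
            masked[node] = [0.0] * len(feat)
        else:
            masked[node] = list(feat)
    return masked

rng = random.Random(0)  # fixed seed for reproducibility
feats = {i: [1.0, 2.0] for i in range(1000)}
masked = mask_nodes(feats, mask_rate=0.3, rng=rng)
n_masked = sum(1 for v in masked.values() if v == [0.0, 0.0])
```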
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.