Is Graph Structure Necessary for Multi-hop Question Answering?
- URL: http://arxiv.org/abs/2004.03096v2
- Date: Thu, 29 Oct 2020 09:29:19 GMT
- Title: Is Graph Structure Necessary for Multi-hop Question Answering?
- Authors: Nan Shao, Yiming Cui, Ting Liu, Shijin Wang, Guoping Hu
- Abstract summary: We investigate whether the graph structure is necessary for multi-hop question answering.
Experiments and visualized analysis demonstrate that graph-attention or the entire graph structure can be replaced by self-attention or Transformers.
- Score: 34.189355591677725
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, modeling texts as graph structures and introducing graph neural networks to process them has become a trend in many NLP research areas. In this paper, we investigate whether graph structure is necessary for multi-hop question answering. Our analysis is centered on HotpotQA. We construct a strong baseline model to establish that, with the proper use of pre-trained models, graph structure may not be necessary for multi-hop question answering. We point out that both the graph structure and the adjacency matrix are task-related prior knowledge, and that graph-attention can be considered a special case of self-attention. Experiments and visualized analysis demonstrate that graph-attention, or even the entire graph structure, can be replaced by self-attention or Transformers.
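The claim that graph-attention is a special case of self-attention can be made concrete: using the adjacency matrix as a mask on the attention scores restricts self-attention to graph neighbors, which is what graph-attention computes. Below is a minimal NumPy sketch of this idea; the function names, shapes, and toy adjacency matrix are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def self_attention(Q, K, V, mask=None):
    """Scaled dot-product self-attention; mask[i, j] = True keeps the pair (i, j)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)              # (n, n) attention logits
    if mask is not None:
        scores = np.where(mask, scores, -1e9)  # suppress non-neighbor pairs
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Toy example: 4 nodes, hidden size 8.
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=bool)       # adjacency matrix with self-loops (prior knowledge)

full  = self_attention(H, H, H)                # plain self-attention: all node pairs
graph = self_attention(H, H, H, mask=A)        # "graph-attention": neighbors only
```

In this view, the graph structure enters only through the mask, i.e. as task-related prior knowledge; dropping the mask recovers ordinary self-attention over all pairs.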
Related papers
- Multimodal Multihop Source Retrieval for Web Question Answering [0.0]
This work deals with the challenge of learning and reasoning over multi-modal, multi-hop question answering (QA).
We propose a graph reasoning network based on the semantic structure of the sentences to learn multi-source reasoning paths.
arXiv Detail & Related papers (2025-01-07T22:53:56Z)
- G-Retriever: Retrieval-Augmented Generation for Textual Graph Understanding and Question Answering [61.93058781222079]
We develop a flexible question-answering framework targeting real-world textual graphs.
We introduce the first retrieval-augmented generation (RAG) approach for general textual graphs.
G-Retriever performs RAG over a graph by formulating this task as a Prize-Collecting Steiner Tree optimization problem (a toy sketch of this objective appears after this list).
arXiv Detail & Related papers (2024-02-12T13:13:04Z)
- GraphGLOW: Universal and Generalizable Structure Learning for Graph Neural Networks [72.01829954658889]
This paper introduces the mathematical definition of this novel problem setting.
We devise a general framework that coordinates a single graph-shared structure learner and multiple graph-specific GNNs.
The well-trained structure learner can directly produce adaptive structures for unseen target graphs without any fine-tuning.
arXiv Detail & Related papers (2023-06-20T03:33:22Z)
- Towards Graph-hop Retrieval and Reasoning in Complex Question Answering over Textual Database [15.837457557803507]
Graph-Hop is a novel multi-chain, multi-hop retrieval and reasoning paradigm for complex question answering.
We construct a new benchmark called ReasonGraphQA, which provides explicit and fine-grained evidence graphs for complex questions.
arXiv Detail & Related papers (2023-05-23T16:28:42Z)
- State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z)
- From Shallow to Deep: Compositional Reasoning over Graphs for Visual Question Answering [3.7094119304085584]
It is essential to learn to answer deeper questions that require compositional reasoning on the image and external knowledge.
We propose a Hierarchical Graph Neural Module Network (HGNMN) that reasons over multi-layer graphs with neural modules.
Our model consists of several well-designed neural modules that perform specific functions over graphs.
arXiv Detail & Related papers (2022-06-25T02:20:02Z)
- Question-Answer Sentence Graph for Joint Modeling Answer Selection [122.29142965960138]
We train and integrate state-of-the-art (SOTA) models for computing scores between question-question, question-answer, and answer-answer pairs.
Online inference is then performed to solve the AS2 task on unseen queries.
arXiv Detail & Related papers (2022-02-16T05:59:53Z)
- Graph2Graph Learning with Conditional Autoregressive Models [8.203106789678397]
We present a conditional autoregressive model for graph-to-graph learning.
We illustrate its representational capabilities via experiments on challenging subgraph predictions from graph algorithmics.
arXiv Detail & Related papers (2021-06-06T20:28:07Z)
- ExplaGraphs: An Explanation Graph Generation Task for Structured Commonsense Reasoning [65.15423587105472]
We present a new generative and structured commonsense-reasoning task (and an associated dataset) of explanation graph generation for stance prediction.
Specifically, given a belief and an argument, a model has to predict whether the argument supports or counters the belief, and also generate a commonsense-augmented graph that serves as a non-trivial, complete, and unambiguous explanation for the predicted stance.
A significant 83% of our graphs contain external commonsense nodes with diverse structures and reasoning depths.
arXiv Detail & Related papers (2021-04-15T17:51:36Z)
- GraphOpt: Learning Optimization Models of Graph Formation [72.75384705298303]
We propose an end-to-end framework that learns an implicit model of graph structure formation and discovers an underlying optimization mechanism.
The learned objective can serve as an explanation for the observed graph properties, thereby lending itself to transfer across different graphs within a domain.
GraphOpt poses link formation in graphs as a sequential decision-making process and solves it with a maximum-entropy inverse reinforcement learning algorithm.
arXiv Detail & Related papers (2020-07-07T16:51:39Z)
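The G-Retriever entry above formulates retrieval over a textual graph as a Prize-Collecting Steiner Tree (PCST) problem. As a rough illustration of that objective only (not the paper's implementation; all node names, prizes, and costs below are made up), a candidate subgraph is scored by the relevance "prizes" of the nodes it keeps minus the "costs" of the edges needed to keep it connected:

```python
from typing import Dict, Set, Tuple

def pcst_objective(nodes: Set[str],
                   edges: Set[Tuple[str, str]],
                   prize: Dict[str, float],
                   cost: Dict[Tuple[str, str], float]) -> float:
    """Total prize of selected nodes minus total cost of selected edges."""
    gained = sum(prize.get(v, 0.0) for v in nodes)
    paid = sum(cost.get(e, 0.0) for e in edges)
    return gained - paid

# Hypothetical retrieval graph: prizes reflect relevance to the question,
# edge costs penalize pulling in long connection paths.
prize = {"q_entity": 3.0, "fact_1": 2.5, "fact_2": 0.2, "bridge": 0.0}
cost = {("q_entity", "fact_1"): 1.0,
        ("q_entity", "bridge"): 1.0,
        ("bridge", "fact_2"): 1.0}

# Candidate A: keep only the question entity and the most relevant fact.
print(pcst_objective({"q_entity", "fact_1"},
                     {("q_entity", "fact_1")}, prize, cost))   # 4.5
# Candidate B: also pull in fact_2 via the bridge node.
print(pcst_objective({"q_entity", "fact_1", "bridge", "fact_2"},
                     set(cost), prize, cost))                  # 2.7
```

A PCST solver searches over connected candidates for the one with the highest such score, which is what trades off relevance against subgraph size in that framework.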
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.