Interactive Machine Comprehension with Dynamic Knowledge Graphs
- URL: http://arxiv.org/abs/2109.00077v1
- Date: Tue, 31 Aug 2021 21:05:22 GMT
- Title: Interactive Machine Comprehension with Dynamic Knowledge Graphs
- Authors: Xingdi Yuan
- Abstract summary: Interactive machine reading comprehension (iMRC) refers to machine comprehension tasks in which knowledge sources are partially observable.
We hypothesize that graph representations are good inductive biases, which can serve as an agent's memory mechanism in iMRC tasks.
We describe methods that dynamically build and update these graphs during information gathering, as well as neural models to encode graph representations in RL agents.
- Score: 9.599169515136436
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Interactive machine reading comprehension (iMRC) refers to machine
comprehension tasks in which knowledge sources are partially observable. An agent must interact
with an environment sequentially to gather necessary knowledge in order to
answer a question. We hypothesize that graph representations are good inductive
biases, which can serve as an agent's memory mechanism in iMRC tasks. We
explore four different categories of graphs that can capture text information
at various levels. We describe methods that dynamically build and update these
graphs during information gathering, as well as neural models to encode graph
representations in RL agents. Extensive experiments on iSQuAD suggest that
graph representations can result in significant performance improvements for RL
agents.
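The core idea of a dynamically built graph serving as an agent's memory can be illustrated with a minimal sketch. The `GraphMemory` class and its co-occurrence heuristic below are hypothetical simplifications for illustration only; the paper explores four more sophisticated graph categories and neural models for encoding them.

```python
# Minimal sketch of a dynamic graph memory for an iMRC-style agent.
# All names are illustrative, not the paper's actual construction.

class GraphMemory:
    """Accumulates a co-occurrence graph over observed text."""

    def __init__(self):
        self.nodes = set()
        self.edges = {}  # (u, v) -> co-occurrence count

    def update(self, observation_tokens):
        # Treat each token as a node; link tokens seen in the same observation.
        tokens = sorted(set(observation_tokens))
        self.nodes.update(tokens)
        for i, u in enumerate(tokens):
            for v in tokens[i + 1:]:
                key = (u, v)
                self.edges[key] = self.edges.get(key, 0) + 1

    def neighbors(self, node):
        return {v for (u, v) in self.edges if u == node} | \
               {u for (u, v) in self.edges if v == node}

# The agent updates its memory after each partial observation,
# so later decisions can condition on everything gathered so far.
memory = GraphMemory()
memory.update(["agent", "reads", "sentence"])
memory.update(["agent", "asks", "question"])
related = sorted(memory.neighbors("agent"))  # -> ['asks', 'question', 'reads', 'sentence']
```

In the paper, such a graph is re-encoded by a neural model at each step and fed to the RL policy, rather than queried symbolically as here.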
Related papers
- Learning From Graph-Structured Data: Addressing Design Issues and Exploring Practical Applications in Graph Representation Learning [2.492884361833709]
We present an exhaustive review of the latest advancements in graph representation learning and Graph Neural Networks (GNNs).
GNNs, tailored to handle graph-structured data, excel in deriving insights and predictions from intricate relational information.
Our work delves into the capabilities of GNNs, examining their foundational designs and their application in addressing real-world challenges.
arXiv Detail & Related papers (2024-11-09T19:10:33Z)
- Dynamic and Textual Graph Generation Via Large-Scale LLM-based Agent Simulation [70.60461609393779]
GraphAgent-Generator (GAG) is a novel simulation-based framework for dynamic graph generation.
Our framework effectively replicates seven macro-level structural characteristics in established network science theories.
It supports generating graphs with up to nearly 100,000 nodes or 10 million edges, with a minimum speed-up of 90.4%.
arXiv Detail & Related papers (2024-10-13T12:57:08Z)
- Verbalized Graph Representation Learning: A Fully Interpretable Graph Model Based on Large Language Models Throughout the Entire Process [8.820909397907274]
We propose a verbalized graph representation learning (VGRL) method which is fully interpretable.
In contrast to traditional graph machine learning models, VGRL constrains this parameter space to textual descriptions.
We conduct several studies to empirically evaluate the effectiveness of VGRL.
arXiv Detail & Related papers (2024-10-02T12:07:47Z)
- Informative Subgraphs Aware Masked Auto-Encoder in Dynamic Graphs [1.3571543090749625]
We introduce a constrained probabilistic generative model to generate informative subgraphs that guide the evolution of dynamic graphs.
The informative subgraphs identified by DyGIS serve as the input of the dynamic graph masked autoencoder (DGMAE).
arXiv Detail & Related papers (2024-09-14T02:16:00Z)
- Graph Attention Inference of Network Topology in Multi-Agent Systems [0.0]
Our work introduces a novel machine learning-based solution that leverages the attention mechanism to predict future states of multi-agent systems.
The graph structure is then inferred from the strength of the attention values.
Our results demonstrate that the presented data-driven graph attention machine learning model can identify the network topology in multi-agent systems.
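The general recipe described here, reading a graph's structure off learned pairwise attention weights, can be sketched in a few lines. The thresholding heuristic below is a toy stand-in; the paper learns the attention weights by training a model to predict future agent states, whereas we start from a given attention matrix.

```python
import numpy as np

def infer_topology(attention, threshold=0.2):
    # Symmetrize, zero the diagonal, and keep strong attention links as edges.
    sym = (attention + attention.T) / 2
    np.fill_diagonal(sym, 0.0)
    return (sym > threshold).astype(int)

# Hypothetical attention weights among three agents: agents 0 and 1
# attend strongly to each other; agent 2 is mostly independent.
attention = np.array([[0.9, 0.6, 0.1],
                      [0.5, 0.9, 0.05],
                      [0.1, 0.1, 0.9]])
adjacency = infer_topology(attention)  # -> edge only between agents 0 and 1
```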
arXiv Detail & Related papers (2024-08-27T23:58:51Z)
- When Graph Data Meets Multimodal: A New Paradigm for Graph Understanding and Reasoning [54.84870836443311]
The paper presents a new paradigm for understanding and reasoning about graph data by integrating image encoding and multimodal technologies.
This approach enables the comprehension of graph data through an instruction-response format, utilizing GPT-4V's advanced capabilities.
The study evaluates this paradigm on various graph types, highlighting the model's strengths and weaknesses, particularly in Chinese OCR performance and complex reasoning tasks.
arXiv Detail & Related papers (2023-12-16T08:14:11Z)
- Learning Representation over Dynamic Graph using Aggregation-Diffusion Mechanism [4.729833950299859]
We propose an aggregation-diffusion (AD) mechanism in which a node, after updating its embedding through aggregation, actively propagates information to its neighbors by diffusion.
In experiments on two real-world datasets in the dynamic link prediction task, the AD mechanism outperforms the baseline models that only use aggregation to propagate information.
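The two-phase update can be made concrete with a small numeric sketch. The mean-aggregation and fixed diffusion rate below are illustrative assumptions; the paper's AD mechanism uses learned weights and operates on temporally evolving graphs.

```python
import numpy as np

def aggregate(embeddings, adjacency, node):
    # Phase 1: pull in a mean of neighbor embeddings (blended with self).
    neighbors = np.nonzero(adjacency[node])[0]
    if len(neighbors) == 0:
        return embeddings[node]
    return 0.5 * embeddings[node] + 0.5 * embeddings[neighbors].mean(axis=0)

def diffuse(embeddings, adjacency, node, rate=0.1):
    # Phase 2: push a fraction of the updated embedding back to each neighbor.
    out = embeddings.copy()
    for nbr in np.nonzero(adjacency[node])[0]:
        out[nbr] = (1 - rate) * out[nbr] + rate * embeddings[node]
    return out

# Triangle-free toy graph: node 0 connects to nodes 1 and 2.
adjacency = np.array([[0, 1, 1],
                      [1, 0, 0],
                      [1, 0, 0]])
embeddings = np.eye(3)  # one-hot starting embeddings

embeddings[0] = aggregate(embeddings, adjacency, 0)
embeddings = diffuse(embeddings, adjacency, 0)
```

The point of the diffusion phase is that node 0's neighbors receive updated information immediately, instead of waiting for their own next aggregation step.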
arXiv Detail & Related papers (2021-06-03T08:25:42Z)
- TCL: Transformer-based Dynamic Graph Modelling via Contrastive Learning [87.38675639186405]
We propose a novel graph neural network approach, called TCL, which deals with the dynamically-evolving graph in a continuous-time fashion.
To the best of our knowledge, this is the first attempt to apply contrastive learning to representation learning on dynamic graphs.
arXiv Detail & Related papers (2021-05-17T15:33:25Z)
- Jointly Cross- and Self-Modal Graph Attention Network for Query-Based Moment Localization [77.21951145754065]
We propose a novel Cross- and Self-Modal Graph Attention Network (CSMGAN) that recasts this task as a process of iterative message passing over a joint graph.
Our CSMGAN is able to effectively capture high-order interactions between two modalities, thus enabling a further precise localization.
arXiv Detail & Related papers (2020-08-04T08:25:24Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
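The objective family behind this kind of contrastive pre-training can be sketched with a hand-rolled InfoNCE-style loss. This is a generic illustration, not GCC's actual pipeline, which uses subgraph instance discrimination with GNN encoders.

```python
import numpy as np

def info_nce(query, positive, negatives, temperature=0.07):
    # Score the query against its positive and a set of negatives by
    # cosine similarity, then take cross-entropy toward the positive.
    def sim(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    logits = np.array([sim(query, positive)] +
                      [sim(query, n) for n in negatives]) / temperature
    logits -= logits.max()  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])  # small when query and positive agree

# Toy 2-d embeddings: the loss is lower when the labeled positive
# actually resembles the query.
query = np.array([1.0, 0.0])
loss_good = info_nce(query, np.array([0.9, 0.1]), [np.array([0.0, 1.0])])
loss_bad = info_nce(query, np.array([0.0, 1.0]), [np.array([0.9, 0.1])])
```

Pre-training minimizes this loss so that representations of related graph instances are pulled together and unrelated ones pushed apart.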
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
- Graph Representation Learning via Graphical Mutual Information Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder.
arXiv Detail & Related papers (2020-02-04T08:33:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.