Coreference Graph Guidance for Mind-Map Generation
- URL: http://arxiv.org/abs/2312.11997v1
- Date: Tue, 19 Dec 2023 09:39:27 GMT
- Title: Coreference Graph Guidance for Mind-Map Generation
- Authors: Zhuowei Zhang, Mengting Hu, Yinhao Bai, Zhen Zhang
- Abstract summary: Recently, a state-of-the-art method encodes the sentences of a document sequentially and converts them to a relation graph via sequence-to-graph.
We propose a coreference-guided mind-map generation network (CMGN) to incorporate external structure knowledge.
- Score: 5.289044688419791
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Mind-map generation aims to process a document into a hierarchical structure
to show its central idea and branches. Such a manner is more conducive to
understanding the logic and semantics of the document than plain text.
Recently, a state-of-the-art method encodes the sentences of a document
sequentially and converts them to a relation graph via sequence-to-graph.
Though this method is efficient at generating mind-maps in parallel, its
mechanism focuses on sequential features and hardly captures structural
information. Moreover, it struggles to model long-range semantic relations.
In this work, we propose a coreference-guided mind-map generation network
(CMGN) to incorporate external structure knowledge. Specifically, we construct
a coreference graph based on the coreference semantic relationship to introduce
the graph structure information. Then we employ a coreference graph encoder to
mine the potential governing relations between sentences. In order to exclude
noise and better utilize the information of the coreference graph, we adopt a
graph enhancement module in a contrastive learning manner. Experimental results
demonstrate that our model outperforms all the existing methods. The case study
further proves that our model can more accurately and concisely reveal the
structure and semantics of a document. Code and data are available at
https://github.com/Cyno2232/CMGN.
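The pipeline described above hinges on turning coreference chains into a sentence-level graph that the coreference graph encoder can consume, and on producing a second view of that graph for the contrastive graph-enhancement step. The sketch below is not the authors' implementation: it assumes the coreference clusters have already been produced by an external resolver and are given as lists of sentence indices, and the names build_coreference_graph, edge_dropped_view, and drop_prob are illustrative only.

```python
import numpy as np

def build_coreference_graph(num_sentences, clusters):
    """Build a sentence-level coreference graph.

    clusters: list of coreference clusters, each given as the sentence
    indices that contain a mention of the same entity (assumed to come
    from an external coreference resolver).
    Returns a symmetric adjacency matrix with self-loops.
    """
    adj = np.eye(num_sentences)
    for cluster in clusters:
        idx = sorted(set(cluster))
        # Connect every pair of sentences that share a coreferent mention.
        for i in idx:
            for j in idx:
                if i != j:
                    adj[i, j] = 1.0
    return adj

def edge_dropped_view(adj, drop_prob=0.2, seed=0):
    """Create a perturbed view of the graph by randomly dropping edges,
    one common way to obtain the second view for a contrastive objective."""
    rng = np.random.default_rng(seed)
    mask = rng.random(adj.shape) >= drop_prob
    view = adj * mask
    np.fill_diagonal(view, 1.0)      # keep self-loops
    return np.maximum(view, view.T)  # keep the view symmetric

# Toy usage: 5 sentences, two coreference clusters.
adj = build_coreference_graph(5, clusters=[[0, 2, 4], [1, 3]])
aug = edge_dropped_view(adj, drop_prob=0.3)
```

In the toy usage, sentences 0, 2, and 4 mention the same entity and therefore form a clique in the coreference graph; a graph-enhancement module of the kind described above could then encode both the original and the perturbed view and pull their representations together under a contrastive loss.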
Related papers
- A Semantic Mention Graph Augmented Model for Document-Level Event Argument Extraction [12.286432133599355]
Document-level Event Argument Extraction (DEAE) aims to identify arguments and their specific roles from an unstructured document.
Advanced approaches to DEAE utilize prompt-based methods to guide pre-trained language models (PLMs) in extracting arguments from input documents.
We propose a semantic mention Graph Augmented Model (GAM) to address these two problems in this paper.
arXiv Detail & Related papers (2024-03-12T08:58:07Z)
- GraphEdit: Large Language Models for Graph Structure Learning [62.618818029177355]
Graph Structure Learning (GSL) focuses on capturing intrinsic dependencies and interactions among nodes in graph-structured data.
Existing GSL methods heavily depend on explicit graph structural information as supervision signals.
We propose GraphEdit, an approach that leverages large language models (LLMs) to learn complex node relationships in graph-structured data.
arXiv Detail & Related papers (2024-02-23T08:29:42Z)
- GraphKD: Exploring Knowledge Distillation Towards Document Object Detection with Structured Graph Creation [14.511401955827875]
Object detection in documents is a key step in automating the identification of structural elements.
We present a graph-based knowledge distillation framework to correctly identify and localize the document objects in a document image.
arXiv Detail & Related papers (2024-02-17T23:08:32Z)
- Conversational Semantic Parsing using Dynamic Context Graphs [68.72121830563906]
We consider the task of conversational semantic parsing over general purpose knowledge graphs (KGs) with millions of entities, and thousands of relation-types.
We focus on models which are capable of interactively mapping user utterances into executable logical forms.
arXiv Detail & Related papers (2023-05-04T16:04:41Z)
- FactGraph: Evaluating Factuality in Summarization with Semantic Graph Representations [114.94628499698096]
We propose FactGraph, a method that decomposes the document and the summary into structured meaning representations (MRs).
MRs describe core semantic concepts and their relations, aggregating the main content in both document and summary in a canonical form, and reducing data sparsity.
Experiments on different benchmarks for evaluating factuality show that FactGraph outperforms previous approaches by up to 15%.
arXiv Detail & Related papers (2022-04-13T16:45:33Z)
- Explanation Graph Generation via Pre-trained Language Models: An Empirical Study with Contrastive Learning [84.35102534158621]
We study pre-trained language models that generate explanation graphs in an end-to-end manner.
We propose simple yet effective ways of graph perturbations via node and edge edit operations.
Our methods lead to significant improvements in both structural and semantic accuracy of explanation graphs.
arXiv Detail & Related papers (2022-04-11T00:58:27Z)
- Sparse Structure Learning via Graph Neural Networks for Inductive Document Classification [2.064612766965483]
We propose a novel GNN-based sparse structure learning model for inductive document classification.
Our model collects a set of trainable edges connecting disjoint words between sentences and employs structure learning to sparsely select edges with dynamic contextual dependencies.
Experiments on several real-world datasets demonstrate that the proposed model outperforms most state-of-the-art results.
arXiv Detail & Related papers (2021-12-13T02:36:04Z)
- Efficient Mind-Map Generation via Sequence-to-Graph and Reinforced Graph Refinement [21.580450836713577]
A mind-map is a diagram that represents the central concept and key ideas in a hierarchical way.
The existing automatic mind-map generation method extracts the relationship between every pair of sentences to generate a directed semantic graph.
We propose an efficient mind-map generation network that converts a document into a graph via sequence-to-graph.
arXiv Detail & Related papers (2021-09-06T13:41:19Z)
- Joint Graph Learning and Matching for Semantic Feature Correspondence [69.71998282148762]
We propose a joint graph learning and matching network, named GLAM, to explore reliable graph structures for boosting graph matching.
The proposed method is evaluated on three popular visual matching benchmarks (Pascal VOC, Willow Object and SPair-71k).
It outperforms previous state-of-the-art graph matching methods by significant margins on all benchmarks.
arXiv Detail & Related papers (2021-09-01T08:24:02Z)
- Iterative Context-Aware Graph Inference for Visual Dialog [126.016187323249]
We propose a novel Context-Aware Graph (CAG) neural network.
Each node in the graph corresponds to a joint semantic feature, including both object-based (visual) and history-related (textual) context representations.
arXiv Detail & Related papers (2020-04-05T13:09:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.