Weakly Supervised Concept Map Generation through Task-Guided Graph
Translation
- URL: http://arxiv.org/abs/2110.15720v2
- Date: Mon, 1 Nov 2021 21:25:55 GMT
- Title: Weakly Supervised Concept Map Generation through Task-Guided Graph
Translation
- Authors: Jiaying Lu, Xiangjue Dong, Carl Yang
- Abstract summary: GT-D2G is an automatic concept map generation framework that leverages generalized NLP pipelines to derive semantic-rich initial graphs.
The quality and interpretability of such concept maps are validated through human evaluation on three real-world corpora.
- Score: 9.203403318435486
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent years have witnessed the rapid development of concept map generation
techniques due to their advantages in providing well-structured summarization
of knowledge from free texts. Traditional unsupervised methods do not generate
task-oriented concept maps, whereas deep generative models require large
amounts of training data. In this work, we present GT-D2G (Graph Translation
based Document-To-Graph), an automatic concept map generation framework that
leverages generalized NLP pipelines to derive semantic-rich initial graphs, and
translates them into more concise structures under the weak supervision of
document labels. The quality and interpretability of such concept maps are
validated through human evaluation on three real-world corpora, and their
utility in the downstream task is further demonstrated in controlled
experiments with scarce document labels.
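For intuition, here is a minimal sketch of the kind of generic NLP-pipeline step the abstract describes, deriving an initial concept graph from free text. The concrete choices (spaCy noun chunks as concepts, sentence-level co-occurrence as weighted edges) are illustrative assumptions, not GT-D2G's actual pipeline.
```python
# Minimal sketch: derive an initial concept graph from free text.
# Assumptions: spaCy noun chunks as concepts, sentence-level
# co-occurrence as weighted edges -- NOT the exact GT-D2G pipeline.
import itertools
import spacy
import networkx as nx

nlp = spacy.load("en_core_web_sm")

def initial_concept_graph(text: str) -> nx.Graph:
    doc = nlp(text)
    graph = nx.Graph()
    for sent in doc.sents:
        # Candidate concepts: lemmatized noun-chunk heads in this sentence.
        concepts = {chunk.root.lemma_.lower() for chunk in sent.noun_chunks}
        graph.add_nodes_from(concepts)
        # Link concepts that co-occur within the same sentence.
        for u, v in itertools.combinations(sorted(concepts), 2):
            w = graph.get_edge_data(u, v, {"weight": 0})["weight"]
            graph.add_edge(u, v, weight=w + 1)
    return graph

g = initial_concept_graph("Concept maps summarize knowledge from free text. "
                          "A concept map links concepts extracted from text.")
print(g.edges(data=True))
```
GT-D2G would then translate such a noisy initial graph into a more concise one under the weak supervision of document labels.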
Related papers
- PPTAgent: Generating and Evaluating Presentations Beyond Text-to-Slides [51.88536367177796]
We propose a two-stage, edit-based approach inspired by human drafts for automatically generating presentations.
PPTAgent first analyzes references to extract slide-level functional types and content schemas, then generates editing actions based on selected reference slides.
PPTAgent significantly outperforms existing automatic presentation generation methods across all three dimensions.
arXiv Detail & Related papers (2025-01-07T16:53:01Z)
- Map2Text: New Content Generation from Low-Dimensional Visualizations [60.02149343347818]
We introduce Map2Text, a novel task that translates spatial coordinates within low-dimensional visualizations into new, coherent, and accurately aligned textual content.
This allows users to explore and navigate undiscovered information embedded in these spatial layouts interactively and intuitively.
arXiv Detail & Related papers (2024-12-24T20:16:13Z)
- TAGExplainer: Narrating Graph Explanations for Text-Attributed Graph Learning Models [14.367754016281934]
This paper presents TAGExplainer, the first method designed to generate natural language explanations for TAG learning.
To address the lack of annotated ground truth explanations in real-world scenarios, we propose first generating pseudo-labels that capture the model's decisions from saliency-based explanations.
The high-quality pseudo-labels are finally utilized to train an end-to-end explanation generator model.
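A toy sketch of the saliency-to-pseudo-label idea: rank neighboring texts by saliency and fill a textual template. The template and top-k cutoff are invented for illustration and are not TAGExplainer's actual procedure.
```python
# Toy sketch of saliency-to-pseudo-label distillation (illustrative only;
# the template and cutoff below are invented, not TAGExplainer's).
def pseudo_explanation(node_texts, saliency, prediction, top_k=3):
    """Turn per-node saliency scores into a templated pseudo explanation."""
    ranked = sorted(zip(node_texts, saliency), key=lambda p: -p[1])[:top_k]
    evidence = "; ".join(f'"{t}" (score {s:.2f})' for t, s in ranked)
    return (f"The model predicts '{prediction}' mainly because of the "
            f"most salient neighboring texts: {evidence}.")

print(pseudo_explanation(
    ["GNNs on text graphs", "random walk kernels", "protein folding"],
    [0.91, 0.55, 0.07],
    prediction="machine learning",
))
```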
arXiv Detail & Related papers (2024-10-20T03:55:46Z)
- A Pure Transformer Pretraining Framework on Text-attributed Graphs [50.833130854272774]
We introduce a feature-centric pretraining perspective by treating graph structure as a prior.
Our framework, Graph Sequence Pretraining with Transformer (GSPT), samples node contexts through random walks.
GSPT can be easily adapted to both node classification and link prediction, demonstrating promising empirical success on various datasets.
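A minimal sketch of sampling a node context with an unbiased random walk, the general idea behind GSPT's context construction; the walk length and plain-Python adjacency format are illustrative assumptions.
```python
# Sketch of random-walk context sampling for a node (the general idea
# behind GSPT's context construction; parameters are illustrative).
import random

def sample_context(adj: dict, start, walk_len: int = 8) -> list:
    """Sample a node context as one unbiased random walk from `start`."""
    walk, node = [start], start
    for _ in range(walk_len - 1):
        neighbors = adj.get(node, [])
        if not neighbors:
            break
        node = random.choice(neighbors)
        walk.append(node)
    return walk

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(sample_context(adj, start=0))
```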
arXiv Detail & Related papers (2024-06-19T22:30:08Z)
- Path-based Explanation for Knowledge Graph Completion [17.541247786437484]
Proper explanations for the results of GNN-based Knowledge Graph Completion models increase model transparency.
Existing practices for explaining KGC tasks rely on instance/subgraph-based approaches.
We propose Power-Link, the first path-based KGC explainer that explores GNN-based models.
arXiv Detail & Related papers (2024-01-04T14:19:37Z)
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) of a pre-trained LM on the downstream task.
We then generate node embeddings using the last hidden states of the fine-tuned LM.
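A minimal sketch of this two-step recipe, assuming a DistilBERT backbone, LoRA as the PEFT method, and mean pooling over tokens; all three are illustrative choices, not necessarily the paper's.
```python
# Sketch of the SimTeG recipe: LoRA fine-tune an LM on node texts, then
# reuse its last hidden states as node features. Model name, LoRA ranks,
# and pooling choice are assumptions for illustration.
import torch
from transformers import AutoModel, AutoTokenizer
from peft import LoraConfig, get_peft_model

tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
lm = AutoModel.from_pretrained("distilbert-base-uncased")
lm = get_peft_model(lm, LoraConfig(r=8, lora_alpha=16,
                                   target_modules=["q_lin", "v_lin"]))
# ... supervised PEFT training on the downstream task would go here ...

@torch.no_grad()
def node_embeddings(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = lm(**batch).last_hidden_state        # (batch, seq, dim)
    mask = batch["attention_mask"].unsqueeze(-1)  # mean-pool over tokens
    return (hidden * mask).sum(1) / mask.sum(1)

emb = node_embeddings(["paper about GNNs", "paper about LMs"])
print(emb.shape)  # torch.Size([2, 768])
```
The resulting embeddings can then be fed to any GNN for node classification or link prediction.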
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
- From the One, Judge of the Whole: Typed Entailment Graph Construction with Predicate Generation [69.91691115264132]
Entailment Graphs (EGs) are constructed to indicate context-independent entailment relations in natural languages.
In this paper, we propose a multi-stage method, Typed Predicate-Entailment Graph Generator (TP-EGG) to tackle this problem.
Experiments on benchmark datasets show that TP-EGG can generate high-quality and scale-controllable entailment graphs.
arXiv Detail & Related papers (2023-06-07T05:46:19Z)
- Knowledge Graph Generation From Text [18.989264255589806]
We propose a novel end-to-end Knowledge Graph (KG) generation system from textual inputs.
The graph nodes are generated first using a pretrained language model, followed by a simple edge construction head.
We evaluated the model on the recent WebNLG 2020 Challenge dataset, matching state-of-the-art performance on the text-to-RDF generation task.
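To make the "simple edge construction head" concrete, here is a hedged sketch that scores every pair of generated node embeddings with a bilinear layer; the bilinear head and dimensions are illustrative stand-ins, not the paper's exact module.
```python
# Sketch of the two-step idea: nodes proposed by a (pretrained) LM, then
# a simple edge head scores every node pair. The bilinear head here is
# an illustrative stand-in, not the paper's exact module.
import torch
import torch.nn as nn

class EdgeHead(nn.Module):
    def __init__(self, dim: int, num_relations: int):
        super().__init__()
        self.bilinear = nn.Bilinear(dim, dim, num_relations)

    def forward(self, node_emb: torch.Tensor) -> torch.Tensor:
        n, d = node_emb.shape
        src = node_emb.unsqueeze(1).expand(n, n, d).reshape(-1, d)
        dst = node_emb.unsqueeze(0).expand(n, n, d).reshape(-1, d)
        # Logits over relation types (incl. "no edge") for each pair.
        return self.bilinear(src, dst).view(n, n, -1)

head = EdgeHead(dim=768, num_relations=5)
scores = head(torch.randn(4, 768))   # 4 generated node embeddings
print(scores.argmax(-1))             # predicted relation per node pair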
arXiv Detail & Related papers (2022-11-18T21:27:13Z)
- Improving Knowledge Graph Representation Learning by Structure Contextual Pre-training [9.70121995251553]
We propose a novel pre-training-then-fine-tuning framework for knowledge graph representation learning.
A KG model is pre-trained with a triple classification task, followed by discriminative fine-tuning on specific downstream tasks.
Experimental results demonstrate that fine-tuned SCoP not only outperforms baselines on a portfolio of downstream tasks but also avoids tedious task-specific model design and parameter training.
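A minimal sketch of what a triple classification pre-training objective can look like: score (head, relation, tail) triples as plausible versus corrupted. The embedding sizes and MLP scorer are assumptions, not SCoP's architecture.
```python
# Sketch of a triple-classification pre-training objective: score (h, r, t)
# as plausible vs. corrupted. Sizes and the scorer are illustrative,
# not SCoP's exact architecture.
import torch
import torch.nn as nn

class TripleClassifier(nn.Module):
    def __init__(self, n_ent, n_rel, dim=200):
        super().__init__()
        self.ent = nn.Embedding(n_ent, dim)
        self.rel = nn.Embedding(n_rel, dim)
        self.clf = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU(),
                                 nn.Linear(dim, 1))

    def forward(self, h, r, t):
        x = torch.cat([self.ent(h), self.rel(r), self.ent(t)], dim=-1)
        return self.clf(x).squeeze(-1)  # plausibility logit per triple

model = TripleClassifier(n_ent=1000, n_rel=50)
h, r, t = torch.tensor([3]), torch.tensor([7]), torch.tensor([42])
corrupt_t = torch.randint(0, 1000, (1,))  # negative sample: corrupt tail
loss = nn.functional.binary_cross_entropy_with_logits(
    torch.cat([model(h, r, t), model(h, r, corrupt_t)]),
    torch.tensor([1.0, 0.0]))
print(loss.item())
```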
arXiv Detail & Related papers (2021-12-08T02:50:54Z)
- Promoting Graph Awareness in Linearized Graph-to-Text Generation [72.83863719868364]
We study the ability of linearized models to encode local graph structures.
Our findings motivate denoising scaffold objectives that enrich the quality of models' implicit graph encodings.
We find that these denoising scaffolds lead to substantial improvements in downstream generation in low-resource settings.
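A toy sketch of a denoising scaffold in this spirit: corrupt a linearized graph and train the model to reconstruct the original. The mask rate and linearization format are illustrative assumptions, not the paper's exact objective.
```python
# Illustrative "denoising scaffold": corrupt a linearized graph and ask
# the model to reconstruct it (a generic masked-denoising setup, not the
# paper's exact objective).
import random

def corrupt_linearized_graph(tokens, mask_rate=0.3, mask_token="<mask>"):
    """Randomly mask tokens of a linearized graph; target is the original."""
    corrupted = [mask_token if random.random() < mask_rate else t
                 for t in tokens]
    return corrupted, tokens

linearized = "<H> GT-D2G <R> generates <T> concept maps".split()
src, tgt = corrupt_linearized_graph(linearized)
print(" ".join(src), "->", " ".join(tgt))
```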
arXiv Detail & Related papers (2020-12-31T18:17:57Z)
- KGPT: Knowledge-Grounded Pre-Training for Data-to-Text Generation [100.79870384880333]
We propose a knowledge-grounded pre-training (KGPT) to generate knowledge-enriched text.
We adopt three settings, namely fully-supervised, zero-shot, and few-shot, to evaluate its effectiveness.
Under the zero-shot setting, our model achieves over 30 ROUGE-L on WebNLG while all other baselines fail.
arXiv Detail & Related papers (2020-10-05T19:59:05Z)
- Deep Graph Contrastive Representation Learning [23.37786673825192]
We propose a novel framework for unsupervised graph representation learning by leveraging a contrastive objective at the node level.
Specifically, we generate two graph views by corruption and learn node representations by maximizing the agreement of node representations in these two views.
We perform empirical experiments on both transductive and inductive learning tasks using a variety of real-world datasets.
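A compact sketch of the two-view recipe, assuming edge dropping and feature masking as the corruptions and an InfoNCE-style agreement objective; the linear encoder below is a stand-in for a GNN, and all rates are illustrative.
```python
# Sketch of the two-view contrastive recipe: corrupt the graph twice,
# embed both views, and maximize agreement with an InfoNCE loss.
import torch
import torch.nn.functional as F

def corrupt(x, edge_index, drop_feat=0.2, drop_edge=0.2):
    x = x * (torch.rand_like(x) > drop_feat)            # mask features
    keep = torch.rand(edge_index.size(1)) > drop_edge   # drop edges
    return x, edge_index[:, keep]

def info_nce(z1, z2, tau=0.5):
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau          # node i in view 1 vs. all of view 2
    labels = torch.arange(z1.size(0))   # positive pair: same node, other view
    return F.cross_entropy(logits, labels)

x = torch.randn(5, 16)
edge_index = torch.randint(0, 5, (2, 20))
encoder = torch.nn.Linear(16, 8)        # stand-in for a GNN encoder
z1 = encoder(corrupt(x, edge_index)[0])
z2 = encoder(corrupt(x, edge_index)[0])
print(info_nce(z1, z2).item())
```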
arXiv Detail & Related papers (2020-06-07T11:50:45Z)
- Structure-Augmented Text Representation Learning for Efficient Knowledge Graph Completion [53.31911669146451]
Human-curated knowledge graphs provide critical supportive information to various natural language processing tasks.
These graphs are usually incomplete, motivating automatic completion.
Graph embedding approaches, e.g., TransE, learn structured knowledge by representing graph elements as dense embeddings.
Textual encoding approaches, e.g., KG-BERT, resort to graph triples' texts and triple-level contextualized representations.
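For reference, the TransE intuition mentioned above scores a triple (h, r, t) by the distance ||h + r - t||: a triple is plausible when the head embedding translated by the relation lands near the tail. A minimal sketch:
```python
# Sketch of TransE scoring: a triple (h, r, t) is plausible when
# h + r is close to t in embedding space.
import torch

def transe_score(h, r, t, p=1):
    """Lower distance ||h + r - t||_p means a more plausible triple."""
    return torch.norm(h + r - t, p=p, dim=-1)

dim = 50
h, r = torch.randn(dim), torch.randn(dim)
t_true, t_fake = h + r + 0.01 * torch.randn(dim), torch.randn(dim)
print(transe_score(h, r, t_true) < transe_score(h, r, t_fake))  # tensor(True)
```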
arXiv Detail & Related papers (2020-04-30T13:50:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.