Hi-ArG: Exploring the Integration of Hierarchical Argumentation Graphs in Language Pretraining
- URL: http://arxiv.org/abs/2312.00874v1
- Date: Fri, 1 Dec 2023 19:03:38 GMT
- Title: Hi-ArG: Exploring the Integration of Hierarchical Argumentation Graphs in Language Pretraining
- Authors: Jingcong Liang, Rong Ye, Meng Han, Qi Zhang, Ruofei Lai, Xinyu Zhang, Zhao Cao, Xuanjing Huang, Zhongyu Wei
- Abstract summary: We propose the Hierarchical Argumentation Graph (Hi-ArG), a new structure to organize arguments.
We also introduce two approaches to exploit Hi-ArG: a text-graph multi-modal model, GreaseArG, and a pre-training framework augmented with graph information.
- Score: 62.069374456021016
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A knowledge graph is a structure for storing and representing
knowledge, and recent studies have discussed its ability to assist language
models in various applications. Some variants of knowledge graphs aim to
record arguments and their relations for computational argumentation tasks.
However, many must simplify semantic types to fit specific schemas, losing
flexibility and expressive power. In this paper, we propose the Hierarchical
Argumentation Graph (Hi-ArG), a new structure for organizing arguments. We
also introduce two approaches to exploit Hi-ArG: a text-graph multi-modal
model, GreaseArG, and a pre-training framework augmented with graph
information. Experiments on two argumentation tasks show that, after further
pre-training and fine-tuning, GreaseArG surpasses language models of the same
scale on these tasks, while incorporating graph information during further
pre-training also improves the performance of vanilla language models. Code
for this paper is available at https://github.com/ljcleo/Hi-ArG .
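To make the structure concrete, here is a minimal sketch of a hierarchical argumentation graph in Python. The node levels and relation names are illustrative assumptions drawn from the abstract, not the exact schema defined in the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    text: str     # surface text this node covers
    level: str    # assumed levels, e.g. "proposition" or "argument"

@dataclass
class Edge:
    source: str
    target: str
    relation: str # assumed relations, e.g. "support", "attack", "part-of"

@dataclass
class HiArG:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)

    def add_node(self, node: Node) -> None:
        self.nodes[node.node_id] = node

    def link(self, source: str, target: str, relation: str) -> None:
        assert source in self.nodes and target in self.nodes
        self.edges.append(Edge(source, target, relation))

# Example: an argument supported by a proposition-level premise.
g = HiArG()
g.add_node(Node("a1", "School uniforms should be mandatory.", "argument"))
g.add_node(Node("p1", "Uniforms reduce peer pressure.", "proposition"))
g.link("p1", "a1", "support")
```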
Related papers
- G-Retriever: Retrieval-Augmented Generation for Textual Graph Understanding and Question Answering [61.93058781222079]
We develop a flexible question-answering framework targeting real-world textual graphs.
We introduce the first retrieval-augmented generation (RAG) approach for general textual graphs.
G-Retriever performs RAG over a graph by formulating subgraph retrieval as a Prize-Collecting Steiner Tree optimization problem; a greedy sketch of this idea follows below.
arXiv Detail & Related papers (2024-02-12T13:13:04Z)
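The Prize-Collecting Steiner Tree formulation assigns each node a prize (its relevance to the query) and each edge a cost, then seeks a connected subgraph that maximizes total prize minus cost. Below is a hedged greedy approximation of that idea; the uniform edge cost and toy data are assumptions, and G-Retriever itself uses learned relevance scores and a proper PCST solver.

```python
def greedy_pcst(prizes, edges, edge_cost=1.0):
    """prizes: {node: relevance}; edges: iterable of (u, v) pairs."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    tree = {max(prizes, key=prizes.get)}  # seed with most relevant node
    improved = True
    while improved:
        improved = False
        frontier = {n for t in tree for n in adj.get(t, ()) if n not in tree}
        for cand in sorted(frontier, key=lambda n: -prizes.get(n, 0.0)):
            # Grow the tree only when the node's prize pays for its edge.
            if prizes.get(cand, 0.0) > edge_cost:
                tree.add(cand)
                improved = True
    return tree

prizes = {"q": 3.0, "a": 2.5, "b": 0.2, "c": 1.8}
edges = [("q", "a"), ("a", "b"), ("a", "c")]
print(greedy_pcst(prizes, edges))  # -> nodes q, a, c
```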
- When Graph Data Meets Multimodal: A New Paradigm for Graph Understanding and Reasoning [54.84870836443311]
The paper presents a new paradigm for understanding and reasoning about graph data by integrating image encoding and multimodal technologies.
This approach enables the comprehension of graph data through an instruction-response format, utilizing GPT-4V's advanced capabilities.
The study evaluates this paradigm on various graph types, highlighting the model's strengths and weaknesses, particularly in Chinese OCR performance and complex reasoning tasks; a minimal example of the render-then-ask pattern appears below.
arXiv Detail & Related papers (2023-12-16T08:14:11Z)
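A minimal sketch of that pattern: draw a graph to an image, then pair it with a natural-language instruction for a multimodal model. The prompt wording and file name are assumptions, and the actual call to a model such as GPT-4V is omitted.

```python
import networkx as nx
import matplotlib
matplotlib.use("Agg")  # headless rendering
import matplotlib.pyplot as plt

# Encode the graph as an image.
G = nx.Graph([("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")])
nx.draw(G, with_labels=True, node_color="lightgray")
plt.savefig("graph.png")

# Pair the image with an instruction for the multimodal model.
instruction = (
    "The attached image shows an undirected graph. "
    "Does it contain a cycle? If so, list the nodes on the cycle."
)
```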
- GRENADE: Graph-Centric Language Model for Self-Supervised Representation Learning on Text-Attributed Graphs [22.282756544376493]
We develop a novel Graph-Centric Language model, GRENADE, to solve the problem of self-supervised representation learning on text-attributed graphs.
GRENADE exploits the synergy between a pre-trained language model and a graph neural network by optimizing two specialized self-supervised learning objectives; a simplified alignment sketch appears below.
The proposed graph-centric self-supervised learning algorithms effectively help GRENADE to capture informative textual semantics as well as structural context information on text-attributed graphs.
arXiv Detail & Related papers (2023-10-23T17:18:35Z)
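One way to realize such a synergy is to align the language-model view and the GNN view of the same nodes with a contrastive loss. The following is a simplified InfoNCE-style alignment in PyTorch, a stand-in rather than GRENADE's exact objectives.

```python
import torch
import torch.nn.functional as F

def alignment_loss(text_emb, graph_emb, temperature=0.1):
    """text_emb, graph_emb: (num_nodes, dim) views of the same nodes."""
    t = F.normalize(text_emb, dim=-1)
    g = F.normalize(graph_emb, dim=-1)
    logits = t @ g.T / temperature     # pairwise similarities
    targets = torch.arange(t.size(0))  # row i matches column i
    # Symmetric cross-entropy: text-to-graph plus graph-to-text.
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.T, targets)) / 2

text_emb = torch.randn(8, 64)   # e.g. PLM [CLS] vectors per node
graph_emb = torch.randn(8, 64)  # e.g. GNN outputs per node
print(alignment_loss(text_emb, graph_emb).item())
```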
- GraphGPT: Graph Instruction Tuning for Large Language Models [27.036935149004726]
Graph Neural Networks (GNNs) have evolved to understand graph structures.
To enhance robustness, self-supervised learning (SSL) has become a vital tool for data augmentation.
Our research tackles this gap by advancing graph model generalization in zero-shot learning environments; a sketch of the instruction format is given below.
arXiv Detail & Related papers (2023-10-19T06:17:46Z)
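Graph instruction tuning turns graph structure into instruction-response training pairs for a large language model. A rough sketch with an assumed template and toy labels:

```python
def make_instruction_example(node, neighbors, label):
    """Serialize a labeled node and its neighborhood as a training pair."""
    instruction = (
        f"Node '{node}' is connected to: {', '.join(neighbors)}. "
        "Classify the topic of this node."
    )
    return {"instruction": instruction, "response": label}

example = make_instruction_example(
    node="paper_42",
    neighbors=["paper_7", "paper_19"],
    label="graph neural networks",
)
print(example["instruction"])
```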
- One for All: Towards Training One Graph Model for All Classification Tasks [61.656962278497225]
A unified model for various graph tasks remains underexplored, primarily due to the challenges unique to the graph learning domain.
We propose One for All (OFA), the first general framework that can use a single graph model to address these challenges.
OFA performs well across different tasks, making it the first general-purpose, cross-domain classification model on graphs; a sketch of its unified input format is given below.
arXiv Detail & Related papers (2023-09-29T21:15:26Z)
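The key idea is to describe every node and every task in natural language so that one model can serve many graph tasks. A hedged sketch of such a unified input, with field names that are assumptions:

```python
def build_unified_input(task_description, node_texts, edges, target_nodes):
    """Cast a classification task into one text-attributed graph format."""
    return {
        "prompt_node": task_description,  # the task, stated in plain text
        "nodes": node_texts,              # all nodes carry text attributes
        "edges": edges,
        "targets": target_nodes,          # nodes the prediction focuses on
    }

sample = build_unified_input(
    task_description="Classify the research area of the target paper.",
    node_texts={"n0": "A paper about GNNs.", "n1": "A paper about parsing."},
    edges=[("n0", "n1")],
    target_nodes=["n0"],
)
print(sample["prompt_node"])
```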
- Enhancing Dialogue Generation via Dynamic Graph Knowledge Aggregation [23.54754465832362]
In conventional graph neural networks (GNNs), message passing on a graph is independent of the text.
This training regime leads to a semantic gap between graph knowledge and text.
We propose a novel framework for knowledge-graph-enhanced dialogue generation; a generic text-graph fusion sketch is given below.
arXiv Detail & Related papers (2023-06-28T13:21:00Z)
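One generic way to close that gap is to let the dialogue state attend over knowledge-graph node embeddings before decoding, so graph aggregation is conditioned on the text. This cross-attention fusion is a stand-in sketch, not the paper's exact model.

```python
import torch
import torch.nn.functional as F

def fuse_graph_knowledge(dialogue_state, node_embs):
    """dialogue_state: (dim,); node_embs: (num_nodes, dim)."""
    scores = node_embs @ dialogue_state  # relevance of each graph node
    weights = F.softmax(scores, dim=0)
    graph_context = weights @ node_embs  # text-conditioned knowledge vector
    # The concatenation would feed the response decoder.
    return torch.cat([dialogue_state, graph_context])

state = torch.randn(128)
nodes = torch.randn(10, 128)
print(fuse_graph_knowledge(state, nodes).shape)  # torch.Size([256])
```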
- KGLM: Integrating Knowledge Graph Structure in Language Models for Link Prediction [0.0]
We introduce a new entity/relation embedding layer that learns to differentiate distinctive entity and relation types.
We show that further pre-training language models with this additional embedding layer on triples extracted from the knowledge graph, followed by standard fine-tuning, sets a new state of the art for link prediction on the benchmark datasets; a sketch of such a type-aware layer appears below.
arXiv Detail & Related papers (2022-11-04T20:38:12Z)
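A minimal sketch of an entity/relation-aware embedding layer in this spirit: each token is tagged as ordinary text, entity, or relation, and a learned type vector is added to its token embedding. The type inventory and sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TypeAwareEmbedding(nn.Module):
    TYPES = {"text": 0, "entity": 1, "relation": 2}

    def __init__(self, vocab_size=30522, dim=768):
        super().__init__()
        self.tokens = nn.Embedding(vocab_size, dim)
        self.types = nn.Embedding(len(self.TYPES), dim)

    def forward(self, token_ids, type_ids):
        # Token identity plus entity/relation type information.
        return self.tokens(token_ids) + self.types(type_ids)

emb = TypeAwareEmbedding()
token_ids = torch.tensor([[101, 2054, 2003]])  # arbitrary vocabulary ids
type_ids = torch.tensor([[0, 1, 2]])           # text, entity, relation
print(emb(token_ids, type_ids).shape)          # torch.Size([1, 3, 768])
```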
- GAP: A Graph-aware Language Model Framework for Knowledge Graph-to-Text Generation [3.593955557310285]
Recent improvements in KG-to-text generation are due to auxiliary pre-training tasks designed to boost performance on the fine-tuning task.
Here, we demonstrate that by fusing graph-aware elements into existing pre-trained language models, we are able to outperform state-of-the-art models and close the gap imposed by additional pre-training tasks; one such graph-aware element is sketched below.
arXiv Detail & Related papers (2022-04-13T23:53:37Z)
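One concrete graph-aware element such a framework can fuse into a language model is an attention mask derived from triple connectivity over the linearized graph, so that only related entities attend to each other. The sketch below is a simplification under that assumption.

```python
import torch

def graph_aware_mask(triples, entities):
    """Boolean attention mask: True where attention is allowed."""
    idx = {e: i for i, e in enumerate(entities)}
    n = len(entities)
    mask = torch.eye(n, dtype=torch.bool)  # every entity sees itself
    for head, _rel, tail in triples:
        mask[idx[head], idx[tail]] = True  # connected entities may
        mask[idx[tail], idx[head]] = True  # attend to each other
    return mask

entities = ["Alan_Turing", "London", "Enigma"]
triples = [("Alan_Turing", "born_in", "London"),
           ("Alan_Turing", "worked_on", "Enigma")]
print(graph_aware_mask(triples, entities).int())
```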
- Explanation Graph Generation via Pre-trained Language Models: An Empirical Study with Contrastive Learning [84.35102534158621]
We study pre-trained language models that generate explanation graphs in an end-to-end manner.
We propose simple yet effective graph perturbations via node and edge edit operations; one such edit is sketched below.
Our methods lead to significant improvements in both structural and semantic accuracy of explanation graphs.
arXiv Detail & Related papers (2022-04-11T00:58:27Z)
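A small sketch of one such edit operation: rewiring an edge of the gold graph to produce a structurally corrupted negative for contrastive learning. Real edits would be constrained to remain plausible; this random version is illustrative only.

```python
import random

def edge_edit(nodes, edges, rng=random.Random(0)):
    """Return a copy of `edges` with one edge rewired to a wrong node."""
    edges = list(edges)
    i = rng.randrange(len(edges))
    head, tail = edges[i]
    candidates = [n for n in nodes if n not in (head, tail)]
    edges[i] = (head, rng.choice(candidates))  # corrupt one endpoint
    return edges

gold_nodes = ["rain", "wet roads", "accidents"]
gold_edges = [("rain", "wet roads"), ("wet roads", "accidents")]
print(edge_edit(gold_nodes, gold_edges))
```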
- Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks that learn over raw text with guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training; the entity masking scheme is sketched below.
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
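A simple sketch of the entity masking idea: instead of masking random subword tokens, mask every token of a whole entity mention. Entity spans are supplied directly here as an assumption; in practice they would come from linking the text against the knowledge graph.

```python
def mask_entities(tokens, entity_spans, mask_token="[MASK]"):
    """Mask whole entity spans; spans are half-open [start, end)."""
    tokens = list(tokens)
    for start, end in entity_spans:
        for i in range(start, end):
            tokens[i] = mask_token
    return tokens

tokens = ["Turing", "was", "born", "in", "London", "."]
spans = [(0, 1), (4, 5)]  # "Turing" and "London"
print(mask_entities(tokens, spans))
# ['[MASK]', 'was', 'born', 'in', '[MASK]', '.']
```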