Cascading Large Language Models for Salient Event Graph Generation
- URL: http://arxiv.org/abs/2406.18449v2
- Date: Sat, 08 Feb 2025 17:23:18 GMT
- Title: Cascading Large Language Models for Salient Event Graph Generation
- Authors: Xingwei Tan, Yuxiang Zhou, Gabriele Pergola, Yulan He
- Abstract summary: CALLMSAE is a CAscading Large Language Model framework for SAlient Event graph generation.
We first identify salient events by prompting LLMs to generate summaries.
We then develop an iterative code refinement prompting strategy to generate event relation graphs.
Powered by CALLMSAE, we present NYT-SEG, a large-scale, automatically annotated event graph dataset.
- Abstract: Generating event graphs from long documents is challenging due to the inherent complexity of the multiple tasks involved, such as detecting events, identifying their relationships, and reconciling unstructured input with structured graphs. Recent studies typically treat all events as equally important, failing to distinguish the salient events crucial for understanding narratives. This paper presents CALLMSAE, a CAscading Large Language Model framework for SAlient Event graph generation, which leverages the capabilities of LLMs and eliminates the need for costly human annotations. We first prompt LLMs to generate summaries, from which salient events are identified. Next, we develop an iterative code refinement prompting strategy to generate event relation graphs, removing hallucinated relations and recovering missing edges. Powered by CALLMSAE, we present NYT-SEG, a large-scale, automatically annotated event graph dataset which can serve as a source of distant supervision signals. Contextualised graph generation models fine-tuned on NYT-SEG outperform models trained on CAEVO data. Results on a human-annotated test set show that the proposed method generates salient and more accurate graphs, outperforming competitive baselines.
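The cascading pipeline described in the abstract (summary-based salient event identification followed by iterative code refinement prompting) can be pictured with the minimal sketch below. This is not the authors' released code: the `chat` client, all prompt templates, and the `parse_edges` helper are assumptions introduced purely for illustration.

```python
# Hypothetical sketch of a cascading LLM pipeline for salient event graph
# generation, loosely mirroring the abstract above. `chat` stands in for any
# LLM chat-completion call; every prompt and helper here is an assumption.
import re

def chat(prompt: str) -> str:
    """Placeholder for an LLM chat-completion call."""
    raise NotImplementedError

def salient_events(document: str) -> list[str]:
    # Stage 1: summarise the document, then read salient events off the summary.
    summary = chat(f"Summarize the key events in this document:\n{document}")
    listing = chat(f"List each distinct event in this summary, one per line:\n{summary}")
    return [ln.strip("- ").strip() for ln in listing.splitlines() if ln.strip()]

def parse_edges(code: str) -> set[tuple[str, str, str]]:
    # Naive parser for generated lines like:
    #   graph.add_edge("e1", "e2", relation="before")
    pattern = r'add_edge\("([^"]+)",\s*"([^"]+)",\s*relation="([^"]+)"\)'
    return set(re.findall(pattern, code))

def build_graph(document: str, rounds: int = 3) -> set[tuple[str, str, str]]:
    # Stage 2: iterative code refinement prompting. The LLM emits
    # graph-building code; each round feeds its previous attempt back so it
    # can drop hallucinated relations and recover missing edges.
    events = salient_events(document)
    code = chat(f"Write Python graph.add_edge(...) calls encoding the relations "
                f"among the events {events} in this document:\n{document}")
    for _ in range(rounds):
        code = chat(f"Document:\n{document}\nEvents: {events}\n"
                    f"Current graph code:\n{code}\n"
                    f"Remove relations unsupported by the document, add any "
                    f"missing edges, and return the corrected code only.")
    return parse_edges(code)
```

The fixed round count and the add_edge encoding are simplifications; the actual prompting strategy and stopping criteria are specified in the paper itself.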
Related papers
- Revisiting Graph Neural Networks on Graph-level Tasks: Comprehensive Experiments, Analysis, and Improvements [54.006506479865344]
We propose a unified evaluation framework for graph-level Graph Neural Networks (GNNs).
This framework provides a standardized setting to evaluate GNNs across diverse datasets.
We also propose a novel GNN model with enhanced expressivity and generalization capabilities.
arXiv Detail & Related papers (2025-01-01T08:48:53Z)
- Instance-Aware Graph Prompt Learning [71.26108600288308]
We introduce Instance-Aware Graph Prompt Learning (IA-GPL) in this paper.
The process involves generating intermediate prompts for each instance using a lightweight architecture.
Experiments conducted on multiple datasets and settings showcase the superior performance of IA-GPL compared to state-of-the-art baselines.
arXiv Detail & Related papers (2024-11-26T18:38:38Z)
- RAGraph: A General Retrieval-Augmented Graph Learning Framework [35.25522856244149]
We introduce a novel framework called General Retrieval-Augmented Graph Learning (RAGraph).
RAGraph brings external graph data into the general graph foundation model to improve model generalization on unseen scenarios.
During inference, RAGraph adeptly retrieves similar toy graphs based on key similarities in downstream tasks.
arXiv Detail & Related papers (2024-10-31T12:05:21Z)
- Let's Ask GNN: Empowering Large Language Model for Graph In-Context Learning [28.660326096652437]
We introduce AskGNN, a novel approach that bridges the gap between sequential text processing and graph-structured data.
AskGNN employs a Graph Neural Network (GNN)-powered structure-enhanced retriever to select labeled nodes across graphs.
Experiments across three tasks and seven LLMs demonstrate AskGNN's superior effectiveness in graph task performance.
arXiv Detail & Related papers (2024-10-09T17:19:12Z)
- A Pure Transformer Pretraining Framework on Text-attributed Graphs [50.833130854272774]
We introduce a feature-centric pretraining perspective by treating graph structure as a prior.
Our framework, Graph Sequence Pretraining with Transformer (GSPT), samples node contexts through random walks.
GSPT can be easily adapted to both node classification and link prediction, demonstrating promising empirical success on various datasets.
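As a hedged aside on the mechanism named above, "sampling node contexts through random walks" generally looks like the sketch below; this is a generic illustration over an assumed adjacency-list graph, not GSPT's actual implementation.

```python
import random

def random_walk_contexts(adj: dict[int, list[int]], walk_len: int = 8,
                         walks_per_node: int = 4, seed: int = 0) -> list[list[int]]:
    # Generic random-walk context sampling (an assumption, not GSPT's code):
    # each fixed-length walk yields a node sequence that a Transformer can
    # consume as a pretraining "sentence".
    rng = random.Random(seed)
    walks = []
    for start in adj:
        for _ in range(walks_per_node):
            walk, node = [start], start
            for _ in range(walk_len - 1):
                neighbors = adj.get(node)
                if not neighbors:
                    break  # dead end: end the walk early
                node = rng.choice(neighbors)
                walk.append(node)
            walks.append(walk)
    return walks

# Example on a 4-node path graph 0-1-2-3
print(random_walk_contexts({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}, walk_len=4))
```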
arXiv Detail & Related papers (2024-06-19T22:30:08Z)
- Prompt-based Graph Model for Joint Liberal Event Extraction and Event Schema Induction [1.3154296174423619]
Events are essential components of speech and texts, describing the changes in the state of entities.
The event extraction task aims to identify and classify events and find their participants according to event schemas.
The researchers propose Liberal Event Extraction (LEE), which aims to extract events and discover event schemas simultaneously.
arXiv Detail & Related papers (2024-03-19T07:56:42Z)
- Disentangled Representation Learning with Large Language Models for Text-Attributed Graphs [57.052160123387104]
We present the Disentangled Graph-Text Learner (DGTL) model, which enhances the reasoning and prediction capabilities of LLMs for text-attributed graphs (TAGs).
Our proposed DGTL model incorporates graph structure information through tailored disentangled graph neural network (GNN) layers.
Experimental evaluations demonstrate that the proposed DGTL model achieves performance superior or comparable to state-of-the-art baselines.
arXiv Detail & Related papers (2023-10-27T14:00:04Z)
- Leveraging Large Language Models for Node Generation in Few-Shot Learning on Text-Attributed Graphs [5.587264586806575]
We propose a plug-and-play approach to empower text-attributed graphs through node generation using Large Language Models (LLMs).
LLMs extract semantic information from labels and generate samples belonging to those categories as exemplars.
We employ an edge predictor to capture structural information inherent in the raw dataset and integrate the newly generated samples into the original graph.
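For intuition only, the edge-predictor integration step described above might be approximated by a similarity threshold over node embeddings, as in the sketch below; the cosine-similarity rule, threshold, and array layout are assumptions, not the paper's learned predictor.

```python
import numpy as np

def integrate_generated_nodes(emb: np.ndarray, gen_emb: np.ndarray,
                              threshold: float = 0.8) -> list[tuple[int, int]]:
    # emb:     (n, d) embeddings of original graph nodes
    # gen_emb: (m, d) embeddings of LLM-generated nodes
    # Returns edges (original_idx, generated_idx) whose cosine similarity
    # exceeds `threshold`; a learned edge predictor would replace this rule.
    a = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    b = gen_emb / np.linalg.norm(gen_emb, axis=1, keepdims=True)
    sim = a @ b.T  # (n, m) pairwise cosine similarities
    rows, cols = np.where(sim > threshold)
    return list(zip(rows.tolist(), cols.tolist()))
```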
arXiv Detail & Related papers (2023-10-15T16:04:28Z)
- Scene Graph Modification as Incremental Structure Expanding [61.84291817776118]
We focus on scene graph modification (SGM), where the system is required to learn how to update an existing scene graph based on a natural language query.
We frame SGM as a graph expansion task by introducing incremental structure expanding (ISE).
We construct a challenging dataset that contains more complicated queries and larger scene graphs than existing datasets.
arXiv Detail & Related papers (2022-09-15T16:26:14Z)
- Semi-Supervised Graph Attention Networks for Event Representation Learning [0.0]
This paper presents GNEE (GAT Neural Event Embeddings), a method that combines Graph Attention Networks and Graph Regularization.
A statistical analysis of experimental results with five real-world event graphs and six graph embedding methods shows that our GNEE outperforms state-of-the-art semi-supervised graph embedding methods.
arXiv Detail & Related papers (2022-01-02T14:38:28Z)
- Neural Language Modeling for Contextualized Temporal Graph Generation [49.21890450444187]
This paper presents the first study on using large-scale pre-trained language models for automated generation of an event-level temporal graph for a document.
arXiv Detail & Related papers (2020-10-20T07:08:00Z)