Event Extraction in Large Language Model
- URL: http://arxiv.org/abs/2512.19537v1
- Date: Mon, 22 Dec 2025 16:22:14 GMT
- Title: Event Extraction in Large Language Model
- Authors: Bobo Li, Xudong Han, Jiang Liu, Yuzhe Ding, Liqiang Jing, Zhaoqi Zhang, Jinheng Li, Xinya Du, Fei Li, Meishan Zhang, Min Zhang, Aixin Sun, Philip S. Yu, Hao Fei
- Abstract summary: We argue that EE should be viewed as a system component that provides a cognitive scaffold for LLM-centered solutions. This survey covers EE in text and multimodal settings, organizing tasks and taxonomy, and tracing method evolution from rule-based and neural models to instruction-driven and generative frameworks.
- Score: 99.94321497574805
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models (LLMs) and multimodal LLMs are changing event extraction (EE): prompting and generation can often produce structured outputs in zero-shot or few-shot settings. Yet LLM-based pipelines face deployment gaps, including hallucinations under weak constraints, fragile temporal and causal linking over long contexts and across documents, and limited long-horizon knowledge management within a bounded context window. We argue that EE should be viewed as a system component that provides a cognitive scaffold for LLM-centered solutions. Event schemas and slot constraints create interfaces for grounding and verification; event-centric structures act as controlled intermediate representations for stepwise reasoning; event links support relation-aware retrieval with graph-based RAG; and event stores offer updatable episodic and agent memory beyond the context window. This survey covers EE in text and multimodal settings, organizing tasks and taxonomy, tracing method evolution from rule-based and neural models to instruction-driven and generative frameworks, and summarizing formulations, decoding strategies, architectures, representations, datasets, and evaluation. We also review cross-lingual, low-resource, and domain-specific settings. Finally, we outline open challenges and future directions central to the LLM era, aiming to evolve EE from static extraction into a structurally reliable, agent-ready perception and memory layer for open-world systems.
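The abstract's claim that "event schemas and slot constraints create interfaces for grounding and verification" can be illustrated with a minimal sketch. The schema below (an "Attack" event type with typed slots) and the `validate_event` helper are hypothetical illustrations, not part of the surveyed paper: they show how a declared schema lets a system check an LLM-extracted event before it enters downstream reasoning or an event store.

```python
# Hypothetical event schema: event types mapped to required, typed slots.
# Slot constraints act as the verification interface for LLM output.
SCHEMA = {
    "Attack": {"attacker": str, "target": str, "time": str},
}

def validate_event(event: dict) -> list[str]:
    """Return a list of constraint violations for an extracted event."""
    errors = []
    etype = event.get("type")
    if etype not in SCHEMA:
        return [f"unknown event type: {etype!r}"]
    for name, expected in SCHEMA[etype].items():
        value = event.get("args", {}).get(name)
        if value is None:
            errors.append(f"missing slot: {name}")
        elif not isinstance(value, expected):
            errors.append(f"slot {name} has wrong type")
    return errors

# An LLM-produced candidate, checked against the schema before use.
candidate = {"type": "Attack", "args": {"attacker": "A", "target": "B"}}
print(validate_event(candidate))  # reports the missing "time" slot
```

In a fuller system this check would gate what enters the event store, so that only schema-conformant events become part of the episodic memory the abstract describes.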
Related papers
- A Simple Yet Strong Baseline for Long-Term Conversational Memory of LLM Agents [4.706565675142042]
LLM-based conversational agents still struggle to maintain coherent, personalized interaction over many sessions. Motivated by neo-Davidsonian event semantics, we propose an event-centric alternative that represents conversational history as short, event-like propositions. Our design aims to preserve information in a non-compressive form and make it more accessible rather than lossier.
arXiv Detail & Related papers (2025-11-21T12:41:17Z) - SCoPE VLM: Selective Context Processing for Efficient Document Navigation in Vision-Language Models [0.0]
Understanding long-context visual information remains a fundamental challenge for vision-language models. We propose SCoPE VLM, a document navigation expert that leverages a novel Chain of Scroll mechanism. SCoPE VLM is the first framework to explicitly model agentic reading patterns in multi-page document question answering.
arXiv Detail & Related papers (2025-10-22T17:47:12Z) - Scaling Beyond Context: A Survey of Multimodal Retrieval-Augmented Generation for Document Understanding [61.36285696607487]
Document understanding is critical for applications from financial analysis to scientific discovery. Current approaches, whether OCR-based pipelines feeding Large Language Models (LLMs) or native Multimodal LLMs (MLLMs), face key limitations. Retrieval-Augmented Generation (RAG) helps ground models in external data, but documents' multimodal nature, combining text, tables, charts, and layout, demands a more advanced paradigm: Multimodal RAG.
arXiv Detail & Related papers (2025-10-17T02:33:16Z) - GRIL: Knowledge Graph Retrieval-Integrated Learning with Large Language Models [59.72897499248909]
We propose a novel graph retriever trained end-to-end with Large Language Models (LLMs). Within the extracted subgraph, structural knowledge and semantic features are encoded via soft tokens and the verbalized graph, respectively, and infused into the LLM together. Our approach consistently achieves state-of-the-art performance, validating the strength of joint graph-LLM optimization for complex reasoning tasks.
arXiv Detail & Related papers (2025-09-20T02:38:00Z) - TransLLM: A Unified Multi-Task Foundation Framework for Urban Transportation via Learnable Prompting [26.764515296168145]
Small-scale deep learning models are task-hungry and data-hungry, limiting their generalizability across diverse scenarios. We propose TransLLM, a unified framework that integrates modeling with large language models through learnable prompt composition. Our approach features a lightweight encoder that captures complex dependencies via dilated temporal convolutions and dual-adjacency graph attention networks.
arXiv Detail & Related papers (2025-08-20T15:27:49Z) - Data Dependency-Aware Code Generation from Enhanced UML Sequence Diagrams [54.528185120850274]
We propose a novel step-by-step code generation framework named API2Dep. First, we introduce an enhanced Unified Modeling Language (UML) API diagram tailored for service-oriented architectures. Second, recognizing the critical role of data flow, we introduce a dedicated data dependency inference task.
arXiv Detail & Related papers (2025-08-05T12:28:23Z) - Beyond Isolated Dots: Benchmarking Structured Table Construction as Deep Knowledge Extraction [80.88654868264645]
The Arranged and Organized Extraction (AOE) benchmark is designed to evaluate the ability of large language models to comprehend fragmented documents. AOE includes 11 carefully crafted tasks across three diverse domains, requiring models to generate context-specific schemas tailored to varied input queries. Results show that even the most advanced models struggled significantly.
arXiv Detail & Related papers (2025-07-22T06:37:51Z) - A Comprehensive Review on Harnessing Large Language Models to Overcome Recommender System Challenges [5.750235776275005]
Large Language Models (LLMs) can be leveraged to tackle key challenges in recommender systems. LLMs enhance personalization, semantic alignment, and interpretability without requiring extensive task-specific supervision. LLMs also enable zero- and few-shot reasoning, allowing systems to operate effectively in cold-start and long-tail scenarios.
arXiv Detail & Related papers (2025-07-17T06:03:57Z) - Can Long-Context Language Models Subsume Retrieval, RAG, SQL, and More? [54.667202878390526]
Long-context language models (LCLMs) have the potential to revolutionize our approach to tasks traditionally reliant on external tools like retrieval systems or databases.
We introduce LOFT, a benchmark of real-world tasks requiring context up to millions of tokens designed to evaluate LCLMs' performance on in-context retrieval and reasoning.
Our findings reveal LCLMs' surprising ability to rival state-of-the-art retrieval and RAG systems, despite never having been explicitly trained for these tasks.
arXiv Detail & Related papers (2024-06-19T00:28:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.