Timeline-based Sentence Decomposition with In-Context Learning for Temporal Fact Extraction
- URL: http://arxiv.org/abs/2405.10288v3
- Date: Tue, 18 Jun 2024 08:22:29 GMT
- Title: Timeline-based Sentence Decomposition with In-Context Learning for Temporal Fact Extraction
- Authors: Jianhao Chen, Haoyuan Ouyang, Junyang Ren, Wentao Ding, Wei Hu, Yuzhong Qu
- Abstract summary: In this paper, we specifically address the extraction of temporal facts from natural language text.
We propose a timeline-based sentence decomposition strategy using large language models (LLMs) with in-context learning.
Our experiments show that TSDRE achieves state-of-the-art results on both HyperRED-Temporal and ComplexTRED datasets.
- Score: 19.96263282146533
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Fact extraction is pivotal for constructing knowledge graphs. Recently, the increasing demand for temporal facts in downstream tasks has led to the emergence of temporal fact extraction as a task in its own right. In this paper, we specifically address the extraction of temporal facts from natural language text. Previous studies fail to handle the challenge of establishing time-to-fact correspondences in complex sentences. To overcome this hurdle, we propose a timeline-based sentence decomposition strategy using large language models (LLMs) with in-context learning, ensuring a fine-grained understanding of the timeline associated with various facts. In addition, we evaluate the performance of LLMs on direct temporal fact extraction and obtain unsatisfactory results. To address this, we introduce TSDRE, a method that incorporates the decomposition capabilities of LLMs into the traditional fine-tuning of smaller pre-trained language models (PLMs). To support the evaluation, we construct ComplexTRED, a complex temporal fact extraction dataset. Our experiments show that TSDRE achieves state-of-the-art results on both the HyperRED-Temporal and ComplexTRED datasets.
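Below is a minimal sketch of the timeline-based decomposition step described in the abstract, assuming a generic text-completion backend. The prompt wording, the few-shot example, and the `complete` callable are illustrative assumptions, not the authors' actual prompts or the full TSDRE pipeline.

```python
# Sketch: timeline-based sentence decomposition with in-context learning.
# The prompt format and the example below are assumptions for illustration.
from typing import Callable

FEW_SHOT = """Sentence: Barack Obama served as a U.S. senator from 2005 to 2008 and as president from 2009 to 2017.
Decomposition:
- [2005, 2008] Barack Obama served as a U.S. senator.
- [2009, 2017] Barack Obama served as president.
"""

def decompose_by_timeline(sentence: str, complete: Callable[[str], str]) -> list[str]:
    """Ask an LLM to split a complex sentence into time-scoped simple
    sentences, one per line, so a smaller PLM can extract one fact each."""
    prompt = (
        "Decompose the sentence into simple sentences, each prefixed with "
        "the time interval during which it holds.\n\n"
        + FEW_SHOT
        + "\nSentence: " + sentence + "\nDecomposition:\n"
    )
    reply = complete(prompt)  # any LLM completion function can be plugged in
    return [ln.lstrip("- ").strip() for ln in reply.splitlines() if ln.strip()]
```

Each returned line pairs a time interval with a simple sentence; per the abstract, the actual temporal fact extraction is then handled by a fine-tuned smaller PLM rather than the LLM itself.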
Related papers
- Understanding Synthetic Context Extension via Retrieval Heads [51.8869530817334]
We investigate fine-tuning on synthetic data for three long-context tasks that require retrieval and reasoning.
We find that models trained on synthetic data fall short of those trained on real data, but, surprisingly, the mismatch can be interpreted.
Our results shed light on how to interpret synthetic data fine-tuning performance and how to approach creating better data for learning real-world capabilities over long contexts.
arXiv Detail & Related papers (2024-10-29T17:55:00Z)
- Enhancing Temporal Understanding in LLMs for Semi-structured Tables [50.59009084277447]
We conduct a comprehensive analysis of temporal datasets to pinpoint the specific limitations of large language models (LLMs).
Our investigation leads to enhancements in TempTabQA, a dataset specifically designed for temporal question answering.
We introduce a novel approach, C.L.E.A.R., to strengthen LLM capabilities in this domain.
arXiv Detail & Related papers (2024-07-22T20:13:10Z)
- Event Temporal Relation Extraction based on Retrieval-Augmented on LLMs [21.888482292039956]
Event temporal relation (TempRel) is a primary subject of the event relation extraction task.
Traditional manually designed templates struggle to extract precise temporal knowledge.
This paper introduces a novel retrieval-augmented TempRel extraction approach.
arXiv Detail & Related papers (2024-03-22T15:16:10Z)
- Enhancing Systematic Decompositional Natural Language Inference Using Informal Logic [51.967603572656266]
We introduce a consistent and theoretically grounded approach to annotating decompositional entailment.
We find that our new dataset, RDTE, has a substantially higher internal consistency (+9%) than prior decompositional entailment datasets.
We also find that training an RDTE-oriented entailment classifier via knowledge distillation and employing it in an entailment tree reasoning engine significantly improves both accuracy and proof quality.
arXiv Detail & Related papers (2024-02-22T18:55:17Z)
- Chain of History: Learning and Forecasting with LLMs for Temporal Knowledge Graph Completion [24.545917737620197]
Temporal Knowledge Graph Completion (TKGC) is a complex task involving the prediction of missing event links at future timestamps.
This paper aims to provide a comprehensive perspective on harnessing the advantages of Large Language Models for reasoning in temporal knowledge graphs.
arXiv Detail & Related papers (2024-01-11T17:42:47Z)
- A Glitch in the Matrix? Locating and Detecting Language Model Grounding with Fakepedia [57.31074448586854]
Large language models (LLMs) have an impressive ability to draw on novel information supplied in their context.
Yet the mechanisms underlying this contextual grounding remain unknown.
We present a novel method to study grounding abilities using Fakepedia.
arXiv Detail & Related papers (2023-12-04T17:35:42Z)
- Towards Robust Temporal Reasoning of Large Language Models via a Multi-Hop QA Dataset and Pseudo-Instruction Tuning [73.51314109184197]
It is crucial for large language models (LLMs) to understand the concept of temporal knowledge.
We propose a complex temporal question-answering dataset Complex-TR that focuses on multi-answer and multi-hop temporal reasoning.
arXiv Detail & Related papers (2023-11-16T11:49:29Z)
- SWoTTeD: An Extension of Tensor Decomposition to Temporal Phenotyping [0.0]
We propose SWoTTeD (Sliding Window for Temporal Decomposition), a novel method to discover hidden temporal patterns.
We validate our proposal using both synthetic and real-world datasets, and we present an original use case using data from the Greater Paris University Hospital.
The results show that SWoTTeD reconstructs the data at least as accurately as recent state-of-the-art tensor decomposition models.
arXiv Detail & Related papers (2023-10-02T13:42:11Z)
- Unlocking Temporal Question Answering for Large Language Models with Tailor-Made Reasoning Logic [84.59255070520673]
Large language models (LLMs) face a challenge when engaging in temporal reasoning.
We propose TempLogic, a novel framework designed specifically for temporal question-answering tasks.
arXiv Detail & Related papers (2023-05-24T10:57:53Z)
- Temporal Relation Extraction with a Graph-Based Deep Biaffine Attention Model [0.0]
We propose a novel temporal information extraction model based on deep biaffine attention.
We experimentally demonstrate that our model achieves state-of-the-art performance in temporal relation extraction (see the biaffine-scoring sketch after this list).
arXiv Detail & Related papers (2022-01-16T19:40:08Z)
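To make the last entry concrete, here is a minimal sketch of a deep biaffine attention scorer of the kind such models build on, assuming PyTorch. The hidden size, relation inventory, and class name are illustrative assumptions; the paper's graph-based components are not shown.

```python
# Sketch: biaffine scoring of an (event head, event tail) pair for
# temporal relation classification. Dimensions are illustrative.
import torch
import torch.nn as nn

class BiaffineScorer(nn.Module):
    """Scores a pair as s = h^T U t + W[h;t] + b, one score per relation."""
    def __init__(self, hidden: int = 256, n_relations: int = 4):
        super().__init__()
        self.U = nn.Parameter(torch.randn(n_relations, hidden, hidden) * 0.01)
        self.W = nn.Linear(2 * hidden, n_relations)  # includes the bias b

    def forward(self, heads: torch.Tensor, tails: torch.Tensor) -> torch.Tensor:
        # heads, tails: (batch, hidden) encodings of the two event mentions
        bilinear = torch.einsum("bi,rij,bj->br", heads, self.U, tails)
        linear = self.W(torch.cat([heads, tails], dim=-1))
        return bilinear + linear  # (batch, n_relations) relation logits
```

The bilinear term scores head-tail interactions separately for each relation type, while the linear term captures evidence contributed by each mention on its own.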
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.