Counterfactual-Consistency Prompting for Relative Temporal Understanding in Large Language Models
- URL: http://arxiv.org/abs/2502.11425v1
- Date: Mon, 17 Feb 2025 04:37:07 GMT
- Title: Counterfactual-Consistency Prompting for Relative Temporal Understanding in Large Language Models
- Authors: Jongho Kim, Seung-won Hwang
- Abstract summary: We tackle the issue of temporal inconsistency in large language models (LLMs) by proposing a novel counterfactual prompting approach.
Our method generates counterfactual questions and enforces collective constraints, enhancing the model's consistency.
We evaluate our method on multiple datasets, demonstrating significant improvements in event ordering for explicit and implicit events and temporal commonsense understanding.
- Score: 24.586475741345616
- License:
- Abstract: Despite the advanced capabilities of large language models (LLMs), their temporal reasoning ability remains underdeveloped. Prior works have highlighted this limitation, particularly in maintaining temporal consistency when understanding events. For example, models often confuse mutually exclusive temporal relations like ``before'' and ``after'' between events and make inconsistent predictions. In this work, we tackle the issue of temporal inconsistency in LLMs by proposing a novel counterfactual prompting approach. Our method generates counterfactual questions and enforces collective constraints, enhancing the model's consistency. We evaluate our method on multiple datasets, demonstrating significant improvements in event ordering for explicit and implicit events and temporal commonsense understanding by effectively addressing temporal inconsistencies.
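The core idea can be illustrated with a minimal sketch (not the authors' implementation; `query_model` is a hypothetical stand-in for an LLM call, stubbed here with canned answers): pair each temporal question with its counterfactual and require that the answers respect the mutual exclusivity of "before" and "after".

```python
# Minimal sketch of counterfactual-consistency checking for temporal
# questions. `query_model` is a hypothetical stand-in for an LLM call;
# here it is stubbed with fixed answers purely for illustration.

def query_model(question: str) -> str:
    # Stub: a real implementation would prompt an LLM here.
    canned = {
        "Did event A happen before event B?": "yes",
        "Did event A happen after event B?": "yes",  # inconsistent!
    }
    return canned[question]

def is_consistent(event_a: str, event_b: str) -> bool:
    """'before' and 'after' are mutually exclusive, so the factual and
    counterfactual questions must not both be answered 'yes'."""
    factual = query_model(f"Did event {event_a} happen before event {event_b}?")
    counterfactual = query_model(f"Did event {event_a} happen after event {event_b}?")
    return not (factual == "yes" and counterfactual == "yes")

print(is_consistent("A", "B"))  # False: the stubbed answers violate exclusivity
```

In the paper's setting, such violations are not merely detected: the counterfactual answers are folded back in as collective constraints on the model's predictions.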
Related papers
- TempoGPT: Enhancing Temporal Reasoning via Quantizing Embedding [13.996105878417204]
We propose a multi-modal time series data construction approach and a multi-modal time series language model (TLM), TempoGPT.
We construct multi-modal data for complex reasoning tasks by analyzing the variable-system relationships within a white-box system.
Extensive experiments demonstrate that TempoGPT accurately perceives temporal information, logically infers conclusions, and achieves state-of-the-art in the constructed complex time series reasoning tasks.
arXiv Detail & Related papers (2025-01-13T13:47:05Z)
- Living in the Moment: Can Large Language Models Grasp Co-Temporal Reasoning? [70.19200858203388]
Temporal reasoning is fundamental for large language models to comprehend the world.
CoTempQA is a benchmark containing four co-temporal scenarios.
Our experiments reveal a significant gap between the performance of current LLMs and human-level reasoning.
arXiv Detail & Related papers (2024-06-13T12:56:21Z)
- From Link Prediction to Forecasting: Addressing Challenges in Batch-based Temporal Graph Learning [0.716879432974126]
We show that the suitability of common batch-oriented evaluation depends on the datasets' characteristics.
For continuous-time temporal graphs, fixed-size batches create time windows with different durations, resulting in an inconsistent dynamic link prediction task.
For discrete-time temporal graphs, the sequence of batches can additionally introduce temporal dependencies that are not present in the data.
arXiv Detail & Related papers (2024-06-07T12:45:12Z)
- On the Identification of Temporally Causal Representation with Instantaneous Dependence [50.14432597910128]
Temporally causal representation learning aims to identify the latent causal process from time series observations.
Most methods require the assumption that the latent causal processes do not have instantaneous relations.
We propose an IDentification framework for instantaneOus Latent dynamics.
arXiv Detail & Related papers (2024-05-24T08:08:05Z)
- MTGER: Multi-view Temporal Graph Enhanced Temporal Reasoning over Time-Involved Document [26.26604509399347]
MTGER is a novel framework for temporal reasoning over time-involved documents.
It explicitly models the temporal relationships among facts by multi-view temporal graphs.
We show that MTGER gives more consistent answers under question perturbations.
arXiv Detail & Related papers (2023-11-08T16:41:37Z)
- TIMELINE: Exhaustive Annotation of Temporal Relations Supporting the Automatic Ordering of Events in News Articles [4.314956204483074]
This paper presents a new annotation scheme that clearly defines the criteria based on which temporal relations should be annotated.
We also propose a method for annotating all temporal relations -- including long-distance ones -- which automates the process.
The result is a new dataset, the TIMELINE corpus, in which improved inter-annotator agreement was obtained.
arXiv Detail & Related papers (2023-10-26T22:23:38Z)
- Unlocking Temporal Question Answering for Large Language Models with Tailor-Made Reasoning Logic [84.59255070520673]
Large language models (LLMs) face a challenge when engaging in temporal reasoning.
We propose TempLogic, a novel framework designed specifically for temporal question-answering tasks.
arXiv Detail & Related papers (2023-05-24T10:57:53Z)
- Generic Temporal Reasoning with Differential Analysis and Explanation [61.96034987217583]
We introduce a novel task named TODAY that bridges the gap with temporal differential analysis.
TODAY evaluates whether systems can correctly understand the effect of incremental changes.
We show that TODAY's supervision style and explanation annotations can be used in joint learning.
arXiv Detail & Related papers (2022-12-20T17:40:03Z)
- Temporal Reasoning on Implicit Events from Distant Supervision [91.20159064951487]
We propose a novel temporal reasoning dataset that evaluates the degree to which systems understand implicit events.
We find that state-of-the-art models struggle when predicting temporal relationships between implicit and explicit events.
We propose a neuro-symbolic temporal reasoning model, SYMTIME, which exploits distant supervision signals from large-scale text and uses temporal rules to infer end times.
arXiv Detail & Related papers (2020-10-24T03:12:27Z)
- Joint Constrained Learning for Event-Event Relation Extraction [94.3499255880101]
We propose a joint constrained learning framework for modeling event-event relations.
Specifically, the framework enforces logical constraints within and across multiple temporal and subevent relations.
We show that our joint constrained learning approach effectively compensates for the lack of jointly labeled data.
arXiv Detail & Related papers (2020-10-13T22:45:28Z)
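The kind of logical constraints such joint frameworks enforce over pairwise temporal predictions can be sketched as follows (names and the relation inventory are illustrative assumptions, not the paper's API): check that predicted relations respect symmetry (before/after are inverses) and transitivity.

```python
# Illustrative sketch of logical constraints over pairwise temporal
# predictions: symmetry (rel(a, b) must invert rel(b, a)) and
# transitivity (before(a, b) and before(b, c) imply before(a, c)).
# The relation names and helper are hypothetical, for illustration only.

INVERSE = {"before": "after", "after": "before", "simultaneous": "simultaneous"}

def violations(relations: dict) -> list:
    """Return human-readable descriptions of constraint violations found
    in a table mapping (event_a, event_b) pairs to relation labels."""
    found = []
    for (a, b), rel in relations.items():
        # Symmetry: if the reverse pair is predicted, it must be the inverse.
        if relations.get((b, a)) not in (None, INVERSE[rel]):
            found.append(f"symmetry violated for ({a}, {b})")
        # Transitivity: before(a, b) and before(b, c) forbid after(a, c).
        if rel == "before":
            for (b2, c), rel2 in relations.items():
                if b2 == b and rel2 == "before" and relations.get((a, c)) == "after":
                    found.append(f"transitivity violated for ({a}, {b}, {c})")
    return found

preds = {("A", "B"): "before", ("B", "C"): "before", ("A", "C"): "after"}
print(violations(preds))  # reports the transitivity violation
```

In joint constrained learning, constraints like these are typically relaxed into differentiable penalties on the model's scores rather than applied as hard post-hoc filters.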
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.