Back to the Future: Towards Explainable Temporal Reasoning with Large
Language Models
- URL: http://arxiv.org/abs/2310.01074v2
- Date: Sun, 8 Oct 2023 12:45:18 GMT
- Title: Back to the Future: Towards Explainable Temporal Reasoning with Large
Language Models
- Authors: Chenhan Yuan, Qianqian Xie, Jimin Huang and Sophia Ananiadou
- Abstract summary: We introduce the first task of explainable temporal reasoning: predicting an event's occurrence at a future timestamp based on context.
We show that our method achieves state-of-the-art performance in temporal prediction and explanation.
- Score: 33.8108950744839
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Temporal reasoning is a crucial NLP task, providing a nuanced understanding
of time-sensitive contexts within textual data. Although recent advancements in
LLMs have demonstrated their potential in temporal reasoning, the predominant
focus has been on tasks such as temporal expression and temporal relation
extraction. These tasks primarily target the extraction of direct, past temporal
cues and involve only simple reasoning. A significant gap remains for complex
reasoning tasks such as event forecasting, which require multi-step temporal
reasoning over events and prediction at future timestamps. Another notable
limitation of existing methods is their inability to illustrate their reasoning
process, which hinders explainability. In this paper, we introduce the first task
of explainable temporal reasoning: predicting an event's occurrence at a future
timestamp based on context, which requires multi-step reasoning over multiple
events, and then providing a clear explanation for the prediction. Our task offers
a comprehensive evaluation of LLMs' complex temporal reasoning ability, future
event prediction ability, and explainability, a critical attribute for AI
applications. To support this task, we present ExpTime, the first multi-source
instruction-tuning dataset for explainable temporal reasoning, with 26k instances
derived from temporal knowledge graph datasets and their temporal reasoning paths,
using a novel knowledge-graph-instructed-generation strategy. Based on this
dataset, we propose TimeLlaMA, the first open-source LLM series for explainable
temporal reasoning, built on the LlaMA2 foundation model with instruction-following
ability. We compare our method against a variety of LLMs and show that it achieves
state-of-the-art performance in temporal prediction and explanation.
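As a minimal sketch of how an ExpTime-style training instance might look, the Python snippet below verbalizes a temporal knowledge graph reasoning path into an (instruction, input, output) triple. The field names, sentence templates, and example facts are illustrative assumptions for exposition only, not the paper's actual knowledge-graph-instructed-generation pipeline.

    # Hypothetical sketch: convert a temporal-KG reasoning path into an
    # instruction-tuning instance. All field names, templates, and example
    # facts are assumptions, not the authors' generation pipeline.
    from dataclasses import dataclass
    from typing import List


    @dataclass
    class Quad:
        subject: str
        relation: str
        obj: str
        timestamp: str  # e.g. "2023-02-02"


    def verbalize(q: Quad) -> str:
        """Render one temporal-KG quadruple as a plain English sentence."""
        return f"On {q.timestamp}, {q.subject} {q.relation} {q.obj}."


    def build_instance(context: List[Quad], query: Quad, answer: str,
                       path: List[Quad]) -> dict:
        """Assemble one (instruction, input, output) training example.

        The model is asked whether the query event occurs at the future
        timestamp; the target output gives the prediction followed by an
        explanation verbalized from the supporting reasoning path.
        """
        instruction = ("Given the events below, predict whether the query event "
                       "will occur at the stated future time and explain why.")
        input_text = ("Context:\n" + "\n".join(verbalize(q) for q in context) +
                      f"\nQuery: Will {query.subject} {query.relation} "
                      f"{query.obj} on {query.timestamp}?")
        output_text = (f"Prediction: {answer}. Explanation: " +
                       " ".join(verbalize(q) for q in path) +
                       " These earlier events make the queried future event " +
                       ("likely." if answer.lower() == "yes" else "unlikely."))
        return {"instruction": instruction, "input": input_text,
                "output": output_text}


    if __name__ == "__main__":
        context = [
            Quad("Country_A", "imposed sanctions on", "Country_B", "2023-01-10"),
            Quad("Country_B", "recalled its ambassador from", "Country_A", "2023-02-02"),
        ]
        query = Quad("Country_B", "suspended trade talks with", "Country_A", "2023-03-15")
        print(build_instance(context, query, "Yes", context))

Instances of this shape could then be used for ordinary instruction tuning of a LLaMA2-style model.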
Related papers
- XForecast: Evaluating Natural Language Explanations for Time Series Forecasting [72.57427992446698]
Time series forecasting aids decision-making, especially for stakeholders who rely on accurate predictions.
Traditional explainable AI (XAI) methods, which underline feature or temporal importance, often require expert knowledge.
Evaluating forecast natural language explanations (NLEs) is difficult due to the complex causal relationships in time series data.
arXiv Detail & Related papers (2024-10-18T05:16:39Z)
- Enhancing Temporal Sensitivity and Reasoning for Time-Sensitive Question Answering [23.98067169669452]
Time-Sensitive Question Answering (TSQA) demands the effective utilization of specific temporal contexts.
We propose a novel framework that enhances temporal awareness and reasoning through Temporal Information-Aware Embedding and Granular Contrastive Reinforcement Learning.
arXiv Detail & Related papers (2024-09-25T13:13:21Z)
- Living in the Moment: Can Large Language Models Grasp Co-Temporal Reasoning? [70.19200858203388]
Temporal reasoning is fundamental for large language models to comprehend the world.
CoTempQA is a benchmark containing four co-temporal scenarios.
Our experiments reveal a significant gap between the performance of current LLMs and human-level reasoning.
arXiv Detail & Related papers (2024-06-13T12:56:21Z)
- Temporal Knowledge Question Answering via Abstract Reasoning Induction [32.08799860090592]
This study addresses the challenge of enhancing temporal knowledge reasoning in Large Language Models (LLMs).
We propose the Abstract Reasoning Induction (ARI) framework, which divides temporal reasoning into two distinct phases: knowledge-agnostic and knowledge-based.
Our approach achieves remarkable improvements, with relative gains of 29.7% and 9.27% on two temporal QA datasets.
arXiv Detail & Related papers (2023-11-15T17:46:39Z)
- DetermLR: Augmenting LLM-based Logical Reasoning from Indeterminacy to Determinacy [76.58614128865652]
We propose DetermLR, a novel perspective that rethinks the reasoning process as an evolution from indeterminacy to determinacy.
First, we categorize known conditions into two types: determinate and indeterminate premises. This provides an overall direction for the reasoning process and guides LLMs in converting indeterminate data into progressively determinate insights.
We automate the storage and extraction of available premises and reasoning paths with reasoning memory, preserving historical reasoning details for subsequent reasoning steps.
arXiv Detail & Related papers (2023-10-28T10:05:51Z)
- An Overview Of Temporal Commonsense Reasoning and Acquisition [20.108317515225504]
Temporal commonsense reasoning refers to the ability to understand the typical temporal context of phrases, actions, and events.
Recent research on the performance of large language models suggests that they often take shortcuts in their reasoning and fall prey to simple linguistic traps.
arXiv Detail & Related papers (2023-07-28T01:30:15Z)
- Unlocking Temporal Question Answering for Large Language Models with Tailor-Made Reasoning Logic [84.59255070520673]
Large language models (LLMs) face a challenge when engaging in temporal reasoning.
We propose TempLogic, a novel framework designed specifically for temporal question-answering tasks.
arXiv Detail & Related papers (2023-05-24T10:57:53Z)
- Generic Temporal Reasoning with Differential Analysis and Explanation [61.96034987217583]
We introduce a novel task named TODAY that bridges the gap with temporal differential analysis.
TODAY evaluates whether systems can correctly understand the effect of incremental changes.
We show that TODAY's supervision style and explanation annotations can be used in joint learning.
arXiv Detail & Related papers (2022-12-20T17:40:03Z)
- Temporal Reasoning on Implicit Events from Distant Supervision [91.20159064951487]
We propose a novel temporal reasoning dataset that evaluates the degree to which systems understand implicit events.
We find that state-of-the-art models struggle when predicting temporal relationships between implicit and explicit events.
We propose a neuro-symbolic temporal reasoning model, SYMTIME, which exploits distant supervision signals from large-scale text and uses temporal rules to infer end times.
arXiv Detail & Related papers (2020-10-24T03:12:27Z)