Temporal Reasoning on Implicit Events from Distant Supervision
- URL: http://arxiv.org/abs/2010.12753v2
- Date: Fri, 7 May 2021 21:07:45 GMT
- Title: Temporal Reasoning on Implicit Events from Distant Supervision
- Authors: Ben Zhou and Kyle Richardson and Qiang Ning and Tushar Khot and Ashish
Sabharwal and Dan Roth
- Abstract summary: We propose a novel temporal reasoning dataset that evaluates the degree to which systems understand implicit events.
We find that state-of-the-art models struggle when predicting temporal relationships between implicit and explicit events.
We propose a neuro-symbolic temporal reasoning model, SYMTIME, which exploits distant supervision signals from large-scale text and uses temporal rules to infer end times.
- Score: 91.20159064951487
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose TRACIE, a novel temporal reasoning dataset that evaluates the
degree to which systems understand implicit events -- events that are not
mentioned explicitly in natural language text but can be inferred from it. This
introduces a new challenge in temporal reasoning research, where prior work has
focused on explicitly mentioned events. Human readers can infer implicit events
via commonsense reasoning, resulting in a more comprehensive understanding of
the situation and, consequently, better reasoning about time. We find, however,
that state-of-the-art models struggle when predicting temporal relationships
between implicit and explicit events. To address this, we propose a
neuro-symbolic temporal reasoning model, SYMTIME, which exploits distant
supervision signals from large-scale text and uses temporal rules to combine
start times and durations to infer end times. SYMTIME outperforms strong
baseline systems on TRACIE by 5%, and by 11% in a zero prior knowledge training
setting. Our approach also generalizes to other temporal reasoning tasks, as
evidenced by a gain of 1%-9% on MATRES, an explicit event benchmark.
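The core temporal rule the abstract describes, inferring an event's end time by combining its start time and duration, can be sketched as follows. This is an illustrative approximation, not the paper's implementation; the event names, time units, and relation labels here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Event:
    name: str
    start: float      # estimated start time (hours from an arbitrary reference)
    duration: float   # estimated duration, in the same unit

    @property
    def end(self) -> float:
        # The temporal rule from the abstract: end = start + duration.
        return self.start + self.duration

def relation(a: Event, b: Event) -> str:
    """Coarse temporal relation between two events based on inferred end times."""
    if a.end <= b.start:
        return "before"
    if b.end <= a.start:
        return "after"
    return "overlaps"

# An implicit event (inferable but unstated) compared against an explicit one:
cooking = Event("cook dinner", start=18.0, duration=1.0)   # implicit
eating = Event("eat dinner", start=19.5, duration=0.5)     # explicit
print(relation(cooking, eating))  # "before": 18.0 + 1.0 <= 19.5
```

The point of the rule is that even when an end time is never stated in text, a system that has learned start-time and duration distributions from distant supervision can still derive it and reason about before/after relations.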
Related papers
- Back to the Future: Towards Explainable Temporal Reasoning with Large
Language Models [33.8108950744839]
We introduce the first task of explainable temporal reasoning, to predict an event's occurrence at a future timestamp based on context.
We show that our method achieves the state-of-the-art performance of temporal prediction and explanation.
arXiv Detail & Related papers (2023-10-02T10:35:23Z)
- Exploring the Limits of Historical Information for Temporal Knowledge
Graph Extrapolation [59.417443739208146]
We propose a new event forecasting model based on a novel training framework of historical contrastive learning.
CENET learns both historical and non-historical dependencies to distinguish the most likely entities.
We evaluate our proposed model on five benchmark graphs.
arXiv Detail & Related papers (2023-08-29T03:26:38Z)
- Unlocking Temporal Question Answering for Large Language Models with Tailor-Made Reasoning Logic [84.59255070520673]
Large language models (LLMs) face a challenge when engaging in temporal reasoning.
We propose TempLogic, a novel framework designed specifically for temporal question-answering tasks.
arXiv Detail & Related papers (2023-05-24T10:57:53Z)
- Getting Sick After Seeing a Doctor? Diagnosing and Mitigating Knowledge Conflicts in Event Temporal Reasoning [87.92209048521153]
Event temporal reasoning aims at identifying the temporal relations between two or more events from narratives.
Knowledge conflicts arise when there is a mismatch between the actual temporal relations of events in the context and the prior knowledge or biases learned by the model.
arXiv Detail & Related papers (2023-05-24T10:04:06Z)
- Generic Temporal Reasoning with Differential Analysis and Explanation [61.96034987217583]
We introduce a novel task named TODAY that bridges the gap with temporal differential analysis.
TODAY evaluates whether systems can correctly understand the effect of incremental changes.
We show that TODAY's supervision style and explanation annotations can be used in joint learning.
arXiv Detail & Related papers (2022-12-20T17:40:03Z)
- Logic and Commonsense-Guided Temporal Knowledge Graph Completion [9.868206060374991]
A temporal knowledge graph (TKG) stores events derived from time-stamped data.
We propose a Logic and Commonsense-Guided Embedding model (LCGE) to jointly learn time-sensitive representations that capture the timeliness and causality of events.
arXiv Detail & Related papers (2022-11-30T10:06:55Z)
- Temporal Common Sense Acquisition with Minimal Supervision [77.8308414884754]
This work proposes a novel sequence modeling approach that exploits explicit and implicit mentions of temporal common sense.
Our method is shown to give quality predictions of various dimensions of temporal common sense.
It also produces representations of events for relevant tasks such as duration comparison, parent-child relations, event coreference and temporal QA.
arXiv Detail & Related papers (2020-05-08T22:20:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.