EvIT: Event-Oriented Instruction Tuning for Event Reasoning
- URL: http://arxiv.org/abs/2404.11978v1
- Date: Thu, 18 Apr 2024 08:14:53 GMT
- Title: EvIT: Event-Oriented Instruction Tuning for Event Reasoning
- Authors: Zhengwei Tao, Xiancai Chen, Zhi Jin, Xiaoying Bai, Haiyan Zhao, Yiwei Lou,
- Abstract summary: Event reasoning aims to infer events according to certain relations and predict future events.
Large language models (LLMs) have made significant advancements in event reasoning owing to their wealth of knowledge and reasoning capabilities.
However, smaller instruction-tuned models currently in use do not consistently handle these tasks well.
- Score: 18.012724531672813
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Events are specific occurrences, incidents, or happenings that take place under a particular background. Event reasoning aims to infer events according to certain relations and to predict future events. Cutting-edge event reasoning techniques play a crucial role in various natural language processing applications. Large language models (LLMs) have made significant advances in event reasoning owing to their wealth of knowledge and reasoning capabilities. However, smaller instruction-tuned models currently in use do not consistently handle these tasks well. This discrepancy arises from the absence of explicit modeling of events and their interconnections in the instruction data. Consequently, these models struggle to comprehend event structures and semantics and to bridge the gap between their interpretations and human understanding of events. Their limited grasp of event relations further constrains their ability to deduce and incorporate pertinent event knowledge. In this paper, we propose Event-Oriented Instruction Tuning (EvIT) to train our LLM. Specifically, we first propose a novel structure named the event quadruple, which captures the structure and semantics of events and is complete as an event representation. We then design event-relation learning based on these structures and encapsulate the learning in an instruction-tuning formulation to better stimulate the model's event reasoning capacity. We design a heuristic unsupervised method to mine event quadruples from a large-scale corpus. Finally, we fine-tune a Llama model with Event-Oriented Instruction Tuning. We conduct extensive experiments on event reasoning tasks across several datasets. Automatic and human evaluations demonstrate that EvIT achieves competitive performance on event reasoning.
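Below is a minimal illustrative sketch, not the authors' released code, of how an event quadruple and an event-relation instruction-tuning example might be represented. The four fields of the quadruple, the relation labels, and the prompt wording are assumptions for illustration only; the abstract states that the quadruple captures event structure and semantics but does not name its components.

```python
# Hypothetical sketch of EvIT-style data construction (field names assumed).
from dataclasses import dataclass


@dataclass
class EventQuadruple:
    subject: str      # assumed field: entity performing the event
    predicate: str    # assumed field: event trigger / action
    obj: str          # assumed field: entity affected by the event
    background: str   # assumed field: context the event occurs under


def build_instruction_example(head: EventQuadruple,
                              tail: EventQuadruple,
                              relation: str) -> dict:
    """Wrap one event-relation learning item as an instruction-tuning pair."""
    prompt = (
        "Given two events, identify the relation between them.\n"
        f"Event A: {head.subject} {head.predicate} {head.obj} "
        f"(background: {head.background})\n"
        f"Event B: {tail.subject} {tail.predicate} {tail.obj} "
        f"(background: {tail.background})\n"
        "Relation:"
    )
    return {"instruction": prompt, "output": relation}


# Toy usage with two events that might be mined from a corpus.
quake = EventQuadruple("an earthquake", "struck", "the coastal city",
                       "early morning, magnitude 7.1")
evac = EventQuadruple("residents", "evacuated", "the downtown area",
                      "hours after the quake")
print(build_instruction_example(quake, evac, "causal"))
```

Instruction-response records of this general form could then be used for standard supervised fine-tuning of a Llama model, as the abstract describes.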
Related papers
- OpenEP: Open-Ended Future Event Prediction [57.63525290892786]
We introduce OpenEP (an Open-Ended Future Event Prediction task), which generates flexible and diverse predictions aligned with real-world scenarios.
For question construction, we pose questions from seven perspectives, including location, time, event development, event outcome, event impact, event response, and others.
For outcome construction, we collect free-form text containing the outcomes as ground truth to provide semantically complete and detail-enriched outcomes.
arXiv Detail & Related papers (2024-08-13T02:35:54Z) - MAVEN-Fact: A Large-scale Event Factuality Detection Dataset [55.01875707021496]
We introduce MAVEN-Fact, a large-scale and high-quality EFD dataset based on the MAVEN dataset.
MAVEN-Fact includes factuality annotations of 112,276 events, making it the largest EFD dataset.
Experiments demonstrate that MAVEN-Fact is challenging for both conventional fine-tuned models and large language models (LLMs).
arXiv Detail & Related papers (2024-07-22T03:43:46Z) - Prompt-based Graph Model for Joint Liberal Event Extraction and Event Schema Induction [1.3154296174423619]
Events are essential components of speech and texts, describing the changes in the state of entities.
The event extraction task aims to identify and classify events and find their participants according to event schemas.
The researchers propose Liberal Event Extraction (LEE), which aims to extract events and discover event schemas simultaneously.
arXiv Detail & Related papers (2024-03-19T07:56:42Z) - Enhancing Event Causality Identification with Rationale and Structure-Aware Causal Question Answering [30.000134835133522]
Document-level Event Causality Identification (DECI) aims to identify causal relations between two events in a document.
Recent research tends to use pre-trained language models to generate the event causal relations.
We propose a multi-task learning framework to enhance event causality identification with rationale and structure-aware causal question answering.
arXiv Detail & Related papers (2024-03-17T07:41:58Z) - Improving Event Definition Following For Zero-Shot Event Detection [66.27883872707523]
Existing approaches on zero-shot event detection usually train models on datasets annotated with known event types.
We aim to improve zero-shot event detection by training models to better follow event definitions.
arXiv Detail & Related papers (2024-03-05T01:46:50Z) - Event Causality Extraction with Event Argument Correlations [13.403222002600558]
Event Causality Extraction aims to extract cause-effect event causality pairs from plain texts.
We propose a method with a dual grid tagging scheme to capture the intra- and inter-event argument correlations for ECE.
arXiv Detail & Related papers (2023-01-27T09:48:31Z) - EA$^2$E: Improving Consistency with Event Awareness for Document-Level Argument Extraction [52.43978926985928]
We introduce the Event-Aware Argument Extraction (EA$^2$E) model with augmented context for training and inference.
Experiment results on WIKIEVENTS and ACE2005 datasets demonstrate the effectiveness of EA$2$E.
arXiv Detail & Related papers (2022-05-30T04:33:51Z) - ClarET: Pre-training a Correlation-Aware Context-To-Event Transformer for Event-Centric Generation and Classification [74.6318379374801]
We propose to pre-train a general Correlation-aware context-to-Event Transformer (ClarET) for event-centric reasoning.
The proposed ClarET is applicable to a wide range of event-centric reasoning scenarios.
arXiv Detail & Related papers (2022-03-04T10:11:15Z) - CLIP-Event: Connecting Text and Images with Event Structures [123.31452120399827]
We propose a contrastive learning framework that pushes vision-language pretraining models to comprehend event structures.
We take advantage of text information extraction technologies to obtain event structural knowledge.
Experiments show that our zero-shot CLIP-Event outperforms the state-of-the-art supervised model in argument extraction.
arXiv Detail & Related papers (2022-01-13T17:03:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.