Only One Relation Possible? Modeling the Ambiguity in Event Temporal Relation Extraction
- URL: http://arxiv.org/abs/2408.07353v1
- Date: Wed, 14 Aug 2024 07:57:51 GMT
- Title: Only One Relation Possible? Modeling the Ambiguity in Event Temporal Relation Extraction
- Authors: Yutong Hu, Quzhe Huang, Yansong Feng
- Abstract summary: Event Temporal Relation Extraction (ETRE) aims to identify the temporal relationship between two events.
We propose a multi-label classification solution for ETRE (METRE) to infer the possibility of each temporal relation independently.
Our method can effectively utilize the \textit{Vague} instances to improve the recognition of specific temporal relations.
- Score: 30.319025749352246
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Event Temporal Relation Extraction (ETRE) aims to identify the temporal relationship between two events, which plays an important role in natural language understanding. Most previous works follow a single-label classification style, classifying an event pair into either a specific temporal relation (e.g., \textit{Before}, \textit{After}), or a special label \textit{Vague} when there may be multiple possible temporal relations between the pair. In our work, instead of directly making predictions on \textit{Vague}, we propose a multi-label classification solution for ETRE (METRE) to infer the possibility of each temporal relation independently, where we treat \textit{Vague} as the cases when there is more than one possible relation between two events. We design a speculation mechanism to explore the possible relations hidden behind \textit{Vague}, which enables the latent information to be used efficiently. Experiments on TB-Dense, MATRES and UDS-T show that our method can effectively utilize the \textit{Vague} instances to improve the recognition of specific temporal relations and outperforms most state-of-the-art methods.
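As a rough illustration of the multi-label formulation described above, the sketch below decodes independent per-relation probabilities into a single label, reading Vague as "more than one relation is possible". It is not the authors' implementation; the relation inventory, threshold, and decision rule are assumptions.

```python
# Minimal sketch (not the authors' implementation) of the multi-label view of
# ETRE: each temporal relation gets an independent probability, and "Vague" is
# read off as "more than one relation is possible" instead of being predicted
# as its own class. Relation names, the threshold, and the toy scores are
# illustrative assumptions.
from typing import Dict, List

RELATIONS: List[str] = ["Before", "After", "Simultaneous", "Includes", "Is_Included"]

def decode_relations(probs: Dict[str, float], threshold: float = 0.5) -> str:
    """Map independent per-relation probabilities to a single ETRE label."""
    possible = [r for r in RELATIONS if probs.get(r, 0.0) >= threshold]
    if len(possible) == 1:
        return possible[0]      # exactly one plausible relation
    if len(possible) > 1:
        return "Vague"          # several relations remain possible
    # Nothing clears the threshold: fall back to the highest-scoring relation.
    return max(RELATIONS, key=lambda r: probs.get(r, 0.0))

if __name__ == "__main__":
    # Toy example: both Before and After look plausible, so the pair is Vague.
    print(decode_relations({"Before": 0.71, "After": 0.64, "Simultaneous": 0.08}))
```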
Related papers
- More than Classification: A Unified Framework for Event Temporal Relation Extraction [61.44799147458621]
Event temporal relation extraction (ETRE) is usually formulated as a multi-label classification task.
We observe that all relations can be interpreted using the start and end time points of events.
We propose a unified event temporal relation extraction framework which transforms temporal relations into logical expressions of time points; a small illustrative sketch of this time-point view appears after this list.
arXiv Detail & Related papers (2023-05-28T02:09:08Z)
- MAVEN-ERE: A Unified Large-scale Dataset for Event Coreference, Temporal, Causal, and Subevent Relation Extraction [78.61546292830081]
We construct a large-scale human-annotated ERE dataset MAVEN-ERE with improved annotation schemes.
It contains 103,193 event coreference chains, 1,216,217 temporal relations, 57,992 causal relations, and 15,841 subevent relations.
Experiments show that ERE on MAVEN-ERE is quite challenging, and considering relation interactions with joint learning can improve performance.
arXiv Detail & Related papers (2022-11-14T13:34:49Z)
- Unifying Event Detection and Captioning as Sequence Generation via Pre-Training [53.613265415703815]
We propose a unified pre-training and fine-tuning framework to enhance the inter-task association between event detection and captioning.
Our model outperforms the state-of-the-art methods, and can be further boosted when pre-trained on extra large-scale video-text data.
arXiv Detail & Related papers (2022-07-18T14:18:13Z)
- RAAT: Relation-Augmented Attention Transformer for Relation Modeling in Document-Level Event Extraction [16.87868728956481]
We propose a new document-level event extraction (DEE) framework that can model relation dependencies, called Relation-augmented Document-level Event Extraction (ReDEE).
To further leverage relation information, we introduce a separate event relation prediction task and adopt a multi-task learning method to explicitly enhance event extraction performance.
arXiv Detail & Related papers (2022-06-07T15:11:42Z)
- Dynamic Relation Discovery and Utilization in Multi-Entity Time Series Forecasting [92.32415130188046]
In many real-world scenarios, crucial yet implicit relations may exist between entities.
We propose an attentional multi-graph neural network with automatic graph learning (A2GNN) to discover and utilize such relations.
arXiv Detail & Related papers (2022-02-18T11:37:04Z)
- SERC: Syntactic and Semantic Sequence based Event Relation Classification [2.922007656878633]
We propose a model that incorporates both temporal and causal features to perform causal relation classification.
We use the syntactic structure of the text to identify temporal and causal relations between two events.
We propose an LSTM-based model for temporal and causal relation classification that captures the interrelations between the three encoded features.
arXiv Detail & Related papers (2021-11-03T14:58:52Z)
- Extracting Temporal Event Relation with Syntactic-Guided Temporal Graph Transformer [17.850316385809617]
We propose a new Temporal Graph Transformer network to explicitly find the connection between two events from a syntactic graph constructed from one or two continuous sentences.
Experiments on MATRES and TB-Dense datasets show that our approach significantly outperforms previous state-of-the-art methods on both end-to-end temporal relation extraction and temporal relation classification.
arXiv Detail & Related papers (2021-04-19T19:00:45Z)
- Learning Relation Prototype from Unlabeled Texts for Long-tail Relation Extraction [84.64435075778988]
We propose a general approach to learn relation prototypes from unlabeled texts.
We learn relation prototypes as an implicit factor between entities.
We conduct experiments on two publicly available datasets: New York Times and Google Distant Supervision.
arXiv Detail & Related papers (2020-11-27T06:21:12Z)
- Learning to Decouple Relations: Few-Shot Relation Classification with Entity-Guided Attention and Confusion-Aware Training [49.9995628166064]
We propose CTEG, a model equipped with two mechanisms to learn to decouple easily-confused relations.
On the one hand, an Entity-Guided Attention (EGA) mechanism is introduced to guide the attention to filter out information causing confusion.
On the other hand, a Confusion-Aware Training (CAT) method is proposed to explicitly learn to distinguish relations.
arXiv Detail & Related papers (2020-10-21T11:07:53Z)
- Predicting Event Time by Classifying Sub-Level Temporal Relations Induced from a Unified Representation of Time Anchors [10.67457147373144]
We propose an effective method to decompose complex temporal relations into sub-level relations.
Our approach outperforms the state-of-the-art decision tree model.
arXiv Detail & Related papers (2020-08-14T16:30:07Z)
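To make the time-point idea from the "More than Classification" entry concrete, here is a small illustrative sketch that expresses interval relations as comparisons over event start and end points. The relation definitions follow common Allen-style conventions and are assumptions, not that paper's actual logical expressions.

```python
# Illustrative sketch of interpreting temporal relations via start/end time
# points, in the spirit of the "More than Classification" entry above.
# The relation definitions are common Allen-style assumptions, not the
# paper's actual code.
from dataclasses import dataclass

@dataclass
class Event:
    start: float
    end: float

def before(e1: Event, e2: Event) -> bool:
    # e1 ends before e2 starts
    return e1.end < e2.start

def after(e1: Event, e2: Event) -> bool:
    # e2 ends before e1 starts
    return e2.end < e1.start

def includes(e1: Event, e2: Event) -> bool:
    # e1's interval strictly contains e2's interval
    return e1.start < e2.start and e2.end < e1.end

def simultaneous(e1: Event, e2: Event) -> bool:
    # the two intervals coincide
    return e1.start == e2.start and e1.end == e2.end

if __name__ == "__main__":
    e1, e2 = Event(0.0, 1.0), Event(2.0, 3.0)
    print(before(e1, e2), after(e1, e2), includes(e1, e2))  # True False False
```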