Dynamically Updating Event Representations for Temporal Relation
Classification with Multi-category Learning
- URL: http://arxiv.org/abs/2310.20236v1
- Date: Tue, 31 Oct 2023 07:41:24 GMT
- Title: Dynamically Updating Event Representations for Temporal Relation
Classification with Multi-category Learning
- Authors: Fei Cheng, Masayuki Asahara, Ichiro Kobayashi, Sadao Kurohashi
- Abstract summary: Temporal relation classification is a pair-wise task for identifying the relation of a temporal link (TLINK) between two mentions.
This paper presents an event-centric model that maintains dynamic event representations across multiple TLINK categories.
Our proposal outperforms state-of-the-art models and two transfer learning baselines on both the English and Japanese data.
- Score: 35.27714529976667
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Temporal relation classification is a pair-wise task for identifying the
relation of a temporal link (TLINK) between two mentions, i.e. event, time, and
document creation time (DCT). This pair-wise formulation leads to two crucial
limitations: 1) two TLINKs involving a common mention do not share information;
2) existing models with independent classifiers for each TLINK category (E2E,
E2T, and E2D) cannot exploit the whole data. This paper presents an
event-centric model that maintains dynamic event representations across
multiple TLINKs. Our model handles the three TLINK categories with multi-task
learning to leverage the full data. The experimental results show that our
proposal outperforms state-of-the-art models and two transfer learning
baselines on both the English and Japanese data.
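The dynamic-update idea in the abstract can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the blending update rule, the toy scoring head, and the `alpha` parameter are hypothetical stand-ins for the paper's learned components.

```python
# Minimal sketch of dynamically updated event representations shared
# across TLINK categories (E2E, E2T, E2D). After classifying a TLINK,
# the event vector is blended with the pair context, so later TLINKs
# involving the same event see the updated representation.

store = {}  # event id -> representation (list of floats)

def get_repr(event_id, dim=4):
    # Initialize a fresh representation the first time an event is seen.
    return store.setdefault(event_id, [0.0] * dim)

def classify_tlink(src_id, tgt_repr, category, alpha=0.5):
    """Score a TLINK of one category, then update the source event."""
    src = get_repr(src_id)
    pair = [s + t for s, t in zip(src, tgt_repr)]  # pair features
    score = sum(pair)                              # toy classifier head
    # Dynamic update: blend the event vector with the pair context.
    store[src_id] = [(1 - alpha) * s + alpha * p for s, p in zip(src, pair)]
    return category, score

# One event participates in an E2D and then an E2T link; the second
# call sees the representation left behind by the first.
classify_tlink("e1", [1.0, 0.0, 0.0, 0.0], "E2D")
_, score = classify_tlink("e1", [0.0, 1.0, 0.0, 0.0], "E2T")
```

The point of the sketch is only the information flow: because all three TLINK categories read from and write to the same `store`, a single multi-task model can use every annotated link, which is the limitation the abstract targets.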
Related papers
- Event GDR: Event-Centric Generative Document Retrieval [37.53593254200252]
We propose Event GDR, an event-centric generative document retrieval model.
We employ events and relations to model the document to guarantee the comprehensiveness and inner-content correlation.
For identifier construction, we map the events to well-defined event taxonomy to construct the identifiers with explicit semantic structure.
arXiv Detail & Related papers (2024-05-11T02:55:11Z)
- More than Classification: A Unified Framework for Event Temporal Relation Extraction [61.44799147458621]
Event temporal relation extraction (ETRE) is usually formulated as a multi-label classification task.
We observe that all relations can be interpreted using the start and end time points of events.
We propose a unified event temporal relation extraction framework, which transforms temporal relations into logical expressions of time points.
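The time-point formulation above can be illustrated with a toy mapping from event intervals to relation labels. This is a hedged sketch: the framework's actual logical expressions and label set differ, and the TimeML-style names here are an illustrative subset.

```python
# Toy illustration: interpret temporal relations as constraints over
# the start/end time points of two events. Each relation label below
# corresponds to a logical expression over those four time points.

def relation_from_points(e1, e2):
    s1, t1 = e1  # start/end of event 1
    s2, t2 = e2  # start/end of event 2
    if t1 < s2:
        return "BEFORE"        # end(e1) < start(e2)
    if t2 < s1:
        return "AFTER"         # end(e2) < start(e1)
    if s1 == s2 and t1 == t2:
        return "SIMULTANEOUS"  # identical time points
    if s2 <= s1 and t1 <= t2:
        return "IS_INCLUDED"   # e1 lies within e2
    if s1 <= s2 and t2 <= t1:
        return "INCLUDES"      # e2 lies within e1
    return "OVERLAP"

relation_from_points((1, 2), (3, 4))  # "BEFORE"
```

Casting labels as time-point constraints is what lets such a framework treat otherwise separate relation labels uniformly instead of as independent classes.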
arXiv Detail & Related papers (2023-05-28T02:09:08Z)
- Unified Visual Relationship Detection with Vision and Language Models [89.77838890788638]
This work focuses on training a single visual relationship detector predicting over the union of label spaces from multiple datasets.
We propose UniVRD, a novel bottom-up method for Unified Visual Relationship Detection by leveraging vision and language models.
Empirical results on both human-object interaction detection and scene-graph generation demonstrate the competitive performance of our model.
arXiv Detail & Related papers (2023-03-16T00:06:28Z)
- Co-Occurrence Matters: Learning Action Relation for Temporal Action Localization [41.44022912961265]
We propose a novel Co-Occurrence Relation Module (CORM) that explicitly models the co-occurrence relationship between actions.
Besides the visual information, it further utilizes the semantic embeddings of class labels to model the co-occurrence relationship.
Our method achieves high multi-label relationship modeling capacity.
arXiv Detail & Related papers (2023-03-15T09:07:04Z)
- MAVEN-ERE: A Unified Large-scale Dataset for Event Coreference, Temporal, Causal, and Subevent Relation Extraction [78.61546292830081]
We construct a large-scale human-annotated ERE dataset MAVEN-ERE with improved annotation schemes.
It contains 103,193 event coreference chains, 1,216,217 temporal relations, 57,992 causal relations, and 15,841 subevent relations.
Experiments show that ERE on MAVEN-ERE is quite challenging, and considering relation interactions with joint learning can improve performances.
arXiv Detail & Related papers (2022-11-14T13:34:49Z)
- RAAT: Relation-Augmented Attention Transformer for Relation Modeling in Document-Level Event Extraction [16.87868728956481]
We propose a new DEE framework that models relation dependencies, called Relation-augmented Document-level Event Extraction (ReDEE).
To further leverage relation information, we introduce a separate event relation prediction task and adopt a multi-task learning method to explicitly enhance event extraction performance.
arXiv Detail & Related papers (2022-06-07T15:11:42Z)
- Towards Similarity-Aware Time-Series Classification [51.2400839966489]
We study time-series classification (TSC), a fundamental task of time-series data mining.
We propose Similarity-Aware Time-Series Classification (SimTSC), a framework that models similarity information with graph neural networks (GNNs).
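The graph-construction step behind this idea can be sketched as connecting each series to its nearest neighbors, so a GNN can propagate label information along similarity edges. A hedged sketch: SimTSC builds its graph from DTW distances, while plain Euclidean distance stands in here, and `knn_graph` is a hypothetical helper name.

```python
# Sketch of a similarity graph over time series: link each series to
# its k nearest neighbors by distance, then hand the graph to a GNN.
# Euclidean distance is a stand-in for the DTW distance used in SimTSC.

def knn_graph(series, k=1):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    edges = {}
    for i, s in enumerate(series):
        others = [(dist(s, t), j) for j, t in enumerate(series) if j != i]
        others.sort()  # nearest first
        edges[i] = [j for _, j in others[:k]]
    return edges

# Two near-identical series and one distant outlier.
series = [[0.0, 0.1, 0.2], [0.0, 0.1, 0.3], [5.0, 5.1, 5.2]]
graph = knn_graph(series, k=1)
```

In the full framework the resulting edges carry learned messages between labeled and unlabeled series, which is where the similarity information enters classification.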
arXiv Detail & Related papers (2022-01-05T02:14:57Z)
- Learning Dual Dynamic Representations on Time-Sliced User-Item Interaction Graphs for Sequential Recommendation [62.30552176649873]
We devise a novel Dynamic Representation Learning model for Sequential Recommendation (DRL-SRe).
To better model the user-item interactions for characterizing the dynamics from both sides, the proposed model builds a global user-item interaction graph for each time slice.
To enable the model to capture fine-grained temporal information, we propose an auxiliary temporal prediction task over consecutive time slices.
arXiv Detail & Related papers (2021-09-24T07:44:27Z)
- Predicting Event Time by Classifying Sub-Level Temporal Relations Induced from a Unified Representation of Time Anchors [10.67457147373144]
We propose an effective method to decompose complex temporal relations into sub-level relations.
Our approach outperforms the state-of-the-art decision tree model.
arXiv Detail & Related papers (2020-08-14T16:30:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.