End-to-End Neural Event Coreference Resolution
- URL: http://arxiv.org/abs/2009.08153v1
- Date: Thu, 17 Sep 2020 09:00:59 GMT
- Title: End-to-End Neural Event Coreference Resolution
- Authors: Yaojie Lu and Hongyu Lin and Jialong Tang and Xianpei Han and Le Sun
- Abstract summary: We propose an End-to-End Event Coreference approach -- E3C neural network.
Our method achieves new state-of-the-art performance on two standard datasets.
- Score: 41.377231614857614
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional event coreference systems usually rely on pipelined frameworks and
hand-crafted features, which often suffer from error propagation and generalize
poorly. In this paper, we propose an End-to-End Event Coreference approach --
the E3C neural network -- which jointly models the event detection and event
coreference resolution tasks, and learns to extract features from raw text
automatically. Furthermore, because event mentions are highly diversified and
event coreference is intricately governed by long-distance, semantically
dependent decisions, we further propose a type-guided event coreference
mechanism in our E3C neural network. Experiments show that our method achieves
new state-of-the-art performance on two standard datasets.
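The abstract's type-guided idea can be illustrated with a minimal sketch. This is a hypothetical illustration, not the authors' E3C implementation: pairwise antecedent scores between event mentions are gated by predicted event types, so only mentions of the same type can be linked into a chain. The `resolve` function, its threshold, and the cosine scorer are all assumptions for illustration.

```python
# Hypothetical sketch of type-guided event coreference (not the E3C model):
# each mention carries an embedding and a predicted event type; a mention may
# only link to an earlier mention of the same type (the "type guide"), and
# linking uses a simple similarity threshold.

def cosine(u, v):
    """Cosine similarity between two plain-list vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def resolve(mentions, threshold=0.5):
    """mentions: list of (embedding, event_type) in document order.
    Returns one cluster id per mention (antecedent-linking style)."""
    clusters, next_id = [], 0
    for i, (emb_i, type_i) in enumerate(mentions):
        best_j, best_score = None, threshold
        for j in range(i):
            emb_j, type_j = mentions[j]
            if type_j != type_i:          # type-guided gate: skip mismatched types
                continue
            score = cosine(emb_i, emb_j)
            if score > best_score:
                best_j, best_score = j, score
        if best_j is None:                # no antecedent found: start a new chain
            clusters.append(next_id)
            next_id += 1
        else:                             # link to the best same-type antecedent
            clusters.append(clusters[best_j])
    return clusters
```

In the real model the similarity would come from learned representations and the type gate from a jointly trained event detector; here both are stand-ins to make the control flow concrete.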
Related papers
- Double Mixture: Towards Continual Event Detection from Speech [60.33088725100812]
Speech event detection is crucial for multimedia retrieval, involving the tagging of both semantic and acoustic events.
This paper tackles two primary challenges in speech event detection: the continual integration of new events without forgetting previous ones, and the disentanglement of semantic from acoustic events.
We propose a novel method, 'Double Mixture,' which merges speech expertise with robust memory mechanisms to enhance adaptability and prevent forgetting.
arXiv Detail & Related papers (2024-04-20T06:32:00Z)
- MambaPupil: Bidirectional Selective Recurrent model for Event-based Eye tracking [50.26836546224782]
Event-based eye tracking shows great promise thanks to its high temporal resolution and low redundancy.
The diversity and abruptness of eye movement patterns, including blinking, fixating, saccades, and smooth pursuit, pose significant challenges for eye localization.
This paper proposes a bidirectional long-term sequence modeling and time-varying state selection mechanism to fully utilize contextual temporal information.
arXiv Detail & Related papers (2024-04-18T11:09:25Z)
- EventBind: Learning a Unified Representation to Bind Them All for Event-based Open-world Understanding [7.797154022794006]
EventBind is a novel framework that unleashes the potential of vision-language models (VLMs) for event-based recognition.
We first introduce a novel event encoder that subtly models the temporal information from events.
We then design a text encoder that generates content prompts and utilizes hybrid text prompts to enhance EventBind's generalization ability.
arXiv Detail & Related papers (2023-08-06T15:05:42Z)
- Unifying Event Detection and Captioning as Sequence Generation via Pre-Training [53.613265415703815]
We propose a unified pre-training and fine-tuning framework to enhance the inter-task association between event detection and captioning.
Our model outperforms the state-of-the-art methods, and can be further boosted when pre-trained on extra large-scale video-text data.
arXiv Detail & Related papers (2022-07-18T14:18:13Z)
- Event Transformer [43.193463048148374]
Event cameras' low power consumption and ability to capture brightness changes at microsecond resolution make them attractive for various computer vision tasks.
Existing event representation methods typically convert events into frames, voxel grids, or spikes for deep neural networks (DNNs).
This work introduces a novel token-based event representation, where each event is considered a fundamental processing unit termed an event-token.
arXiv Detail & Related papers (2022-04-11T15:05:06Z)
- PILED: An Identify-and-Localize Framework for Few-Shot Event Detection [79.66042333016478]
In our study, we employ cloze prompts to elicit event-related knowledge from pretrained language models.
We minimize the number of type-specific parameters, enabling our model to quickly adapt to event detection tasks for new types.
arXiv Detail & Related papers (2022-02-15T18:01:39Z)
- Learning Constraints and Descriptive Segmentation for Subevent Detection [74.48201657623218]
We propose an approach to learning and enforcing constraints that capture dependencies between subevent detection and EventSeg prediction.
We adopt Rectifier Networks for constraint learning and then convert the learned constraints to a regularization term in the loss function of the neural model.
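The "constraints to regularization term" step can be sketched concretely. This is an assumed illustration, not the paper's exact formulation: a learned linear constraint of the form w·p + b ≥ 0 over model probabilities is enforced softly through a hinge penalty added to the task loss; `constraint_penalty`, `regularized_loss`, and `lam` are hypothetical names.

```python
# Hypothetical sketch: converting a learned linear constraint into a soft
# regularization term. The constraint w . probs + b >= 0 incurs zero penalty
# when satisfied and a linearly growing penalty when violated.

def constraint_penalty(probs, w, b):
    """Hinge penalty for one linear constraint over the probability vector."""
    margin = sum(wi * pi for wi, pi in zip(w, probs)) + b
    return max(0.0, -margin)

def regularized_loss(task_loss, probs, constraints, lam=1.0):
    """Total loss = task loss + lam * summed constraint violations.
    constraints: list of (w, b) pairs, e.g. learned by a Rectifier Network."""
    return task_loss + lam * sum(constraint_penalty(probs, w, b)
                                 for w, b in constraints)
```

The key property is that the penalty is differentiable almost everywhere, so the constraint can shape training gradients without hard decoding at inference time.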
arXiv Detail & Related papers (2021-09-13T20:50:37Z)
- Event-LSTM: An Unsupervised and Asynchronous Learning-based Representation for Event-based Data [8.931153235278831]
Event cameras are activity-driven bio-inspired vision sensors.
We propose Event-LSTM, an unsupervised Auto-Encoder architecture made up of LSTM layers.
We also push state-of-the-art event de-noising forward by introducing memory into the de-noising process.
arXiv Detail & Related papers (2021-05-10T09:18:52Z)
- Within-Document Event Coreference with BERT-Based Contextualized Representations [2.3020018305241337]
Event coreference continues to be a challenging problem in information extraction.
Recent advances in contextualized language representations have proven successful in many tasks.
We present a three-part approach that uses representations derived from a pretrained BERT model to train a neural classifier, whose pairwise decisions are then assembled into coreference chains.
arXiv Detail & Related papers (2021-02-15T21:12:43Z)
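The chain-building step described above (turning a pairwise classifier's decisions into coreference chains) can be sketched as follows. This is an assumed post-processing strategy, not necessarily the paper's: pairs scoring above a threshold are linked, and union-find takes the transitive closure to form chains.

```python
# Hypothetical sketch: from pairwise coreference probabilities to chains via
# union-find transitive closure. pair_scores maps (i, j) mention-index pairs
# to a classifier probability; links above the threshold are merged.

def build_chains(n, pair_scores, threshold=0.5):
    """n: number of event mentions; pair_scores: {(i, j): probability}.
    Returns mention indices grouped into coreference chains."""
    parent = list(range(n))

    def find(x):
        # Path-halving find: follow parents to the root, compressing as we go.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Union every pair the classifier links confidently enough.
    for (i, j), p in pair_scores.items():
        if p >= threshold:
            parent[find(i)] = find(j)

    # Group mentions by their root to materialize the chains.
    chains = {}
    for m in range(n):
        chains.setdefault(find(m), []).append(m)
    return sorted(chains.values())
```

A known trade-off of threshold-plus-closure decoding is that one spurious high-scoring pair can merge two otherwise distinct chains, which is why some systems use agglomerative clustering with an average-link criterion instead.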
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.