Pairwise Representation Learning for Event Coreference
- URL: http://arxiv.org/abs/2010.12808v2
- Date: Thu, 20 Jan 2022 19:40:58 GMT
- Title: Pairwise Representation Learning for Event Coreference
- Authors: Xiaodong Yu, Wenpeng Yin, Dan Roth
- Abstract summary: We develop a Pairwise Representation Learning (PairwiseRL) scheme for the event mention pairs.
Our representation supports a finer, structured representation of the text snippet to facilitate encoding events and their arguments.
We show that PairwiseRL, despite its simplicity, outperforms the prior state-of-the-art event coreference systems on both cross-document and within-document event coreference benchmarks.
- Score: 73.10563168692667
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Natural Language Processing tasks such as resolving the coreference of events
require understanding the relations between two text snippets. These tasks are
typically formulated as (binary) classification problems over independently
induced representations of the text snippets. In this work, we develop a
Pairwise Representation Learning (PairwiseRL) scheme for the event mention
pairs, in which we jointly encode a pair of text snippets so that the
representation of each mention in the pair is induced in the context of the
other one. Furthermore, our representation supports a finer, structured
representation of the text snippet to facilitate encoding events and their
arguments. We show that PairwiseRL, despite its simplicity, outperforms the
prior state-of-the-art event coreference systems on both cross-document and
within-document event coreference benchmarks. We also conduct in-depth analysis
in terms of the improvement and the limitation of pairwise representation so as
to provide insights for future work.
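As a rough illustration of the joint-encoding idea (a sketch, not the authors' code), the two event mentions can be packed into a single encoder input with marker tokens around each trigger, so that a transformer's self-attention contextualizes each mention in light of the other. The token names here (`[CLS]`, `[SEP]`, `<m>`) follow BERT-style conventions and are illustrative assumptions:

```python
def build_pair_input(sent_a: str, trigger_a: str,
                     sent_b: str, trigger_b: str,
                     cls: str = "[CLS]", sep: str = "[SEP]",
                     marker: str = "<m>") -> str:
    """Concatenate two event mentions into one input sequence so that an
    encoder can attend across the pair; each event trigger is wrapped in
    marker tokens to expose its position to the model."""
    def mark(sentence: str, trigger: str) -> str:
        # Surround the first occurrence of the trigger with marker tokens.
        return sentence.replace(trigger, f"{marker} {trigger} {marker}", 1)

    return f"{cls} {mark(sent_a, trigger_a)} {sep} {mark(sent_b, trigger_b)} {sep}"
```

In a pairwise setup, this combined string would be tokenized and encoded by a pre-trained language model, with the hidden states at the marker positions pooled into a pair representation for a binary coreference classifier.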
Related papers
- Contextual Document Embeddings [77.22328616983417]
We propose two complementary methods for contextualized document embeddings.
First, an alternative contrastive learning objective that explicitly incorporates the document neighbors into the intra-batch contextual loss.
Second, a new contextual architecture that explicitly encodes neighbor document information into the encoded representation.
arXiv Detail & Related papers (2024-10-03T14:33:34Z)
- Enhancing Document-level Event Argument Extraction with Contextual Clues and Role Relevance [12.239459451494872]
Document-level event argument extraction poses new challenges of long input and cross-sentence inference.
We propose a Span-trigger-based Contextual Pooling and latent Role Guidance model.
arXiv Detail & Related papers (2023-10-08T11:29:10Z)
- Text Revision by On-the-Fly Representation Optimization [76.11035270753757]
Current state-of-the-art methods formulate these tasks as sequence-to-sequence learning problems.
We present an iterative in-place editing approach for text revision, which requires no parallel data.
It achieves competitive and even better performance than state-of-the-art supervised methods on text simplification.
arXiv Detail & Related papers (2022-04-15T07:38:08Z)
- Improving Multi-task Generalization Ability for Neural Text Matching via Prompt Learning [54.66399120084227]
Recent state-of-the-art neural text matching models, built on pre-trained language models (PLMs), struggle to generalize across different matching tasks.
We adopt a specialization-generalization training strategy and refer to it as Match-Prompt.
In the specialization stage, descriptions of different matching tasks are mapped to only a few prompt tokens.
In the generalization stage, the text matching model learns the essential matching signals by being trained on diverse matching tasks.
arXiv Detail & Related papers (2022-04-06T11:01:08Z)
- Analysis of Joint Speech-Text Embeddings for Semantic Matching [3.6423306784901235]
We study a joint speech-text embedding space trained for semantic matching by minimizing the distance between paired utterance and transcription inputs.
We extend our method to incorporate automatic speech recognition through both pretraining and multitask scenarios.
arXiv Detail & Related papers (2022-04-04T04:50:32Z)
- Coherence-Based Distributed Document Representation Learning for Scientific Documents [9.646001537050925]
We propose a coupled text pair embedding (CTPE) model to learn the representation of scientific documents.
We use negative sampling to construct uncoupled text pairs whose two parts are from different documents.
We train the model to judge whether the text pair is coupled or uncoupled and use the obtained embedding of coupled text pairs as the embedding of documents.
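A minimal sketch of that pair-construction step (hypothetical function and variable names; how CTPE actually splits a document into its two parts may differ):

```python
import random

def make_ctpe_pairs(documents, num_negatives=1, seed=0):
    """documents: list of (part_a, part_b) tuples, the two halves of one
    document. Returns (text_a, text_b, label) triples: label 1 for a
    coupled pair (both halves from the same document), label 0 for an
    uncoupled pair whose second half is sampled from another document."""
    rng = random.Random(seed)
    pairs = []
    for i, (part_a, part_b) in enumerate(documents):
        pairs.append((part_a, part_b, 1))  # coupled (positive) pair
        others = [k for k in range(len(documents)) if k != i]
        # Negative sampling: pair this document's first half with the
        # second half of a different document.
        for j in rng.sample(others, min(num_negatives, len(others))):
            pairs.append((part_a, documents[j][1], 0))  # uncoupled pair
    return pairs
```

A binary classifier trained on such pairs learns whether two halves cohere, and the learned embedding of a coupled pair then serves as the document representation.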
arXiv Detail & Related papers (2022-01-08T15:29:21Z)
- Capturing Event Argument Interaction via A Bi-Directional Entity-Level Recurrent Decoder [7.60457018063735]
We formalize event argument extraction (EAE) as a Seq2Seq-like learning problem for the first time.
A neural architecture with a novel Bi-directional Entity-level Recurrent Decoder (BERD) is proposed to generate argument roles.
arXiv Detail & Related papers (2021-07-01T02:55:12Z)
- Conversational Semantic Parsing [50.954321571100294]
Session-based properties such as co-reference resolution and context carryover are processed downstream in a pipelined system.
We release a new session-based, compositional task-oriented parsing dataset of 20k sessions consisting of 60k utterances.
We propose a new family of Seq2Seq models for the session-based parsing above, which achieve better or comparable performance to the current state-of-the-art on ATIS, SNIPS, TOP and DSTC2.
arXiv Detail & Related papers (2020-09-28T22:08:00Z)
- Consensus-Aware Visual-Semantic Embedding for Image-Text Matching [69.34076386926984]
Image-text matching plays a central role in bridging vision and language.
Most existing approaches only rely on the image-text instance pair to learn their representations.
We propose a Consensus-aware Visual-Semantic Embedding model to incorporate the consensus information.
arXiv Detail & Related papers (2020-07-17T10:22:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.