Filling in the Gaps: Efficient Event Coreference Resolution using Graph
Autoencoder Networks
- URL: http://arxiv.org/abs/2310.11965v1
- Date: Wed, 18 Oct 2023 13:44:58 GMT
- Title: Filling in the Gaps: Efficient Event Coreference Resolution using Graph
Autoencoder Networks
- Authors: Loic De Langhe, Orphée De Clercq, Veronique Hoste
- Abstract summary: We introduce a novel and efficient method for Event Coreference Resolution (ECR) applied to a lower-resourced language domain.
By framing ECR as a graph reconstruction task, we are able to combine deep semantic embeddings with structural coreference chain knowledge.
Our method significantly outperforms classical mention-pair methods on a large Dutch event coreference corpus.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a novel and efficient method for Event Coreference Resolution
(ECR) applied to a lower-resourced language domain. By framing ECR as a graph
reconstruction task, we are able to combine deep semantic embeddings with
structural coreference chain knowledge to create a parameter-efficient family
of Graph Autoencoder models (GAE). Our method significantly outperforms
classical mention-pair methods on a large Dutch event coreference corpus in
terms of overall score, efficiency and training speed. Additionally, we show
that our models are consistently able to classify more difficult coreference
links and are far more robust in low-data settings when compared to
transformer-based mention-pair coreference algorithms.
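A minimal sketch of the framing described in the abstract, not the authors' released implementation: event mentions become graph nodes carrying semantic embeddings, observed coreference links form a partial adjacency matrix, and a Kipf-and-Welling-style graph autoencoder with an inner-product decoder is trained to reconstruct the full link structure. The embedding dimension, layer sizes, and loss are assumptions.
```python
# Sketch only: ECR as graph reconstruction with a simple Graph Autoencoder.
import torch
import torch.nn as nn

class CorefGAE(nn.Module):
    def __init__(self, emb_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.enc1 = nn.Linear(emb_dim, hidden_dim)    # first GCN-style layer
        self.enc2 = nn.Linear(hidden_dim, hidden_dim)  # second GCN-style layer

    def encode(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        # Propagate semantic mention embeddings over the observed coreference graph.
        h = torch.relu(adj_norm @ self.enc1(x))
        return adj_norm @ self.enc2(h)

    def decode(self, z: torch.Tensor) -> torch.Tensor:
        # Inner-product decoder: predicted probability of a coreference link.
        return torch.sigmoid(z @ z.t())

def normalize(adj: torch.Tensor) -> torch.Tensor:
    # Symmetric normalisation with self-loops, as in standard GCN/GAE setups.
    a = adj + torch.eye(adj.size(0))
    d = a.sum(1).pow(-0.5)
    return d.unsqueeze(1) * a * d.unsqueeze(0)

# Toy usage: 6 event mentions, 768-dim embeddings, a few known coreference links.
x = torch.randn(6, 768)
adj = torch.zeros(6, 6)
adj[0, 1] = adj[1, 0] = 1.0
adj[2, 3] = adj[3, 2] = 1.0

model = CorefGAE(emb_dim=768)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    recon = model.decode(model.encode(x, normalize(adj)))
    loss = nn.functional.binary_cross_entropy(recon, adj)  # reconstruct link structure
    loss.backward()
    opt.step()
```
Missing ("gap") links are then read off the reconstructed adjacency by thresholding the predicted probabilities for mention pairs that had no observed edge.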
Related papers
- Contextualization Distillation from Large Language Model for Knowledge
Graph Completion [51.126166442122546]
We introduce the Contextualization Distillation strategy, a plug-in-and-play approach compatible with both discriminative and generative KGC frameworks.
Our method begins by instructing large language models to transform compact, structural triplets into context-rich segments.
Comprehensive evaluations across diverse datasets and KGC techniques highlight the efficacy and adaptability of our approach.
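A minimal sketch of the triplet-to-context step summarised above. `call_llm` is a hypothetical stand-in for whatever LLM interface is available, and the prompt wording and downstream formatting are assumptions, not the paper's exact setup.
```python
# Sketch only: turn compact KG triplets into context-rich training text.
from typing import Callable

PROMPT_TEMPLATE = (
    "Given the knowledge-graph triplet ({head}, {relation}, {tail}), "
    "write a short descriptive paragraph that explains this fact in context."
)

def contextualize_triplet(head: str, relation: str, tail: str,
                          call_llm: Callable[[str], str]) -> str:
    """Turn a compact triplet into a context-rich text segment via an LLM."""
    prompt = PROMPT_TEMPLATE.format(head=head, relation=relation, tail=tail)
    return call_llm(prompt)

def build_kgc_training_text(triplet: tuple[str, str, str],
                            call_llm: Callable[[str], str]) -> str:
    # The generated segment is appended to the raw triplet so that either a
    # discriminative or a generative KGC model can consume the enriched input.
    head, rel, tail = triplet
    context = contextualize_triplet(head, rel, tail, call_llm)
    return f"{head} [SEP] {rel} [SEP] {tail} [SEP] {context}"

# Toy usage with a dummy "LLM" so the sketch runs without external services.
dummy_llm = lambda prompt: "Ghent is a city in the Flemish Region of Belgium."
print(build_kgc_training_text(("Ghent", "located_in", "Belgium"), dummy_llm))
```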
arXiv Detail & Related papers (2024-01-28T08:56:49Z) - Network Alignment with Transferable Graph Autoencoders [79.89704126746204]
We propose a novel graph autoencoder architecture designed to extract powerful and robust node embeddings.
We prove that the generated embeddings are associated with the eigenvalues and eigenvectors of the graphs.
Our proposed framework also leverages transfer learning and data augmentation to achieve efficient network alignment at a very large scale without retraining.
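A minimal sketch of embedding-based network alignment. A plain spectral embedding stands in for the paper's trained (and transferable) graph autoencoder, reflecting the stated link between the embeddings and the graph's eigenvalues and eigenvectors; the greedy nearest-neighbour matching step is an assumption.
```python
# Sketch only: align two graphs by matching node embeddings.
import numpy as np

def spectral_embedding(adj: np.ndarray, dim: int) -> np.ndarray:
    # Dominant eigenvectors of the symmetric adjacency serve as node embeddings.
    vals, vecs = np.linalg.eigh(adj)
    order = np.argsort(-np.abs(vals))[:dim]
    vecs = vecs[:, order]
    # Resolve eigenvector sign ambiguity so embeddings are comparable across graphs.
    signs = np.sign(vecs[np.argmax(np.abs(vecs), axis=0), np.arange(vecs.shape[1])])
    return vecs * signs * np.abs(vals[order])

def align(adj_a: np.ndarray, adj_b: np.ndarray, dim: int = 3) -> list[tuple[int, int]]:
    za, zb = spectral_embedding(adj_a, dim), spectral_embedding(adj_b, dim)
    # Greedy nearest-neighbour matching in embedding space.
    pairs, used = [], set()
    for i, z in enumerate(za):
        dists = np.linalg.norm(zb - z, axis=1)
        j = min((j for j in range(len(zb)) if j not in used), key=lambda j: dists[j])
        pairs.append((i, j))
        used.add(j)
    return pairs

# Toy usage: align a small graph with a permuted copy of itself.
A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
perm = [2, 0, 3, 1]
B = A[np.ix_(perm, perm)]
print(align(A, B))
```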
arXiv Detail & Related papers (2023-10-05T02:58:29Z) - Scalable Learning of Latent Language Structure With Logical Offline
Cycle Consistency [71.42261918225773]
Conceptually, LOCCO can be viewed as a form of self-learning where the semantic parser being trained is used to generate annotations for unlabeled text.
As an added bonus, the annotations produced by LOCCO can be trivially repurposed to train a neural text generation model.
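A minimal self-training sketch in the spirit of this summary: the parser annotates unlabeled text, confident parses are kept as pseudo-labels, and the resulting (text, parse) pairs can be flipped to supervise a text generator. The `SemanticParser` interface is a hypothetical placeholder and does not reproduce LOCCO's logical offline cycle-consistency objective.
```python
# Sketch only: generic parser self-training loop with reusable annotations.
from typing import Protocol

class SemanticParser(Protocol):
    def parse(self, sentence: str) -> str: ...                  # returns a logical form
    def score(self, sentence: str, parse: str) -> float: ...    # confidence in [0, 1]
    def train(self, pairs: list[tuple[str, str]]) -> None: ...

def self_train(parser: SemanticParser, labeled: list[tuple[str, str]],
               unlabeled: list[str], rounds: int = 3, threshold: float = 0.9):
    pairs = list(labeled)
    for _ in range(rounds):
        parser.train(pairs)
        # Annotate unlabeled text with the parser itself; keep confident parses only.
        pseudo = [(s, parser.parse(s)) for s in unlabeled]
        pairs = labeled + [(s, p) for s, p in pseudo if parser.score(s, p) >= threshold]
    # The same pairs can be flipped to train a parse-to-text generation model.
    generation_data = [(p, s) for s, p in pairs]
    return parser, generation_data
```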
arXiv Detail & Related papers (2023-05-31T16:47:20Z) - EGRC-Net: Embedding-induced Graph Refinement Clustering Network [66.44293190793294]
We propose a novel graph clustering network called Embedding-Induced Graph Refinement Clustering Network (EGRC-Net).
EGRC-Net effectively utilizes the learned embedding to adaptively refine the initial graph and enhance the clustering performance.
Our proposed methods consistently outperform several state-of-the-art approaches.
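A minimal sketch of the embedding-induced refinement idea: similarities among the learned embeddings are used to update the initial adjacency before clustering continues. The kNN construction and the fusion weight are assumptions, not EGRC-Net's exact mechanism.
```python
# Sketch only: refine an initial graph with an embedding-induced kNN graph.
import numpy as np

def knn_graph(z: np.ndarray, k: int) -> np.ndarray:
    # Build a symmetric k-nearest-neighbour graph from embedding cosine similarity.
    sim = z @ z.T / (np.linalg.norm(z, axis=1, keepdims=True) *
                     np.linalg.norm(z, axis=1) + 1e-9)
    np.fill_diagonal(sim, -np.inf)
    adj = np.zeros_like(sim)
    for i, row in enumerate(sim):
        adj[i, np.argsort(-row)[:k]] = 1.0
    return np.maximum(adj, adj.T)

def refine_graph(initial_adj: np.ndarray, z: np.ndarray,
                 k: int = 5, alpha: float = 0.5) -> np.ndarray:
    # Blend the initial graph with the embedding-induced graph.
    return alpha * initial_adj + (1.0 - alpha) * knn_graph(z, k)

# Toy usage: refine a random initial graph with 2-D embeddings.
rng = np.random.default_rng(0)
z = rng.normal(size=(20, 2))
initial = (rng.random((20, 20)) < 0.1).astype(float)
initial = np.maximum(initial, initial.T)
print(refine_graph(initial, z, k=3).shape)
```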
arXiv Detail & Related papers (2022-11-19T09:08:43Z) - CascadER: Cross-Modal Cascading for Knowledge Graph Link Prediction [22.96768147978534]
We propose a tiered ranking architecture, CascadER, to maintain the ranking accuracy of full ensembling while improving efficiency considerably.
CascadER uses LMs to rerank the outputs of more efficient base KGEs, relying on an adaptive subset selection scheme aimed at invoking the LMs minimally while maximizing accuracy gain over the KGE.
Our empirical analyses reveal that diversity of models across modalities and preservation of individual models' confidence signals help explain the effectiveness of CascadER.
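A minimal sketch of a two-tier cascade in the spirit of this summary: a cheap KGE scores every candidate tail entity and an expensive LM reranker is invoked only on a small top subset. `kge_score` and `lm_score` are hypothetical callables, and the fixed `budget` is a placeholder for CascadER's adaptive subset selection.
```python
# Sketch only: cascaded KGE -> LM reranking for link prediction.
from typing import Callable, Sequence

def cascade_rank(head: str, relation: str, candidates: Sequence[str],
                 kge_score: Callable[[str, str, str], float],
                 lm_score: Callable[[str, str, str], float],
                 budget: int = 10) -> list[str]:
    # Tier 1: rank all candidates with the efficient KGE.
    base = sorted(candidates, key=lambda t: kge_score(head, relation, t), reverse=True)
    # Tier 2: rerank only the top `budget` candidates with the LM;
    # everything below the cut keeps its KGE ordering.
    top, rest = base[:budget], base[budget:]
    reranked = sorted(top, key=lambda t: lm_score(head, relation, t), reverse=True)
    return reranked + rest

# Toy usage with dummy scorers so the sketch runs standalone.
ents = [f"e{i}" for i in range(100)]
cheap = lambda h, r, t: abs(hash((h, r, t))) % 100
costly = lambda h, r, t: len(t)
print(cascade_rank("brussels", "capital_of", ents, cheap, costly)[:5])
```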
arXiv Detail & Related papers (2022-05-16T22:55:45Z) - Causal Incremental Graph Convolution for Recommender System Retraining [89.25922726558875]
Real-world recommender systems need to be regularly retrained to keep up with new data.
In this work, we consider how to efficiently retrain graph convolution network (GCN) based recommender models.
arXiv Detail & Related papers (2021-08-16T04:20:09Z) - Event-Driven News Stream Clustering using Entity-Aware Contextual
Embeddings [14.225334321146779]
We propose a method for online news stream clustering that is a variant of the non-parametric streaming K-means algorithm.
Our model uses a combination of sparse and dense document representations and aggregates document-cluster similarity across these multiple representations.
We show that the use of a suitable fine-tuning objective and external knowledge in pre-trained transformer models yields significant improvements in the effectiveness of contextual embeddings.
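A minimal sketch of a non-parametric streaming clustering loop that aggregates document-cluster similarity over several representations (e.g., a sparse TF-IDF vector and a dense contextual embedding). The weights and the new-cluster threshold are assumptions, not the paper's tuned configuration.
```python
# Sketch only: streaming K-means variant with multi-representation similarity.
import numpy as np

class StreamingClusterer:
    def __init__(self, weights: tuple[float, float] = (0.5, 0.5), threshold: float = 0.4):
        self.weights, self.threshold = weights, threshold
        self.centroids: list[list[np.ndarray]] = []   # one centroid per representation
        self.counts: list[int] = []

    @staticmethod
    def _cos(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    def add(self, views: list[np.ndarray]) -> int:
        # Aggregate similarity to each cluster across all representations.
        if self.centroids:
            sims = [sum(w * self._cos(v, c) for w, v, c in zip(self.weights, views, cent))
                    for cent in self.centroids]
            best = int(np.argmax(sims))
            if sims[best] >= self.threshold:
                n = self.counts[best]
                # Incremental centroid update, as in streaming K-means.
                self.centroids[best] = [(c * n + v) / (n + 1)
                                        for c, v in zip(self.centroids[best], views)]
                self.counts[best] += 1
                return best
        # Otherwise open a new cluster (the non-parametric step).
        self.centroids.append([v.copy() for v in views])
        self.counts.append(1)
        return len(self.centroids) - 1

# Toy usage: two representations per document (sparse-ish and dense stand-ins).
rng = np.random.default_rng(1)
clusterer = StreamingClusterer()
for _ in range(10):
    print(clusterer.add([rng.random(50), rng.normal(size=16)]), end=" ")
```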
arXiv Detail & Related papers (2021-01-26T19:58:30Z) - SCG-Net: Self-Constructing Graph Neural Networks for Semantic
Segmentation [23.623276007011373]
We propose a module that learns a long-range dependency graph directly from the image and uses it to propagate contextual information efficiently.
The module is optimised via a novel adaptive diagonal enhancement method and a variational lower bound.
When the module is incorporated into a neural network (SCG-Net), semantic segmentation is performed in an end-to-end manner with competitive performance.
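A minimal sketch of the "learn a graph from the features, then propagate context" idea: spatial positions of a feature map become graph nodes, an adjacency is predicted from projected node features, and one propagation step mixes long-range context. The adaptive diagonal enhancement and the variational lower bound of the actual module are omitted here.
```python
# Sketch only: self-constructed graph over feature-map positions.
import torch
import torch.nn as nn

class SelfConstructingGraphBlock(nn.Module):
    def __init__(self, channels: int, node_dim: int = 64):
        super().__init__()
        self.to_nodes = nn.Conv2d(channels, node_dim, kernel_size=1)
        self.propagate = nn.Linear(node_dim, node_dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        b, _, h, w = feats.shape
        nodes = self.to_nodes(feats).flatten(2).transpose(1, 2)   # (b, h*w, node_dim)
        # Learn a dense adjacency directly from the features (long-range dependencies).
        adj = torch.softmax(nodes @ nodes.transpose(1, 2) / nodes.size(-1) ** 0.5, dim=-1)
        context = torch.relu(self.propagate(adj @ nodes))          # graph propagation
        return context.transpose(1, 2).reshape(b, -1, h, w)

# Toy usage on a fake 16x16 feature map.
block = SelfConstructingGraphBlock(channels=32)
print(block(torch.randn(2, 32, 16, 16)).shape)  # torch.Size([2, 64, 16, 16])
```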
arXiv Detail & Related papers (2020-09-03T12:13:09Z) - Active Learning for Coreference Resolution using Discrete Annotation [76.36423696634584]
We improve upon pairwise annotation for active learning in coreference resolution.
We ask annotators to identify mention antecedents if a presented mention pair is deemed not coreferent.
In experiments with existing benchmark coreference datasets, we show that the signal from this additional question leads to significant performance gains per human-annotation hour.
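A minimal sketch of the annotation loop described above: each queried pair either confirms a coreference link or triggers a follow-up question asking for the true antecedent. The `Annotator` interface is a hypothetical stand-in for a real annotation tool, and the active-learning query selection itself is not shown.
```python
# Sketch only: discrete annotation with an antecedent follow-up question.
from typing import Optional, Protocol, Sequence

class Annotator(Protocol):
    def is_coreferent(self, mention_a: str, mention_b: str) -> bool: ...
    def pick_antecedent(self, mention: str,
                        candidates: Sequence[str]) -> Optional[str]: ...

def annotate_pair(anaphor: str, candidate: str, preceding: Sequence[str],
                  annotator: Annotator) -> Optional[tuple[str, str]]:
    if annotator.is_coreferent(candidate, anaphor):
        return (candidate, anaphor)
    # Discrete follow-up: since the pair was rejected, ask which earlier mention
    # (if any) is the actual antecedent, so each annotation hour yields more links.
    antecedent = annotator.pick_antecedent(anaphor, preceding)
    return (antecedent, anaphor) if antecedent is not None else None
```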
arXiv Detail & Related papers (2020-04-28T17:17:11Z) - Self-Constructing Graph Convolutional Networks for Semantic Labeling [23.623276007011373]
We propose a novel architecture called the Self-Constructing Graph (SCG), which makes use of learnable latent variables to generate embeddings.
SCG can automatically obtain optimized non-local context graphs from complex-shaped objects in aerial imagery.
We demonstrate the effectiveness and flexibility of the proposed SCG on the publicly available ISPRS Vaihingen dataset.
arXiv Detail & Related papers (2020-03-15T21:55:24Z) - ENIGMA Anonymous: Symbol-Independent Inference Guiding Machine (system
description) [0.4893345190925177]
We describe an implementation of gradient boosting and neural guidance of saturation-style automated theorem provers.
For the gradient-boosting guidance, we manually create abstracted features by considering arity-based encodings of formulas.
For the neural guidance, we use symbol-independent graph neural networks (GNNs) and their embedding of the terms and clauses.
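A minimal sketch of the arity-based, symbol-independent feature idea: a clause is represented as a nested (symbol, args...) tree, and its feature vector counts subterms by arity while ignoring symbol names. The clause encoding and the toy labels are assumptions used only to show the feature/scorer shape; the GNN branch is not sketched.
```python
# Sketch only: arity-count features plus a gradient-boosted clause scorer.
from collections import Counter
from sklearn.ensemble import GradientBoostingClassifier

MAX_ARITY = 5

def arity_features(term) -> list[int]:
    """Count subterms by arity; symbol names never enter the encoding."""
    counts: Counter = Counter()

    def walk(t):
        if isinstance(t, tuple):          # (symbol, arg1, arg2, ...)
            counts[min(len(t) - 1, MAX_ARITY)] += 1
            for arg in t[1:]:
                walk(arg)
        else:                             # variable or constant leaf
            counts[0] += 1

    walk(term)
    return [counts[a] for a in range(MAX_ARITY + 1)]

# Toy clauses as term trees; labels mark which ones were "useful" in past proofs.
clauses = [("f", ("g", "x"), "y"), ("h", "x"), ("f", "x", ("g", ("g", "y")))]
labels = [1, 0, 1]

scorer = GradientBoostingClassifier().fit([arity_features(c) for c in clauses], labels)
print(scorer.predict_proba([arity_features(("f", ("h", "x"), "y"))]))
```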
arXiv Detail & Related papers (2020-02-13T09:44:38Z)