Rescue Implicit and Long-tail Cases: Nearest Neighbor Relation Extraction
- URL: http://arxiv.org/abs/2210.11800v1
- Date: Fri, 21 Oct 2022 08:25:10 GMT
- Title: Rescue Implicit and Long-tail Cases: Nearest Neighbor Relation Extraction
- Authors: Zhen Wan, Qianying Liu, Zhuoyuan Mao, Fei Cheng, Sadao Kurohashi,
Jiwei Li
- Abstract summary: We introduce a simple enhancement of relation extraction (RE) using $k$ nearest neighbors ($k$NN-RE).
$k$NN-RE allows the model to consult training relations at test time through a nearest-neighbor search.
Experimental results show that the proposed $k$NN-RE achieves state-of-the-art performance on a variety of supervised RE datasets.
- Score: 36.85068584859058
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Relation extraction (RE) has achieved remarkable progress with the help of
pre-trained language models. However, existing RE models usually cannot handle
two situations caused by language complexity and data sparsity: implicit
expressions and long-tail relation types. In this paper, we introduce a
simple enhancement of RE using $k$ nearest neighbors ($k$NN-RE). $k$NN-RE
allows the model to consult training relations at test time through a
nearest-neighbor search and provides a simple yet effective means to tackle the
two issues above. Additionally, we observe that $k$NN-RE serves as an effective
way to leverage distant supervision (DS) data for RE. Experimental results show
that the proposed $k$NN-RE achieves state-of-the-art performance on a variety
of supervised RE datasets, i.e., ACE05, SciERC, and Wiki80, and outperforms
the best model to date on the i2b2 and Wiki80 datasets in the setting where
DS data is allowed. Our code and models are available at:
https://github.com/YukinoWan/kNN-RE.
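Since the abstract only names the mechanism, here is a minimal sketch of the $k$NN-RE idea: the test instance's relation representation is compared against a datastore of training representations, the $k$ nearest neighbors vote via a softmax over negative distances, and the resulting distribution is interpolated with the base classifier's output. The function names, the L2 metric, and the temperature/interpolation hyperparameters are illustrative assumptions, not the authors' exact implementation (see their repository for that).

```python
# Minimal sketch of the kNN-RE idea, assuming precomputed relation
# representations. `datastore_vecs`, `datastore_labels`, `k`, `temperature`,
# and `lam` are illustrative assumptions, not the paper's exact setup.
import numpy as np

def knn_re_predict(test_vec, datastore_vecs, datastore_labels,
                   model_probs, k=16, temperature=10.0, lam=0.5):
    """Interpolate a relation classifier's distribution with a kNN
    distribution built from the k nearest training relations."""
    # Distance from the test representation to every stored training vector.
    dists = np.linalg.norm(datastore_vecs - test_vec, axis=1)
    nearest = np.argsort(dists)[:k]          # indices of the k closest

    # Softmax over negative distances turns neighbors into weights.
    weights = np.exp(-dists[nearest] / temperature)
    weights /= weights.sum()

    # Accumulate neighbor weights per relation label.
    knn_probs = np.zeros_like(model_probs)
    for idx, w in zip(nearest, weights):
        knn_probs[datastore_labels[idx]] += w

    # Final prediction mixes the parametric and non-parametric views.
    return lam * knn_probs + (1.0 - lam) * model_probs
```

In this view, an implicit or long-tail test relation can be rescued whenever a similar training instance sits nearby in representation space, even when the classifier itself is uncertain.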
Related papers
- Great Memory, Shallow Reasoning: Limits of $k$NN-LMs [71.73611113995143]
$k$NN-LMs, which integrate retrieval with next-word prediction, have demonstrated strong performance in language modeling.
We ask whether this improved ability to recall information really translates into downstream abilities.
arXiv Detail & Related papers (2024-08-21T17:59:05Z)
- Exploiting Pre-trained Models for Drug Target Affinity Prediction with Nearest Neighbors [58.661454334877256]
Drug-Target binding Affinity (DTA) prediction is essential for drug discovery.
Despite the application of deep learning methods to DTA prediction, the achieved accuracy remains suboptimal.
We propose $k$NN-DTA, a non-representation embedding-based retrieval method adopted on a pre-trained DTA prediction model.
arXiv Detail & Related papers (2024-07-21T15:49:05Z)
- REST: Enhancing Group Robustness in DNNs through Reweighted Sparse Training [49.581884130880944]
Deep neural networks (DNNs) have proven effective in various domains.
However, they often struggle to perform well on certain minority groups during inference.
arXiv Detail & Related papers (2023-12-05T16:27:54Z)
- Nearest Neighbor Search over Vectorized Lexico-Syntactic Patterns for Relation Extraction from Financial Documents [1.068607542484439]
We introduce a simple approach that consults training relations at test time through a nearest-neighbor search over dense vectors of lexico-syntactic patterns.
We evaluate our approach on REFinD and show that our method achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-10-26T18:19:56Z)
- Relational Extraction on Wikipedia Tables using Convolutional and Memory Networks [6.200672130699805]
Relation extraction (RE) is the task of extracting relations between entities in text.
We introduce a new model consisting of Convolutional Neural Network (CNN) and Bidirectional-Long Short Term Memory (BiLSTM) network to encode entities.
arXiv Detail & Related papers (2023-07-11T22:36:47Z)
- Enriching Relation Extraction with OpenIE [70.52564277675056]
Relation extraction (RE) is a sub-discipline of information extraction (IE).
In this work, we explore how recent approaches for open information extraction (OpenIE) may help to improve the task of RE.
Our experiments over two annotated corpora, KnowledgeNet and FewRel, demonstrate the improved accuracy of our enriched models.
arXiv Detail & Related papers (2022-12-19T11:26:23Z)
- You can't pick your neighbors, or can you? When and how to rely on retrieval in the $k$NN-LM [65.74934004876914]
Retrieval-enhanced language models (LMs) condition their predictions on text retrieved from large external datastores.
One such approach, the $k$NN-LM, interpolates any existing LM's predictions with the output of a $k$-nearest neighbors model; a minimal sketch of this interpolation follows the list below.
We empirically measure the effectiveness of our approach on two English language modeling datasets.
arXiv Detail & Related papers (2022-10-28T02:57:40Z)
- Automatically Generating Counterfactuals for Relation Extraction [18.740447044960796]
Relation extraction (RE) is a fundamental task in natural language processing.
Current deep neural models have achieved high accuracy but are easily affected by spurious correlations.
We develop a novel approach to derive contextual counterfactuals for entities.
arXiv Detail & Related papers (2022-02-22T04:46:10Z)
- SPLADE v2: Sparse Lexical and Expansion Model for Information Retrieval [11.38022203865326]
The SPLADE model provides highly sparse representations and competitive results with respect to state-of-the-art dense and sparse approaches.
We modify the pooling mechanism, benchmark a model solely based on document expansion, and introduce models trained with distillation.
Overall, SPLADE is considerably improved with more than $9$% gains on NDCG@10 on TREC DL 2019, leading to state-of-the-art results on the BEIR benchmark.
arXiv Detail & Related papers (2021-09-21T10:43:42Z)
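Two of the entries above (the $k$NN-LM papers) rest on the same interpolation trick as $k$NN-RE, applied to next-token prediction. Here is a hedged sketch of that interpolation; the datastore layout, the L2 metric, and the fixed mixing weight are illustrative assumptions, not any paper's exact implementation.

```python
# Sketch of kNN-LM interpolation: p(w) = lam * p_kNN(w) + (1 - lam) * p_LM(w).
# `keys` holds stored context vectors; `next_tokens` holds the token id that
# followed each stored context. Both are illustrative stand-ins for a real
# datastore built from the training corpus.
import numpy as np

def knn_lm_next_token(p_lm, context_vec, keys, next_tokens, k=8, lam=0.25):
    dists = np.linalg.norm(keys - context_vec, axis=1)  # distance to each key
    nearest = np.argsort(dists)[:k]                     # k closest contexts
    weights = np.exp(-dists[nearest])                   # softmax weights over
    weights /= weights.sum()                            # negative distances
    p_knn = np.zeros_like(p_lm)                         # kNN token distribution
    for idx, w in zip(nearest, weights):
        p_knn[next_tokens[idx]] += w
    return lam * p_knn + (1.0 - lam) * p_lm             # interpolated p(w)
```

The $k$NN-RE sketch after the abstract is the same recipe with relation labels in place of next tokens, which is why these retrieval-LM papers appear among the related work.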
This list is automatically generated from the titles and abstracts of the papers in this site.