Advancing Relation Extraction through Language Probing with Exemplars from Set Co-Expansion
- URL: http://arxiv.org/abs/2308.11720v1
- Date: Fri, 18 Aug 2023 00:56:35 GMT
- Title: Advancing Relation Extraction through Language Probing with Exemplars from Set Co-Expansion
- Authors: Yerong Li, Roxana Girju
- Abstract summary: Relation Extraction (RE) is a pivotal task in automatically extracting structured information from unstructured text.
We present a multi-faceted approach that integrates representative examples through co-set expansion.
Our method yields at least a 1 percent improvement in accuracy in most settings.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Relation Extraction (RE) is a pivotal task in automatically extracting
structured information from unstructured text. In this paper, we present a
multi-faceted approach that integrates representative examples through
co-set expansion. The primary goal of our method is to enhance relation
classification accuracy and mitigate confusion between contrastive classes.
Our approach begins by seeding each relationship class with representative
examples. Subsequently, our co-set expansion algorithm enriches training
objectives by incorporating similarity measures between target pairs and
representative pairs from the target class. Moreover, the co-set expansion
process involves a class ranking procedure that takes into account exemplars
from contrastive classes. Contextual details encompassing relation mentions are
harnessed via context-free Hearst patterns to ascertain contextual similarity.
Empirical evaluation demonstrates the efficacy of our co-set expansion
approach, resulting in a significant enhancement of relation classification
performance. Our method yields at least a 1 percent improvement in accuracy
in most settings, on top of existing fine-tuning
approaches. To further refine our approach, we conduct an in-depth analysis
that focuses on tuning contrastive examples. This strategic selection and
tuning effectively reduce confusion between classes sharing similarities,
leading to a more precise classification process.
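The class-ranking step described above — scoring a target pair against each class's representative exemplar pairs — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function and variable names (`rank_classes`, `cosine`, `exemplars_by_class`) are hypothetical, and the paper's actual similarity measure and contrastive ranking procedure may differ.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def rank_classes(candidate_vec, exemplars_by_class):
    """Score a candidate relation mention against each class's exemplar
    embeddings, then rank classes by mean exemplar similarity.

    candidate_vec: embedding of the target entity pair.
    exemplars_by_class: dict mapping class label -> list of exemplar embeddings.
    Returns a list of (label, score) pairs, best-scoring class first.
    """
    scores = {
        label: float(np.mean([cosine(candidate_vec, e) for e in vecs]))
        for label, vecs in exemplars_by_class.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

In the paper's framing, contrastive classes would also contribute exemplars to this ranking so that easily confused classes compete directly; the sketch above shows only the basic similarity-to-exemplar scoring.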
Experimental results underscore the effectiveness of our proposed framework
for relation extraction. The synergy between co-set expansion and context-aware
prompt tuning substantially contributes to improved classification accuracy.
Furthermore, the reduction in confusion between contrastive classes through
contrastive example tuning validates the robustness and reliability of our
method.
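The abstract attributes contextual similarity to context-free Hearst patterns. As a rough illustration only — the paper's actual pattern inventory is not given here, and these three patterns are the classic textbook examples — a minimal pattern matcher could look like:

```python
import re

# Classic Hearst patterns for hypernym/hyponym contexts; illustrative, not
# the paper's pattern set. Greedy spans keep the sketch simple.
HEARST_PATTERNS = [
    re.compile(r"(\w[\w ]*?) such as (\w[\w ]*)"),
    re.compile(r"(\w[\w ]*?) and other (\w[\w ]*)"),
    re.compile(r"(\w[\w ]*?) including (\w[\w ]*)"),
]

def hearst_pairs(sentence):
    """Extract (broader term, narrower term) pairs matched by the patterns."""
    pairs = []
    for pat in HEARST_PATTERNS:
        for m in pat.finditer(sentence):
            pairs.append((m.group(1).strip(), m.group(2).strip()))
    return pairs
```

Contexts surrounding two relation mentions that fire the same pattern would then count as contextually similar for the ranking step.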
Related papers
- Deep Boosting Learning: A Brand-new Cooperative Approach for Image-Text Matching [53.05954114863596]
We propose a brand-new Deep Boosting Learning (DBL) algorithm for image-text matching.
An anchor branch is first trained to provide insights into the data properties.
A target branch is concurrently tasked with more adaptive margin constraints to further enlarge the relative distance between matched and unmatched samples.
arXiv Detail & Related papers (2024-04-28T08:44:28Z) - Unifying Feature and Cost Aggregation with Transformers for Semantic and Visual Correspondence [51.54175067684008]
This paper introduces a Transformer-based integrative feature and cost aggregation network designed for dense matching tasks.
We first show that feature aggregation and cost aggregation exhibit distinct characteristics and reveal the potential for substantial benefits stemming from the judicious use of both aggregation processes.
Our framework is evaluated on standard benchmarks for semantic matching, and also applied to geometric matching, where we show that our approach achieves significant improvements compared to existing methods.
arXiv Detail & Related papers (2024-03-17T07:02:55Z) - Synergistic Anchored Contrastive Pre-training for Few-Shot Relation Extraction [4.7220779071424985]
Few-shot Relation Extraction (FSRE) aims to extract facts from a sparse set of labeled corpora.
Recent studies have shown promising results in FSRE by employing Pre-trained Language Models.
We introduce a novel synergistic anchored contrastive pre-training framework.
arXiv Detail & Related papers (2023-12-19T10:16:24Z) - Hybrid Contrastive Constraints for Multi-Scenario Ad Ranking [38.666592866591344]
Multi-scenario ad ranking aims at leveraging the data from multiple domains or channels for training a unified ranking model.
We propose a Hybrid Contrastive Constrained approach (HC2) for multi-scenario ad ranking.
arXiv Detail & Related papers (2023-02-06T09:15:39Z) - FECANet: Boosting Few-Shot Semantic Segmentation with Feature-Enhanced Context-Aware Network [48.912196729711624]
Few-shot semantic segmentation is the task of learning to locate each pixel of a novel class in a query image with only a few annotated support images.
We propose a Feature-Enhanced Context-Aware Network (FECANet) to suppress the matching noise caused by inter-class local similarity.
In addition, we propose a novel correlation reconstruction module that encodes extra correspondence relations between foreground and background and multi-scale context semantic features.
arXiv Detail & Related papers (2023-01-19T16:31:13Z) - FineDiving: A Fine-grained Dataset for Procedure-aware Action Quality Assessment [93.09267863425492]
We argue that understanding both high-level semantics and internal temporal structures of actions in competitive sports videos is the key to making predictions accurate and interpretable.
We construct a new fine-grained dataset, called FineDiving, developed on diverse diving events with detailed annotations on action procedures.
arXiv Detail & Related papers (2022-04-07T17:59:32Z) - Revisiting Contrastive Learning through the Lens of Neighborhood Component Analysis: an Integrated Framework [70.84906094606072]
We show a new methodology to design integrated contrastive losses that could simultaneously achieve good accuracy and robustness on downstream tasks.
With the integrated framework, we achieve up to 6% improvement on the standard accuracy and 17% improvement on the adversarial accuracy.
arXiv Detail & Related papers (2021-12-08T18:54:11Z) - Learning to Decouple Relations: Few-Shot Relation Classification with Entity-Guided Attention and Confusion-Aware Training [49.9995628166064]
We propose CTEG, a model equipped with two mechanisms to learn to decouple easily-confused relations.
On the one hand, an EGA mechanism is introduced to guide the attention to filter out information causing confusion.
On the other hand, a Confusion-Aware Training (CAT) method is proposed to explicitly learn to distinguish relations.
arXiv Detail & Related papers (2020-10-21T11:07:53Z) - An end-to-end approach for the verification problem: learning the right distance [15.553424028461885]
We augment the metric learning setting by introducing a parametric pseudo-distance, trained jointly with the encoder.
We first show it approximates a likelihood ratio which can be used for hypothesis tests.
We observe training is much simplified under the proposed approach compared to metric learning with actual distances.
arXiv Detail & Related papers (2020-02-21T18:46:06Z)