Entangled Relations: Leveraging NLI and Meta-analysis to Enhance Biomedical Relation Extraction
- URL: http://arxiv.org/abs/2406.00226v1
- Date: Fri, 31 May 2024 23:05:04 GMT
- Title: Entangled Relations: Leveraging NLI and Meta-analysis to Enhance Biomedical Relation Extraction
- Authors: William Hogan, Jingbo Shang
- Abstract summary: We introduce MetaEntail-RE, a novel adaptation method that harnesses NLI principles to enhance relation extraction.
Our approach follows past works by verbalizing relation classes into class-indicative hypotheses.
Our experimental results underscore the versatility of MetaEntail-RE, demonstrating performance gains across both biomedical and general domains.
- Score: 35.320291731292286
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent research efforts have explored the potential of leveraging natural language inference (NLI) techniques to enhance relation extraction (RE). In this vein, we introduce MetaEntail-RE, a novel adaptation method that harnesses NLI principles to enhance RE performance. Our approach follows past works by verbalizing relation classes into class-indicative hypotheses, aligning a traditionally multi-class classification task to one of textual entailment. We introduce three key enhancements: (1) Instead of labeling non-entailed premise-hypothesis pairs with the uninformative "neutral" entailment label, we introduce meta-class analysis, which provides additional context by analyzing overarching meta relationships between classes when assigning entailment labels; (2) Feasible hypothesis filtering, which removes unlikely hypotheses from consideration based on pairs of entity types; and (3) Group-based prediction selection, which further improves performance by selecting highly confident predictions. MetaEntail-RE is conceptually simple and empirically powerful, yielding significant improvements over conventional relation extraction techniques and other NLI formulations. Our experimental results underscore the versatility of MetaEntail-RE, demonstrating performance gains across both biomedical and general domains.
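The abstract describes the method only at a high level, so the snippet below is a minimal sketch of the inference-time ideas it names: verbalizing relation classes into hypotheses, filtering hypotheses by entity-type feasibility, and keeping only highly confident predictions. The templates, the feasibility map, and the `nli_entailment_prob` scorer are illustrative assumptions rather than the authors' implementation, and the training-time meta-class analysis is omitted because the abstract does not specify how those labels are assigned.

```python
from typing import Callable, Dict, List, Tuple

# Hypothetical relation verbalizations (relation class -> hypothesis template).
# The actual templates are defined in the paper; these are placeholders.
TEMPLATES: Dict[str, str] = {
    "gene-disease:association": "{head} is associated with {tail}.",
    "chemical-gene:inhibition": "{head} inhibits {tail}.",
    "no_relation": "There is no relation between {head} and {tail}.",
}

# Hypothetical feasibility map for hypothesis filtering: which relation classes
# are plausible for a given (head entity type, tail entity type) pair.
FEASIBLE: Dict[Tuple[str, str], List[str]] = {
    ("gene", "disease"): ["gene-disease:association", "no_relation"],
    ("chemical", "gene"): ["chemical-gene:inhibition", "no_relation"],
}


def predict_relation(
    premise: str,
    head: str,
    head_type: str,
    tail: str,
    tail_type: str,
    nli_entailment_prob: Callable[[str, str], float],
) -> Tuple[str, float]:
    """Score only the feasible hypotheses and return the best-entailed class."""
    candidates = FEASIBLE.get((head_type, tail_type), list(TEMPLATES))
    scored = [
        (rel, nli_entailment_prob(premise, TEMPLATES[rel].format(head=head, tail=tail)))
        for rel in candidates
    ]
    return max(scored, key=lambda pair: pair[1])


def select_confident_predictions(
    predictions: List[Tuple[str, str, float]], threshold: float = 0.9
) -> List[Tuple[str, str]]:
    """Simplified stand-in for group-based prediction selection: keep only
    predictions whose entailment probability clears a confidence threshold."""
    return [(pid, rel) for pid, rel, prob in predictions if prob >= threshold]
```

In use, `nli_entailment_prob` would wrap the entailment probability of an NLI model (for example, a cross-encoder fine-tuned on an NLI corpus) applied to the sentence and the verbalized hypothesis.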
Related papers
- ASTE Transformer Modelling Dependencies in Aspect-Sentiment Triplet Extraction [2.07180164747172]
Aspect-Sentiment Triplet Extraction (ASTE) is a recently proposed task that consists of extracting (aspect phrase, opinion phrase, sentiment polarity) triples from a given sentence.
Recent state-of-the-art methods approach this task by first extracting all possible spans from a given sentence.
arXiv Detail & Related papers (2024-09-23T16:49:47Z)
- Graph Stochastic Neural Process for Inductive Few-shot Knowledge Graph Completion [63.68647582680998]
We focus on a task called inductive few-shot knowledge graph completion (I-FKGC).
Inspired by the idea of inductive reasoning, we cast I-FKGC as an inductive reasoning problem.
We present a neural process-based hypothesis extractor that models the joint distribution of hypotheses, from which we can sample a hypothesis for prediction.
In the second module, based on the hypothesis, we propose a graph attention-based predictor to test if the triple in the query set aligns with the extracted hypothesis.
arXiv Detail & Related papers (2024-08-03T13:37:40Z)
- Improving Unsupervised Relation Extraction by Augmenting Diverse Sentence Pairs [15.87963432758696]
Unsupervised relation extraction (URE) aims to extract relations between named entities from raw text.
We propose AugURE with both within-sentence pairs augmentation and augmentation through cross-sentence pairs extraction.
Experiments on the NYT-FB and TACRED datasets demonstrate that the proposed relation representation learning, paired with simple K-Means clustering, achieves state-of-the-art performance (a sketch of the clustering step follows this entry).
arXiv Detail & Related papers (2023-12-01T12:59:32Z)
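The AugURE summary above mentions only that the learned relation representations are clustered with K-Means, so the following is a minimal, generic sketch of that clustering step under the assumption that per-pair relation embeddings are already available; the representation learning itself is the paper's contribution and is not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

# Assume `relation_embeddings` holds one vector per entity-pair mention,
# produced by whatever relation representation learning method is used.
rng = np.random.default_rng(0)
relation_embeddings = rng.normal(size=(1000, 768))  # placeholder data

# Cluster the representations into k pseudo-relation classes
# (k is the number of relation types assumed for the benchmark).
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0)
cluster_ids = kmeans.fit_predict(relation_embeddings)

# Each cluster id acts as an unsupervised relation label; it would then be
# compared against gold labels with clustering metrics such as B-cubed or V-measure.
print(cluster_ids[:10])
```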
- Synergies between Disentanglement and Sparsity: Generalization and Identifiability in Multi-Task Learning [79.83792914684985]
We prove a new identifiability result that provides conditions under which maximally sparse base-predictors yield disentangled representations.
Motivated by this theoretical result, we propose a practical approach to learn disentangled representations based on a sparsity-promoting bi-level optimization problem.
arXiv Detail & Related papers (2022-11-26T21:02:09Z)
- No Pairs Left Behind: Improving Metric Learning with Regularized Triplet Objective [19.32706951298244]
We propose a novel formulation of the triplet objective function that improves metric learning without additional sample mining or overhead costs.
We show that our method (called No Pairs Left Behind [NPLB]) improves upon the traditional and current state-of-the-art triplet objective formulations.
arXiv Detail & Related papers (2022-10-18T00:56:01Z)
- Domain Adaptation with Adversarial Training on Penultimate Activations [82.9977759320565]
Enhancing model prediction confidence on unlabeled target data is an important objective in Unsupervised Domain Adaptation (UDA).
We show that adversarial training on penultimate activations is more efficient and better correlated with the objective of boosting prediction confidence than adversarial training on input images or intermediate features.
arXiv Detail & Related papers (2022-08-26T19:50:46Z)
- SAIS: Supervising and Augmenting Intermediate Steps for Document-Level Relation Extraction [51.27558374091491]
We propose to explicitly teach the model to capture relevant contexts and entity types by supervising and augmenting intermediate steps (SAIS) for relation extraction.
Based on a broad spectrum of carefully designed tasks, our proposed SAIS method not only extracts relations of better quality due to more effective supervision, but also retrieves the corresponding supporting evidence more accurately.
arXiv Detail & Related papers (2021-09-24T17:37:35Z)
- Self-Supervised Detection of Contextual Synonyms in a Multi-Class Setting: Phenotype Annotation Use Case [11.912581294872767]
Contextualised word embeddings are a powerful tool for detecting contextual synonyms.
We propose a self-supervised pre-training approach that detects contextual synonyms of concepts by training on data created by shallow matching.
arXiv Detail & Related papers (2021-09-04T21:35:01Z)
- Introducing Syntactic Structures into Target Opinion Word Extraction with Deep Learning [89.64620296557177]
We propose to incorporate the syntactic structures of the sentences into the deep learning models for targeted opinion word extraction.
We also introduce a novel regularization technique to improve the performance of the deep learning models.
The proposed model is extensively analyzed and achieves the state-of-the-art performance on four benchmark datasets.
arXiv Detail & Related papers (2020-10-26T07:13:17Z)