EMBRE: Entity-aware Masking for Biomedical Relation Extraction
- URL: http://arxiv.org/abs/2401.07877v1
- Date: Mon, 15 Jan 2024 18:12:01 GMT
- Title: EMBRE: Entity-aware Masking for Biomedical Relation Extraction
- Authors: Mingjie Li and Karin Verspoor
- Abstract summary: We introduce the Entity-aware Masking for Biomedical Relation Extraction (EMBRE) method for relation extraction.
Specifically, we integrate entity knowledge into a deep neural network by pretraining the backbone model with an entity masking objective.
- Score: 12.821610050561256
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Information extraction techniques, including named entity recognition (NER)
and relation extraction (RE), are crucial in many domains to support making
sense of vast amounts of unstructured text data by identifying and connecting
relevant information. Such techniques can assist researchers in extracting
valuable insights. In this paper, we introduce the Entity-aware Masking for
Biomedical Relation Extraction (EMBRE) method for biomedical relation
extraction, as applied in the context of the BioRED challenge Task 1, in which
human-annotated entities are provided as input. Specifically, we integrate
entity knowledge into a deep neural network by pretraining the backbone model
with an entity masking objective. We randomly mask named entities for each
instance and let the model identify the masked entity along with its type. In
this way, the model is capable of learning more specific knowledge and more
robust representations. Then, we utilize the pre-trained model as our backbone
to encode language representations and feed these representations into two
multilayer perceptrons (MLPs) to predict the logits for relation and novelty,
respectively. The experimental results demonstrate that our proposed method can
improve the performance of entity pair, relation, and novelty extraction over
our baseline.
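The entity masking objective described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function name, the half-open span format, the masking probability, and the example entity types are all assumptions.

```python
import random

def mask_entities(tokens, entity_spans, mask_token="[MASK]", mask_prob=0.5):
    """Randomly mask annotated entity spans in a tokenized sentence.

    entity_spans: list of (start, end, entity_type) half-open token spans.
    Returns the masked token list plus prediction targets: for each masked
    span, the model must recover both the entity tokens and the entity type.
    """
    masked = list(tokens)
    targets = []
    for start, end, entity_type in entity_spans:
        if random.random() < mask_prob:
            # Record what the model is asked to predict for this span.
            targets.append((start, end, tokens[start:end], entity_type))
            for i in range(start, end):
                masked[i] = mask_token
    return masked, targets

tokens = ["Aspirin", "may", "reduce", "the", "risk", "of", "colorectal", "cancer"]
spans = [(0, 1, "ChemicalEntity"), (6, 8, "DiseaseOrPhenotypicFeature")]
masked, targets = mask_entities(tokens, spans, mask_prob=1.0)
# With mask_prob=1.0 both entity spans are masked:
# masked -> ["[MASK]", "may", "reduce", "the", "risk", "of", "[MASK]", "[MASK]"]
```

Because the recovery target includes the entity type as well as the surface tokens, the pretraining signal pushes the encoder toward entity-aware representations, which the abstract credits for the downstream gains.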
Related papers
- BioMNER: A Dataset for Biomedical Method Entity Recognition [25.403593761614424]
We propose a novel dataset for biomedical method entity recognition.
We employ an automated BioMethod entity recognition and information retrieval system to assist human annotation.
Our empirical findings reveal that the large parameter counts of language models surprisingly inhibit the effective assimilation of entity extraction patterns.
arXiv Detail & Related papers (2024-06-28T16:34:24Z)
- Learning to Extract Structured Entities Using Language Models [52.281701191329]
Recent advances in machine learning have significantly impacted the field of information extraction.
We reformulate the task to be entity-centric, enabling the use of diverse metrics.
We contribute to the field by introducing Structured Entity Extraction and proposing the Approximate Entity Set OverlaP metric.
arXiv Detail & Related papers (2024-02-06T22:15:09Z)
- MLIP: Enhancing Medical Visual Representation with Divergence Encoder and Knowledge-guided Contrastive Learning [48.97640824497327]
We propose a novel framework leveraging domain-specific medical knowledge as guiding signals to integrate language information into the visual domain through image-text contrastive learning.
Our model includes global contrastive learning with our designed divergence encoder, local token-knowledge-patch alignment contrastive learning, and knowledge-guided category-level contrastive learning with expert knowledge.
Notably, MLIP surpasses state-of-the-art methods even with limited annotated data, highlighting the potential of multimodal pre-training in advancing medical representation learning.
arXiv Detail & Related papers (2024-02-03T05:48:50Z)
- Multi-level biomedical NER through multi-granularity embeddings and enhanced labeling [3.8599767910528917]
This paper proposes a hybrid approach that integrates the strengths of multiple models.
BERT provides contextualized word embeddings, a pre-trained multi-channel CNN captures character-level information, and a BiLSTM + CRF performs sequence labelling and models dependencies between the words in the text.
We evaluate our model on the benchmark i2b2/2010 dataset, achieving an F1-score of 90.11.
arXiv Detail & Related papers (2023-12-24T21:45:36Z)
- Self-Supervised Neuron Segmentation with Multi-Agent Reinforcement Learning [53.00683059396803]
Masked image modeling (MIM) has been widely used due to its simplicity and effectiveness in recovering original information from masked images.
We propose a decision-based MIM that utilizes reinforcement learning (RL) to automatically search for optimal image masking ratio and masking strategy.
Our approach has a significant advantage over alternative self-supervised methods on the task of neuron segmentation.
arXiv Detail & Related papers (2023-10-06T10:40:46Z)
- UMLS-KGI-BERT: Data-Centric Knowledge Integration in Transformers for Biomedical Entity Recognition [4.865221751784403]
This work contributes a data-centric paradigm for enriching the language representations of biomedical transformer-encoder LMs by extracting text sequences from the UMLS.
Preliminary results from experiments in the extension of pre-trained LMs as well as training from scratch show that this framework improves downstream performance on multiple biomedical and clinical Named Entity Recognition (NER) tasks.
arXiv Detail & Related papers (2023-07-20T18:08:34Z)
- Nested Named Entity Recognition from Medical Texts: An Adaptive Shared Network Architecture with Attentive CRF [53.55504611255664]
We propose a novel method, referred to as ASAC, to solve the dilemma caused by the nested phenomenon.
The proposed method contains two key modules: the adaptive shared (AS) part and the attentive conditional random field (ACRF) module.
Our model could learn better entity representations by capturing the implicit distinctions and relationships between different categories of entities.
arXiv Detail & Related papers (2022-11-09T09:23:56Z)
- Improving Biomedical Pretrained Language Models with Knowledge [22.61591249168801]
We propose KeBioLM, a biomedical pretrained language model that explicitly leverages knowledge from the UMLS knowledge base.
Specifically, we extract entities from PubMed abstracts and link them to UMLS.
We then train a knowledge-aware language model that first applies a text-only encoding layer to learn entity representations, then a text-entity fusion encoding layer to aggregate them.
arXiv Detail & Related papers (2021-04-21T03:57:26Z)
- G-MIND: An End-to-End Multimodal Imaging-Genetics Framework for Biomarker Identification and Disease Classification [49.53651166356737]
We propose a novel deep neural network architecture to integrate imaging and genetics data, as guided by diagnosis, that provides interpretable biomarkers.
We have evaluated our model on a population study of schizophrenia that includes two functional MRI (fMRI) paradigms and Single Nucleotide Polymorphism (SNP) data.
arXiv Detail & Related papers (2021-01-27T19:28:04Z)
- Panoptic Feature Fusion Net: A Novel Instance Segmentation Paradigm for Biomedical and Biological Images [91.41909587856104]
In this work, we present a Panoptic Feature Fusion Net (PFFNet) that unifies semantic and instance features.
Our proposed PFFNet contains a residual attention feature fusion mechanism to incorporate the instance prediction with the semantic features.
It outperforms several state-of-the-art methods on various biomedical and biological datasets.
arXiv Detail & Related papers (2020-02-15T09:19:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.