Joint Learning-based Causal Relation Extraction from Biomedical
Literature
- URL: http://arxiv.org/abs/2208.01316v1
- Date: Tue, 2 Aug 2022 08:54:57 GMT
- Authors: Dongling Li, Pengchao Wu, Yuehu Dong, Jinghang Gu, Longhua Qian,
Guodong Zhou
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Causal relation extraction of biomedical entities is one of the most complex
tasks in biomedical text mining, which involves two kinds of information:
entity relations and entity functions. One feasible approach is to take
relation extraction and function detection as two independent sub-tasks.
However, this separate learning method ignores the intrinsic correlation
between them and leads to unsatisfactory performance. In this paper, we propose
a joint learning model, which combines entity relation extraction and entity
function detection to exploit their commonality and capture their
inter-relationship, so as to improve the performance of biomedical causal
relation extraction. Meanwhile, during the model training stage, different
function types in the loss function are assigned different weights.
Specifically, the penalty coefficient for negative function instances increases
to effectively improve the precision of function detection. Experimental
results on the BioCreative-V Track 4 corpus show that our joint learning model
outperforms the separate models in BEL statement extraction, achieving F1
scores of 58.4% and 37.3% on the test set in the Stage 2 and Stage 1 evaluations,
respectively. This demonstrates that our joint learning system achieves
state-of-the-art performance in Stage 2 compared with other systems.
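The class-weighted loss described in the abstract can be sketched as follows. This is a hypothetical illustration only: the index of the negative "no function" class, the default penalty coefficient of 2.0, and the function names are assumptions, not values taken from the paper.

```python
import math

# Assumed index of the negative ("no function") class -- illustrative only.
NEG_CLASS = 0

def weighted_cross_entropy(probs, label, neg_weight=2.0):
    """Cross-entropy with an upweighted penalty for negative function instances.

    probs:      predicted probability distribution over function types
    label:      gold class index
    neg_weight: penalty coefficient applied when the gold label is the
                negative class (hypothetical value; the paper does not
                report the actual coefficient here)
    """
    weight = neg_weight if label == NEG_CLASS else 1.0
    # Standard negative log-likelihood, scaled by the per-class weight.
    return -weight * math.log(probs[label])
```

Raising `neg_weight` makes misclassified negative instances cost more during training, which pushes the model to assert function types more conservatively and can therefore improve the precision of function detection, as the abstract claims.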
Related papers
- Improving Entity Recognition Using Ensembles of Deep Learning and Fine-tuned Large Language Models: A Case Study on Adverse Event Extraction from Multiple Sources [13.750202656564907]
Adverse event (AE) extraction is crucial for monitoring and analyzing the safety profiles of immunizations.
This study aims to evaluate the effectiveness of large language models (LLMs) and traditional deep learning models in AE extraction.
arXiv Detail & Related papers (2024-06-26T03:56:21Z)
- Cognitive Evolutionary Learning to Select Feature Interactions for Recommender Systems [59.117526206317116]
We show that CELL can adaptively evolve into different models for different tasks and data.
Experiments on four real-world datasets demonstrate that CELL significantly outperforms state-of-the-art baselines.
arXiv Detail & Related papers (2024-05-29T02:35:23Z)
- Extracting Protein-Protein Interactions (PPIs) from Biomedical Literature using Attention-based Relational Context Information [5.456047952635665]
This work presents a unified, multi-source PPI corpora with vetted interaction definitions augmented by binary interaction type labels.
A Transformer-based deep learning method exploits entities' relational context information for relation representation to improve relation classification performance.
The model's performance is evaluated on four widely studied biomedical relation extraction datasets.
arXiv Detail & Related papers (2024-03-08T01:43:21Z)
- Stubborn Lexical Bias in Data and Models [50.79738900885665]
We use a new statistical method to examine whether spurious patterns in data appear in models trained on the data.
We apply an optimization approach to *reweight* the training data, reducing thousands of spurious correlations.
Surprisingly, though this method can successfully reduce lexical biases in the training data, we still find strong evidence of corresponding bias in the trained models.
arXiv Detail & Related papers (2023-06-03T20:12:27Z)
- On Modality Bias Recognition and Reduction [70.69194431713825]
We study the modality bias problem in the context of multi-modal classification.
We propose a plug-and-play loss function method, whereby the feature space for each label is adaptively learned.
Our method yields remarkable performance improvements compared with the baselines.
arXiv Detail & Related papers (2022-02-25T13:47:09Z)
- An Empirical Study on Relation Extraction in the Biomedical Domain [0.0]
We consider both sentence-level and document-level relation extraction, and run a few state-of-the-art methods on several benchmark datasets.
Our results show that (1) current document-level relation extraction methods have strong generalization ability; (2) existing methods require a large amount of labeled data for model fine-tuning in biomedicine.
arXiv Detail & Related papers (2021-12-11T03:36:38Z)
- SAIS: Supervising and Augmenting Intermediate Steps for Document-Level Relation Extraction [51.27558374091491]
We propose to explicitly teach the model to capture relevant contexts and entity types by supervising and augmenting intermediate steps (SAIS) for relation extraction.
Based on a broad spectrum of carefully designed tasks, our proposed SAIS method not only extracts relations of better quality due to more effective supervision, but also retrieves the corresponding supporting evidence more accurately.
arXiv Detail & Related papers (2021-09-24T17:37:35Z)
- A Trigger-Sense Memory Flow Framework for Joint Entity and Relation Extraction [5.059120569845976]
We present a Trigger-Sense Memory Flow Framework (TriMF) for joint entity and relation extraction.
We build a memory module to remember category representations learned in entity recognition and relation extraction tasks.
We also design a multi-level memory flow attention mechanism to enhance the bi-directional interaction between entity recognition and relation extraction.
arXiv Detail & Related papers (2021-01-25T16:24:04Z)
- A Co-Interactive Transformer for Joint Slot Filling and Intent Detection [61.109486326954205]
Intent detection and slot filling are two main tasks for building a spoken language understanding (SLU) system.
Previous studies either model the two tasks separately or only consider the single information flow from intent to slot.
We propose a Co-Interactive Transformer to consider the cross-impact between the two tasks simultaneously.
arXiv Detail & Related papers (2020-10-08T10:16:52Z)
- Estimating Structural Target Functions using Machine Learning and Influence Functions [103.47897241856603]
We propose a new framework for statistical machine learning of target functions arising as identifiable functionals from statistical models.
This framework is problem- and model-agnostic and can be used to estimate a broad variety of target parameters of interest in applied statistics.
We put particular focus on so-called coarsening at random/doubly robust problems with partially unobserved information.
arXiv Detail & Related papers (2020-08-14T16:48:29Z)
- A logic-based relational learning approach to relation extraction: The OntoILPER system [0.9176056742068812]
We present OntoILPER, a logic-based relational learning approach to Relation Extraction.
OntoILPER takes advantage of a rich relational representation of examples, which can alleviate the drawbacks.
The proposed relational approach seems to be more suitable for Relation Extraction than statistical ones.
arXiv Detail & Related papers (2020-01-13T12:47:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.