Relation Extraction in underexplored biomedical domains: A
diversity-optimised sampling and synthetic data generation approach
- URL: http://arxiv.org/abs/2311.06364v1
- Date: Fri, 10 Nov 2023 19:36:00 GMT
- Title: Relation Extraction in underexplored biomedical domains: A
diversity-optimised sampling and synthetic data generation approach
- Authors: Maxime Delmas, Magdalena Wysocka, André Freitas
- Abstract summary: The sparsity of labelled data is an obstacle to the development of Relation Extraction models.
We create the first curated evaluation dataset and extracted literature items from the LOTUS database to build training sets.
We evaluate the performance of standard fine-tuning as a generative task and few-shot learning with open Large Language Models.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The sparsity of labelled data is an obstacle to the development of Relation
Extraction models and the completion of databases in various biomedical areas.
Although of high interest for drug discovery, the natural-products literature,
which reports the identification of potential bioactive compounds from
organisms, is a concrete example of such an overlooked topic. To mark the start
of this new task, we created the first curated evaluation dataset and extracted
literature items from the LOTUS database to build training sets. To this end,
we developed a new sampler inspired by diversity metrics in ecology, the
Greedy Maximum Entropy sampler, or GME-sampler
(https://github.com/idiap/gme-sampler). Strategically optimizing both the
balance and the diversity of the items selected for the evaluation set is
important given the resource-intensive nature of manual curation. After
quantifying the noise in the training set, in the form of discrepancies between
the input abstract texts and the expected output labels, we explored different
strategies accordingly. Framing the task as end-to-end Relation Extraction, we
evaluated the performance of standard fine-tuning as a generative task and of
few-shot learning with open Large Language Models (LLaMA 7B-65B). Beyond their
evaluation in few-shot settings, we also explored the potential of open
Large Language Models (Vicuna-13B) as synthetic data generators and proposed a
new workflow for this purpose. All evaluated models exhibited substantial
improvements when fine-tuned on synthetic abstracts rather than on the original
noisy data. We provide our best-performing BioGPT-Large model
(F1-score = 59.0) for end-to-end RE of natural-products relationships, along
with all the generated synthetic data and the evaluation dataset. See more
details at https://github.com/idiap/abroad-re.
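The abstract describes the GME-sampler only at the level of its guiding idea: greedily add the item whose inclusion most increases the entropy, and hence the balance and diversity, of the selected set. The following minimal Python sketch illustrates that idea under the assumption that each literature item carries a single categorical label (e.g., an organism or a chemical class); it is not the actual implementation, which is available at https://github.com/idiap/gme-sampler.

```python
import math
from collections import Counter

def shannon_entropy(counts):
    """Shannon entropy (in nats) of a label-count distribution."""
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return -sum((c / total) * math.log(c / total)
                for c in counts.values() if c > 0)

def gme_sample(items, labels, k):
    """Greedily select k items: at each step, pick the item whose
    addition maximizes the entropy of the selected label counts."""
    selected, counts = [], Counter()
    remaining = list(range(len(items)))
    for _ in range(min(k, len(items))):
        def entropy_if_added(i):
            counts[labels[i]] += 1      # tentatively add item i
            h = shannon_entropy(counts)
            counts[labels[i]] -= 1      # roll the tentative add back
            return h
        best = max(remaining, key=entropy_if_added)
        counts[labels[best]] += 1
        selected.append(items[best])
        remaining.remove(best)
    return selected

# Toy usage: the greedy choice spreads the selection across labels.
items = ["doc1", "doc2", "doc3", "doc4"]
labels = ["fungus", "fungus", "plant", "bacterium"]
print(gme_sample(items, labels, 3))     # -> ['doc1', 'doc3', 'doc4']
```

In this toy run the sampler covers all three labels before repeating any of them, which is the balance/diversity behaviour the abstract attributes to the GME-sampler; the real tool operates on LOTUS organism-compound records rather than single labels.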
Related papers
- NeuroSym-BioCAT: Leveraging Neuro-Symbolic Methods for Biomedical Scholarly Document Categorization and Question Answering [0.14999444543328289]
We introduce a novel approach that integrates an optimized topic modelling framework, OVB-LDA, with the BI-POP CMA-ES optimization technique for enhanced scholarly document abstract categorization.
We employ the distilled MiniLM model, fine-tuned on domain-specific data, for high-precision answer extraction.
arXiv Detail & Related papers (2024-10-29T14:45:12Z)
- Generating Realistic Tabular Data with Large Language Models [49.03536886067729]
Large language models (LLMs) have been used for diverse tasks but do not capture the correct correlation between the features and the target variable.
We propose an LLM-based method with three important improvements to correctly capture the ground-truth feature-class correlation in the real data.
Our experiments show that our method significantly outperforms 10 SOTA baselines on 20 datasets in downstream tasks.
arXiv Detail & Related papers (2024-10-29T04:14:32Z)
- Dataset Distillation for Histopathology Image Classification [46.04496989951066]
We introduce a novel dataset distillation algorithm tailored for histopathology image datasets (Histo-DD).
We conduct a comprehensive evaluation of the effectiveness of the proposed algorithm and the generated histopathology samples in both patch-level and slide-level classification tasks.
arXiv Detail & Related papers (2024-08-19T05:53:38Z)
- ACLSum: A New Dataset for Aspect-based Summarization of Scientific Publications [10.529898520273063]
ACLSum is a novel summarization dataset carefully crafted and evaluated by domain experts.
In contrast to previous datasets, ACLSum facilitates multi-aspect summarization of scientific papers.
arXiv Detail & Related papers (2024-03-08T13:32:01Z)
- Learning to Extract Structured Entities Using Language Models [52.281701191329]
Recent advances in machine learning have significantly impacted the field of information extraction.
We reformulate the task to be entity-centric, enabling the use of diverse metrics.
We contribute to the field by introducing Structured Entity Extraction and proposing the Approximate Entity Set OverlaP metric.
arXiv Detail & Related papers (2024-02-06T22:15:09Z)
- BioREx: Improving Biomedical Relation Extraction by Leveraging Heterogeneous Datasets [7.7587371896752595]
Biomedical relation extraction (RE) is a central task in biomedical natural language processing (NLP) research.
We present a novel framework for systematically addressing the data heterogeneity of individual datasets and combining them into a large dataset.
Our evaluation shows that BioREx achieves significantly higher performance than the benchmark system trained on individual datasets.
arXiv Detail & Related papers (2023-06-19T22:48:18Z)
- Synthetic data, real errors: how (not) to publish and use synthetic data [86.65594304109567]
We show how the generative process affects the downstream ML task.
We introduce Deep Generative Ensemble (DGE) to approximate the posterior distribution over the generative process model parameters.
arXiv Detail & Related papers (2023-05-16T07:30:29Z)
- Revisiting the Evaluation of Image Synthesis with GANs [55.72247435112475]
This study presents an empirical investigation into the evaluation of synthesis performance, with generative adversarial networks (GANs) as a representative of generative models.
In particular, we make in-depth analyses of various factors, including how to represent a data point in the representation space, how to calculate a fair distance using selected samples, and how many instances to use from each set.
arXiv Detail & Related papers (2023-04-04T17:54:32Z)
- Unsupervised Opinion Summarization with Content Planning [58.5308638148329]
We show that explicitly incorporating content planning in a summarization model yields output of higher quality.
We also create synthetic datasets that are more natural, resembling real-world document-summary pairs.
Our approach outperforms competitive models in generating informative, coherent, and fluent summaries.
arXiv Detail & Related papers (2020-12-14T18:41:58Z)
- Towards an Automatic Analysis of CHO-K1 Suspension Growth in Microfluidic Single-cell Cultivation [63.94623495501023]
We propose a novel Machine Learning architecture, which allows us to infuse a deep neural network with human-powered abstraction on the level of data.
Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
arXiv Detail & Related papers (2020-10-20T08:36:51Z)