The SourceData-NLP dataset: integrating curation into scientific
publishing for training large language models
- URL: http://arxiv.org/abs/2310.20440v1
- Date: Tue, 31 Oct 2023 13:22:38 GMT
- Title: The SourceData-NLP dataset: integrating curation into scientific
publishing for training large language models
- Authors: Jorge Abreu-Vicente, Hannah Sonntag, Thomas Eidens, Thomas Lemberger
- Abstract summary: We present the SourceData-NLP dataset produced through the routine curation of papers during the publication process.
This dataset contains more than 620,000 annotated biomedical entities, curated from 18,689 figures in 3,223 papers in molecular and cell biology.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Introduction: The scientific publishing landscape is expanding rapidly,
creating challenges for researchers to stay up-to-date with the evolution of
the literature. Natural Language Processing (NLP) has emerged as a potent
approach to automating knowledge extraction from this vast body of
publications and preprints. Tasks such as Named-Entity Recognition (NER) and
Named-Entity Linking (NEL), in conjunction with context-dependent semantic
interpretation, offer promising and complementary approaches to extracting
structured information and revealing key concepts.
Results: We present the SourceData-NLP dataset produced through the routine
curation of papers during the publication process. A unique feature of this
dataset is its emphasis on the annotation of bioentities in figure legends. We
annotate eight classes of biomedical entities (small molecules, gene products,
subcellular components, cell lines, cell types, tissues, organisms, and
diseases), their role in the experimental design, and the nature of the
experimental method as an additional class. SourceData-NLP contains more than
620,000 annotated biomedical entities, curated from 18,689 figures in 3,223
papers in molecular and cell biology. We illustrate the dataset's usefulness by
assessing BioLinkBERT and PubmedBERT, two transformer-based models fine-tuned
on the SourceData-NLP dataset for NER. We also introduce a novel
context-dependent semantic task that infers whether an entity is the target of
a controlled intervention or the object of measurement.
Conclusions: SourceData-NLP's scale highlights the value of integrating
curation into publishing. Models trained with SourceData-NLP will furthermore
enable the development of tools able to extract causal hypotheses from the
literature and assemble them into knowledge graphs.
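To make the evaluation setup concrete, the sketch below shows a standard token-classification fine-tuning recipe of the kind the abstract describes for BioLinkBERT and PubmedBERT, written against the Hugging Face transformers and datasets APIs. It is a minimal sketch, not the authors' pipeline: the hub identifiers, the "NER" configuration name, and the "words"/"labels" column schema are illustrative assumptions; consult the authors' data release for the exact names.

```python
# Minimal NER fine-tuning sketch for a SourceData-NLP-style corpus.
# Dataset/model identifiers and the column schema are assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

DATASET_ID = "EMBO/SourceData"  # hypothetical hub identifier
ENCODER_ID = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext"

dataset = load_dataset(DATASET_ID, "NER")  # assumed config name
label_names = dataset["train"].features["labels"].feature.names
tokenizer = AutoTokenizer.from_pretrained(ENCODER_ID)

def tokenize_and_align(batch):
    # Figure legends arrive pre-split into words with one IOB tag per word;
    # re-tokenizing into subwords means copying each word's tag onto its
    # first subword and masking the rest with -100 (ignored by the loss).
    enc = tokenizer(batch["words"], truncation=True, is_split_into_words=True)
    aligned = []
    for i, tags in enumerate(batch["labels"]):
        prev, ids = None, []
        for wid in enc.word_ids(batch_index=i):
            if wid is None or wid == prev:
                ids.append(-100)       # special token or subword continuation
            else:
                ids.append(tags[wid])  # first subword carries the word's tag
            prev = wid
        aligned.append(ids)
    enc["labels"] = aligned
    return enc

tokenized = dataset.map(tokenize_and_align, batched=True,
                        remove_columns=dataset["train"].column_names)

model = AutoModelForTokenClassification.from_pretrained(
    ENCODER_ID,
    num_labels=len(label_names),
    id2label=dict(enumerate(label_names)),
)

trainer = Trainer(
    model=model,
    args=TrainingArguments("sourcedata-ner", learning_rate=2e-5,
                           per_device_train_batch_size=16, num_train_epochs=3),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()
```

Swapping ENCODER_ID for a BioLinkBERT checkpoint gives the second comparison from the abstract; the same recipe, with role tags in place of entity tags, would apply to the controlled-intervention vs. object-of-measurement semantic task.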
Related papers
- Open-PMC-18M: A High-Fidelity Large Scale Medical Dataset for Multimodal Representation Learning [0.03214166687856062]
We introduce a scalable subfigure extraction pipeline based on transformer-based object detection. We release OPEN-PMC-18M, a large-scale, high-quality biomedical vision-language dataset. We show improved performance across retrieval, zero-shot classification, and robustness benchmarks.
arXiv Detail & Related papers (2025-06-03T10:53:19Z)
- BIOMEDICA: An Open Biomedical Image-Caption Archive, Dataset, and Vision-Language Models Derived from Scientific Literature [73.39593644054865]
BIOMEDICA is a scalable, open-source framework to extract, annotate, and serialize the entirety of the PubMed Central Open Access subset into an easy-to-use, publicly accessible dataset.
Our framework produces a comprehensive archive with over 24 million unique image-text pairs from over 6 million articles.
BMCA-CLIP is a suite of CLIP-style models continuously pretrained on the BIOMEDICA dataset via streaming, eliminating the need to download 27 TB of data locally.
arXiv Detail & Related papers (2025-01-13T09:58:03Z)
- MRGen: Segmentation Data Engine for Underrepresented MRI Modalities [59.61465292965639]
Training medical image segmentation models for rare yet clinically important imaging modalities is challenging due to the scarcity of annotated data. This paper investigates leveraging generative models to synthesize data for training segmentation models for underrepresented modalities. We present MRGen, a data engine for controllable medical image synthesis conditioned on text prompts and segmentation masks.
arXiv Detail & Related papers (2024-12-04T16:34:22Z)
- SciER: An Entity and Relation Extraction Dataset for Datasets, Methods, and Tasks in Scientific Documents [49.54155332262579]
We release a new entity and relation extraction dataset for entities related to datasets, methods, and tasks in scientific articles.
Our dataset contains 106 manually annotated full-text scientific publications with over 24k entities and 12k relations.
arXiv Detail & Related papers (2024-10-28T15:56:49Z)
- MMSci: A Dataset for Graduate-Level Multi-Discipline Multimodal Scientific Understanding [59.41495657570397]
We present a comprehensive dataset compiled from Nature Communications articles covering 72 scientific fields.
We evaluated 19 proprietary and open-source models on two benchmark tasks, figure captioning and multiple-choice question answering, and conducted human expert annotation.
Fine-tuning Qwen2-VL-7B with our task-specific data achieved better performance than GPT-4o and even human experts in multiple-choice evaluations.
arXiv Detail & Related papers (2024-07-06T00:40:53Z)
- SciRIFF: A Resource to Enhance Language Model Instruction-Following over Scientific Literature [80.49349719239584]
We present SciRIFF (Scientific Resource for Instruction-Following and Finetuning), a dataset of 137K instruction-following demonstrations for 54 tasks.
SciRIFF is the first dataset focused on extracting and synthesizing information from research literature across a wide range of scientific fields.
arXiv Detail & Related papers (2024-06-10T21:22:08Z)
- BiomedParse: a biomedical foundation model for image parsing of everything everywhere all at once [58.41069132627823]
Holistic image analysis comprises subtasks such as segmentation, detection, and recognition of relevant objects.
Here, we propose BiomedParse, a biomedical foundation model for image parsing that can jointly conduct segmentation, detection, and recognition for 82 object types across 9 imaging modalities.
Through joint learning, we can improve accuracy for individual tasks and enable novel applications such as segmenting all relevant objects in a noisy image through a text prompt.
arXiv Detail & Related papers (2024-05-21T17:54:06Z)
- Medical Vision-Language Pre-Training for Brain Abnormalities [96.1408455065347]
We show how to automatically collect medical image-text aligned data for pretraining from public resources such as PubMed.
In particular, we present a pipeline that streamlines the pre-training process by initially collecting a large brain image-text dataset.
We also investigate the unique challenge of mapping subfigures to subcaptions in the medical domain.
arXiv Detail & Related papers (2024-04-27T05:03:42Z)
- Leveraging Biomolecule and Natural Language through Multi-Modal Learning: A Survey [75.47055414002571]
The integration of biomolecular modeling with natural language (BL) has emerged as a promising interdisciplinary area at the intersection of artificial intelligence, chemistry, and biology.
We provide an analysis of recent advancements achieved through cross-modeling of biomolecules and natural language.
arXiv Detail & Related papers (2024-03-03T14:59:47Z)
- Exploring the Effectiveness of Instruction Tuning in Biomedical Language Processing [19.41164870575055]
This study investigates the potential of instruction tuning for biomedical language processing.
We present a comprehensive, instruction-based model trained on a dataset that consists of approximately 200,000 instruction-focused samples.
arXiv Detail & Related papers (2023-12-31T20:02:10Z)
- Improving Biomedical Abstractive Summarisation with Knowledge Aggregation from Citation Papers [24.481854035628434]
Existing language models struggle to generate technical summaries that are on par with those produced by biomedical experts.
We propose a novel attention-based citation aggregation model that integrates domain-specific knowledge from citation papers.
Our model outperforms state-of-the-art approaches and achieves substantial improvements in abstractive biomedical text summarisation.
arXiv Detail & Related papers (2023-10-24T09:56:46Z)
- UMLS-KGI-BERT: Data-Centric Knowledge Integration in Transformers for Biomedical Entity Recognition [4.865221751784403]
This work contributes a data-centric paradigm for enriching the language representations of biomedical transformer-encoder LMs by extracting text sequences from the UMLS.
Preliminary results from experiments in the extension of pre-trained LMs as well as training from scratch show that this framework improves downstream performance on multiple biomedical and clinical Named Entity Recognition (NER) tasks.
arXiv Detail & Related papers (2023-07-20T18:08:34Z)
- Generating More Pertinent Captions by Leveraging Semantics and Style on Multi-Source Datasets [56.018551958004814]
This paper addresses the task of generating fluent descriptions by training on a non-uniform combination of data sources.
Large-scale datasets with noisy image-text pairs provide a sub-optimal source of supervision.
We propose to leverage and separate semantics and descriptive style through the incorporation of a style token and keywords extracted through a retrieval component.
arXiv Detail & Related papers (2021-11-24T19:00:05Z)
- PharmKE: Knowledge Extraction Platform for Pharmaceutical Texts using Transfer Learning [0.0]
PharmKE is a text analysis platform that applies deep learning through several stages for thorough semantic analysis of pharmaceutical articles.
The methodology is used to create accurately labeled training and test datasets, which are then used to train models for custom entity labeling tasks.
The obtained results are compared to the fine-tuned BERT and BioBERT models trained on the same dataset.
arXiv Detail & Related papers (2021-02-25T19:36:35Z)
- Data Mining in Clinical Trial Text: Transformers for Classification and Question Answering Tasks [2.127049691404299]
This research applies advances in natural language processing to evidence synthesis based on medical texts.
The main focus is on information characterized via the Population, Intervention, Comparator, and Outcome (PICO) framework.
Recent neural network architectures based on transformers show capacities for transfer learning and increased performance on downstream natural language processing tasks.
arXiv Detail & Related papers (2020-01-30T11:45:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.