Sectioning of Biomedical Abstracts: A Sequence of Sequence
Classification Task
- URL: http://arxiv.org/abs/2201.07112v1
- Date: Tue, 18 Jan 2022 16:41:13 GMT
- Title: Sectioning of Biomedical Abstracts: A Sequence of Sequence
Classification Task
- Authors: Mehmet Efruz Karabulut, K. Vijay-Shanker
- Abstract summary: We study a state-of-the-art deep learning model, which we call the SSN-4 model here.
We explore how well this model generalizes to a new data set beyond the Randomized Controlled Trials (RCT) dataset.
Results show that the SSN-4 model does not appear to generalize well beyond the RCT dataset.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rapid growth of the biomedical literature has led to many advances in the
biomedical text mining field. Among this vast amount of information, biomedical
article abstracts are the most easily accessible sources. However, the number of
structured abstracts, whose sentences are labeled with one of the rhetorical
categories Background, Objective, Method, Result, and Conclusion, is still
limited. Exploration of valuable information in biomedical abstracts can be
expedited by improvements in the sequential sentence classification task. Deep
learning based models have great potential to achieve significant results on
this task; however, they can often be overly complex and overfit to specific
data. In this project, we study a state-of-the-art deep learning model, which we
call the SSN-4 model here. We investigate different components of the SSN-4
model to study the trade-off between performance and complexity. We explore how
well this model generalizes to a new data set beyond the Randomized Controlled
Trials (RCT) dataset. We address the question of whether word embeddings can be
adjusted to the task to improve performance. Furthermore, we develop a second
model that addresses the confusion pairs of the first model. Results show that
the SSN-4 model does not appear to generalize well beyond the RCT dataset.
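To make the task concrete, the sketch below shows one common way to set up sequential sentence classification over an abstract: encode each sentence, run a second encoder over the resulting sequence of sentence vectors so that each label can depend on neighbouring sentences, and classify every sentence into one of the five rhetorical categories. This is a minimal illustrative sketch in PyTorch, not the SSN-4 architecture; the encoder choices, dimensions, and the `freeze_embeddings` switch (mirroring the question of whether word embeddings should be adjusted to the task) are assumptions rather than details taken from the paper.

```python
# Minimal sketch of sequential sentence classification for abstract sectioning.
# NOT the SSN-4 model: encoders, dimensions, and pooling are illustrative only.
import torch
import torch.nn as nn

LABELS = ["BACKGROUND", "OBJECTIVE", "METHOD", "RESULT", "CONCLUSION"]

class AbstractSectioner(nn.Module):
    def __init__(self, vocab_size, emb_dim=200, sent_dim=128, ctx_dim=128,
                 freeze_embeddings=False):
        super().__init__()
        # Word embeddings; freezing them corresponds to NOT adjusting the
        # embeddings to the task (one of the questions the paper studies).
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.embedding.weight.requires_grad = not freeze_embeddings
        # Sentence encoder: BiLSTM over the words of each sentence.
        self.word_lstm = nn.LSTM(emb_dim, sent_dim, batch_first=True,
                                 bidirectional=True)
        # Context encoder: BiLSTM over the sequence of sentence vectors,
        # so each sentence's label can depend on its neighbours.
        self.sent_lstm = nn.LSTM(2 * sent_dim, ctx_dim, batch_first=True,
                                 bidirectional=True)
        self.classifier = nn.Linear(2 * ctx_dim, len(LABELS))

    def forward(self, abstract):
        # abstract: (num_sentences, max_words) tensor of word ids for ONE abstract
        emb = self.embedding(abstract)                   # (S, W, emb_dim)
        word_out, _ = self.word_lstm(emb)                # (S, W, 2*sent_dim)
        sent_vecs = word_out.max(dim=1).values           # pool words -> (S, 2*sent_dim)
        ctx, _ = self.sent_lstm(sent_vecs.unsqueeze(0))  # (1, S, 2*ctx_dim)
        return self.classifier(ctx.squeeze(0))           # (S, num_labels) logits

model = AbstractSectioner(vocab_size=30000)
fake_abstract = torch.randint(1, 30000, (7, 40))         # 7 sentences, 40 word ids each
logits = model(fake_abstract)
predicted = [LABELS[i] for i in logits.argmax(dim=-1).tolist()]
```

Training such a model would minimize a per-sentence cross-entropy loss on labeled structured abstracts such as the RCT dataset; generalization can then be checked by evaluating the trained model on abstracts drawn from a different corpus.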
Related papers
- A Benchmark for End-to-End Zero-Shot Biomedical Relation Extraction with LLMs: Experiments with OpenAI Models [7.923208324118286]
We study patterns in the performance of OpenAI LLMs across a diverse sampling of biomedical relation extraction tasks.
We find the zero-shot performance to be close to that of fine-tuned methods.
arXiv Detail & Related papers (2025-04-05T07:08:54Z)
- Parameter Efficient Fine-Tuning of Segment Anything Model [2.6579756198224347]
Vision foundation models, such as Segment Anything Model (SAM), address this issue through broad segmentation capabilities.
We provide an implementation of QLoRA for vision transformers and a new approach for resource-efficient finetuning of SAM.
arXiv Detail & Related papers (2025-02-01T12:39:17Z)
- Knowledge Hierarchy Guided Biological-Medical Dataset Distillation for Domain LLM Training [10.701353329227722]
We propose a framework that automates the distillation of high-quality textual training data from the extensive scientific literature.
Our approach self-evaluates and generates questions that are more closely aligned with the biomedical domain.
Our approach substantially improves question-answering tasks compared to pre-trained models from the life sciences domain.
arXiv Detail & Related papers (2025-01-25T07:20:44Z)
- NeuroSym-BioCAT: Leveraging Neuro-Symbolic Methods for Biomedical Scholarly Document Categorization and Question Answering [0.14999444543328289]
We introduce a novel approach that integrates an optimized topic modelling framework, OVB-LDA, with the BI-POP CMA-ES optimization technique for enhanced scholarly document abstract categorization.
We employ the distilled MiniLM model, fine-tuned on domain-specific data, for high-precision answer extraction.
arXiv Detail & Related papers (2024-10-29T14:45:12Z)
- PathInsight: Instruction Tuning of Multimodal Datasets and Models for Intelligence Assisted Diagnosis in Histopathology [7.87900104748629]
We have meticulously compiled a dataset of approximately 45,000 cases, covering over 6 different tasks.
We have fine-tuned multimodal large models, specifically LLaVA, Qwen-VL, InternLM, with this dataset to enhance instruction-based performance.
arXiv Detail & Related papers (2024-08-13T17:05:06Z)
- Universal and Extensible Language-Vision Models for Organ Segmentation and Tumor Detection from Abdominal Computed Tomography [50.08496922659307]
We propose a universal framework enabling a single model, termed Universal Model, to deal with multiple public datasets and adapt to new classes.
Firstly, we introduce a novel language-driven parameter generator that leverages language embeddings from large language models.
Secondly, the conventional output layers are replaced with lightweight, class-specific heads, allowing Universal Model to simultaneously segment 25 organs and six types of tumors.
arXiv Detail & Related papers (2024-05-28T16:55:15Z)
- BioREx: Improving Biomedical Relation Extraction by Leveraging Heterogeneous Datasets [7.7587371896752595]
Biomedical relation extraction (RE) is a central task in biomedical natural language processing (NLP) research.
We present a novel framework for systematically addressing the data heterogeneity of individual datasets and combining them into a large dataset.
Our evaluation shows that BioREx achieves significantly higher performance than the benchmark system trained on the individual dataset.
arXiv Detail & Related papers (2023-06-19T22:48:18Z)
- Competence-based Multimodal Curriculum Learning for Medical Report Generation [98.10763792453925]
We propose a Competence-based Multimodal Curriculum Learning framework (CMCL) to alleviate the data bias and make the best use of available data.
Specifically, CMCL simulates the learning process of radiologists and optimizes the model in a step-by-step manner.
Experiments on the public IU-Xray and MIMIC-CXR datasets show that CMCL can be incorporated into existing models to improve their performance.
arXiv Detail & Related papers (2022-06-24T08:16:01Z)
- One Model is All You Need: Multi-Task Learning Enables Simultaneous Histology Image Segmentation and Classification [3.8725005247905386]
We present a multi-task learning approach for segmentation and classification of tissue regions.
We enable simultaneous prediction with a single network.
As a result of feature sharing, we also show that the learned representation can be used to improve downstream tasks.
arXiv Detail & Related papers (2022-02-28T20:22:39Z)
- Towards an Automatic Analysis of CHO-K1 Suspension Growth in Microfluidic Single-cell Cultivation [63.94623495501023]
We propose a novel Machine Learning architecture, which allows us to infuse a deep neural network with human-powered abstraction on the level of data.
Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
arXiv Detail & Related papers (2020-10-20T08:36:51Z)
- Select-ProtoNet: Learning to Select for Few-Shot Disease Subtype Prediction [55.94378672172967]
We focus on the few-shot disease subtype prediction problem, identifying subgroups of similar patients.
We introduce meta learning techniques to develop a new model, which can extract the common experience or knowledge from interrelated clinical tasks.
Our new model is built upon a carefully designed meta-learner, called Prototypical Network, which is a simple yet effective meta-learning machine for few-shot image classification.
arXiv Detail & Related papers (2020-09-02T02:50:30Z)
- Modeling Shared Responses in Neuroimaging Studies through MultiView ICA [94.31804763196116]
Group studies involving large cohorts of subjects are important to draw general conclusions about brain functional organization.
We propose a novel MultiView Independent Component Analysis model for group studies, where data from each subject are modeled as a linear combination of shared independent sources plus noise.
We demonstrate the usefulness of our approach first on fMRI data, where our model demonstrates improved sensitivity in identifying common sources among subjects.
arXiv Detail & Related papers (2020-06-11T17:29:53Z)
- A Systematic Approach to Featurization for Cancer Drug Sensitivity Predictions with Deep Learning [49.86828302591469]
We train >35,000 neural network models, sweeping over common featurization techniques.
We found the RNA-seq to be highly redundant and informative even with subsets larger than 128 features.
arXiv Detail & Related papers (2020-04-30T20:42:17Z)