Efficient Extraction of Pathologies from C-Spine Radiology Reports using
Multi-Task Learning
- URL: http://arxiv.org/abs/2204.04544v1
- Date: Sat, 9 Apr 2022 20:29:48 GMT
- Title: Efficient Extraction of Pathologies from C-Spine Radiology Reports using
Multi-Task Learning
- Authors: Arijit Sehanobish, Nathaniel Brown, Ishita Daga, Jayashri Pawar,
Danielle Torres, Anasuya Das, Murray Becker, Richard Herzog, Benjamin Odry,
Ron Vianu
- Abstract summary: We show that a single multi-task model can match or exceed the performance of multiple BERT-based models finetuned on various tasks.
We validate our method on our internal dataset of radiologists' reports on the cervical spine.
- Score: 3.0473556982158625
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Pretrained Transformer-based models finetuned on domain-specific corpora have
changed the landscape of NLP. Generally, if one has multiple tasks on a given
dataset, one may finetune different models or use task-specific adapters. In
this work, we show that a single multi-task model can match or exceed the
performance of multiple BERT-based models finetuned on various tasks, as well
as that of various task-specific adapter-augmented BERT-based models. We
validate our method on our internal dataset of radiologists' reports on the
cervical spine. We hypothesize that the tasks are semantically close and
related, and that multi-task learners are therefore powerful classifiers. Our
work opens the door to applying our method to radiologists' reports on various
body parts.
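To make the setup concrete, here is a minimal sketch of the multi-task architecture the abstract describes: one shared BERT encoder finetuned jointly, with a lightweight classification head per task. This is an illustration under our own assumptions, not the authors' released code; the task names, label counts, and checkpoint are hypothetical placeholders.

```python
# Minimal multi-task BERT sketch: shared encoder, one linear head per task.
# (Hypothetical example; not the paper's actual implementation.)
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MultiTaskBert(nn.Module):
    def __init__(self, model_name: str, task_num_labels: dict):
        super().__init__()
        # Shared encoder, finetuned jointly across all tasks.
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # One lightweight classification head per task.
        self.heads = nn.ModuleDict(
            {task: nn.Linear(hidden, n) for task, n in task_num_labels.items()}
        )

    def forward(self, input_ids, attention_mask, task: str):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # Use the [CLS] token representation for report-level classification.
        cls = out.last_hidden_state[:, 0, :]
        return self.heads[task](cls)

# Hypothetical pathology-extraction tasks on cervical-spine reports.
tasks = {"stenosis": 2, "disc_herniation": 2, "cord_compression": 2}
model = MultiTaskBert("bert-base-uncased", tasks)
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tok(["Mild central canal stenosis at C5-C6."], return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"], task="stenosis")
```

Because the encoder is shared, each additional task costs only one linear layer, which is what makes a single multi-task model competitive with many separately finetuned ones.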
Related papers
- A Multitask Deep Learning Model for Classification and Regression of Hyperspectral Images: Application to the large-scale dataset [44.94304541427113]
We propose a multitask deep learning model to perform multiple classification and regression tasks simultaneously on hyperspectral images.
We validated our approach on a large hyperspectral dataset called TAIGA.
A comprehensive qualitative and quantitative analysis of the results shows that the proposed method significantly outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2024-07-23T11:14:54Z)
- Learning A Multi-Task Transformer Via Unified And Customized Instruction Tuning For Chest Radiograph Interpretation [35.87795950781491]
We demonstrate a unified transformer model specifically designed for multi-modal clinical tasks by incorporating customized instruction tuning.
We first compose a multi-task training dataset comprising 13.4 million instruction and ground-truth pairs.
This unifies the various vision-intensive tasks in a single training framework with homogeneous model inputs and outputs, increasing clinical interpretability in one reading.
arXiv Detail & Related papers (2023-11-02T08:55:48Z)
- Task-Based MoE for Multitask Multilingual Machine Translation [58.20896429151824]
The mixture-of-experts (MoE) architecture has proven to be a powerful method for training deep models on diverse tasks in many applications.
In this work, we design a novel method that incorporates task information into MoE models at different granular levels with shared dynamic task-based adapters.
arXiv Detail & Related papers (2023-08-30T05:41:29Z)
- Diffusion Model is an Effective Planner and Data Synthesizer for Multi-Task Reinforcement Learning [101.66860222415512]
Multi-Task Diffusion Model (MTDiff) is a diffusion-based method that incorporates Transformer backbones and prompt learning for generative planning and data synthesis.
For generative planning, we find MTDiff outperforms state-of-the-art algorithms across 50 tasks on Meta-World and 8 maps on Maze2D.
arXiv Detail & Related papers (2023-05-29T05:20:38Z)
- Explaining the Effectiveness of Multi-Task Learning for Efficient Knowledge Extraction from Spine MRI Reports [2.5953185061765884]
We show that a single multi-tasking model can match the performance of task specific models.
We validate our observations on our internal radiologist-annotated datasets on the cervical and lumbar spine.
arXiv Detail & Related papers (2022-05-06T01:51:19Z)
- Task Adaptive Parameter Sharing for Multi-Task Learning [114.80350786535952]
Task Adaptive Parameter Sharing (TAPS) is a method for tuning a base model to a new task by adaptively modifying a small, task-specific subset of layers (a minimal sketch of this idea appears after this list).
Compared to other methods, TAPS retains high accuracy on downstream tasks while introducing few task-specific parameters.
We evaluate our method on a suite of fine-tuning tasks and architectures (ResNet, DenseNet, ViT) and show that it achieves state-of-the-art performance while being simple to implement.
arXiv Detail & Related papers (2022-03-30T23:16:07Z)
- The Effect of Diversity in Meta-Learning [79.56118674435844]
Few-shot learning aims to learn representations that can tackle novel tasks given a small number of examples.
Recent studies show that task distribution plays a vital role in the model's performance.
We study different task distributions on a myriad of models and datasets to evaluate the effect of task diversity on meta-learning algorithms.
arXiv Detail & Related papers (2022-01-27T19:39:07Z)
- XtremeDistilTransformers: Task Transfer for Task-agnostic Distillation [80.18830380517753]
We develop a new task-agnostic distillation framework XtremeDistilTransformers.
We study the transferability of several source tasks, augmentation resources and model architecture for distillation.
arXiv Detail & Related papers (2021-06-08T17:49:33Z)
- Multi-task Semi-supervised Learning for Pulmonary Lobe Segmentation [2.8016091833446617]
Pulmonary lobe segmentation is an important preprocessing task for the analysis of lung diseases.
Deep learning-based methods can outperform traditional approaches.
Deep multi-task learning is expected to exploit the labels of multiple different structures.
arXiv Detail & Related papers (2021-04-22T12:33:30Z)
- Low Resource Multi-Task Sequence Tagging -- Revisiting Dynamic Conditional Random Fields [67.51177964010967]
We compare different models for low resource multi-task sequence tagging that leverage dependencies between label sequences for different tasks.
We find that explicit modeling of inter-dependencies between task predictions outperforms single-task as well as standard multi-task models.
arXiv Detail & Related papers (2020-05-01T07:11:34Z)
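The layer-selection idea referenced in the TAPS entry above can be sketched in a few lines. This is a hedged illustration under our own assumptions, not the TAPS authors' implementation: each layer wraps its frozen base weights with a task-specific delta and a learnable gate, and a sparsity penalty on the gates (not shown) keeps the tuned subset of layers small.

```python
# Hypothetical sketch of adaptively tuning a small, task-specific subset of
# layers (in the spirit of TAPS; not the paper's actual code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedTaskLinear(nn.Module):
    """Wraps a frozen linear layer with a gated, task-specific weight delta."""

    def __init__(self, base: nn.Linear):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False              # the base model stays frozen
        self.delta = nn.Parameter(torch.zeros_like(base.weight))
        self.gate = nn.Parameter(torch.tensor(0.0))  # one learned gate per layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # A (near-)zero gate reuses the base weights unchanged; a nonzero gate
        # mixes in the task-specific delta. Penalizing sigmoid(gate) summed
        # over layers drives most gates to zero, so few layers are adapted.
        g = torch.sigmoid(self.gate)
        w = self.base.weight + g * self.delta
        return F.linear(x, w, self.base.bias)

layer = GatedTaskLinear(nn.Linear(768, 768))
y = layer(torch.randn(4, 768))  # output shape: (4, 768)
```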
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.