Towards foundation models and few-shot parameter-efficient fine-tuning
for volumetric organ segmentation
- URL: http://arxiv.org/abs/2303.17051v2
- Date: Fri, 29 Sep 2023 01:16:18 GMT
- Title: Towards foundation models and few-shot parameter-efficient fine-tuning
for volumetric organ segmentation
- Authors: Julio Silva-Rodríguez, Jose Dolz and Ismail Ben Ayed
- Abstract summary: Few-shot efficient fine-tuning (FSEFT) is a novel and realistic setting for medical image segmentation.
We introduce a novel parameter-efficient fine-tuning strategy tailored to medical image segmentation.
Our comprehensive experiments on a collection of public CT datasets for organ segmentation point to the potential of vision adapters and transductive inference.
- Score: 21.588709922418765
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the recent rise of foundation models in computer vision and NLP, the
pretrain-and-adapt strategy, where a large-scale model is fine-tuned on
downstream tasks, is gaining popularity. However, traditional fine-tuning
approaches may still require significant resources and yield sub-optimal
results when the labeled data of the target task is scarce. This is especially
the case in clinical settings. To address this challenge, we formalize few-shot
efficient fine-tuning (FSEFT), a novel and realistic setting for medical image
segmentation. Furthermore, we introduce a novel parameter-efficient fine-tuning
strategy tailored to medical image segmentation, with (a) spatial adapter
modules that are more appropriate for dense prediction tasks; and (b) a
constrained transductive inference, which leverages task-specific prior
knowledge. Our comprehensive experiments on a collection of public CT datasets
for organ segmentation reveal the limitations of standard fine-tuning methods
in few-shot scenarios, point to the potential of vision adapters and
transductive inference, and confirm the suitability of foundation models.
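To make the adapter idea more concrete, below is a minimal PyTorch sketch of a spatial (3D convolutional) adapter attached to a frozen volumetric backbone. The module name, channel sizes, and placement are illustrative assumptions for this summary, not the authors' exact implementation.

```python
# Minimal sketch of a spatial (3D convolutional) adapter for a frozen
# volumetric segmentation backbone. Names and sizes are illustrative.
import torch
import torch.nn as nn

class SpatialAdapter3D(nn.Module):
    """Bottleneck adapter with a 3x3x3 convolution to retain spatial context."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        hidden = max(channels // reduction, 1)
        self.down = nn.Conv3d(channels, hidden, kernel_size=1)
        self.spatial = nn.Conv3d(hidden, hidden, kernel_size=3, padding=1)
        self.up = nn.Conv3d(hidden, channels, kernel_size=1)
        self.act = nn.GELU()
        # Zero-initialize the output projection so the adapter starts as identity.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection: the frozen features pass through unchanged at init.
        return x + self.up(self.act(self.spatial(self.act(self.down(x)))))

# Usage sketch: freeze the pretrained backbone and train only the adapters.
# backbone = load_pretrained_segmentation_model()   # hypothetical loader
# for p in backbone.parameters():
#     p.requires_grad = False
# adapter = SpatialAdapter3D(channels=64)            # inserted after an encoder stage
# optimizer = torch.optim.AdamW(adapter.parameters(), lr=1e-3)
```

Only the adapter parameters would be optimized during few-shot fine-tuning, which is what makes the adaptation parameter-efficient.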
Related papers
- Orthogonal Projection Subspace to Aggregate Online Prior-knowledge for Continual Test-time Adaptation [67.80294336559574]
Continual Test Time Adaptation (CTTA) is a task that requires a source pre-trained model to continually adapt to new scenarios. We propose a novel pipeline, Orthogonal Projection Subspace to aggregate online Prior-knowledge, dubbed OoPk.
arXiv Detail & Related papers (2025-06-23T18:17:39Z) - Parameter-Efficient Continual Fine-Tuning: A Survey [5.59258786465086]
We believe the next breakthrough in AI lies in enabling efficient adaptation to evolving environments.
One alternative to efficiently adapt these large-scale models is known as Parameter-Efficient Fine-Tuning (PEFT).
arXiv Detail & Related papers (2025-04-18T17:51:51Z) - FedEFM: Federated Endovascular Foundation Model with Unseen Data [11.320026809291239]
This paper proposes a new method to train a foundation model in a decentralized federated learning setting for endovascular intervention.
We tackle the unseen data issue using differentiable Earth Mover's Distance within a knowledge distillation framework.
Our approach achieves new state-of-the-art results, contributing to advancements in endovascular intervention and robotic-assisted surgery.
arXiv Detail & Related papers (2025-01-28T14:46:38Z) - Meta-Learning Adaptable Foundation Models [37.458141335750696]
We introduce a meta-learning framework infused with PEFT in this intermediate retraining stage to learn a model that can be easily adapted to unseen tasks.
In this setting, we demonstrate the suboptimality of standard retraining for finding an adaptable set of parameters.
We then apply these theoretical insights to retraining the RoBERTa model to predict the continuation of conversations within the ConvAI2 dataset.
arXiv Detail & Related papers (2024-10-29T17:24:18Z) - Day-Night Adaptation: An Innovative Source-free Adaptation Framework for Medical Image Segmentation [51.520294290813865]
We propose a novel source-free adaptation framework called Day-Night Adaptation (DyNA).
During the day, a low-frequency prompt is trained to adapt the frozen model to each test sample.
During the night, we reuse test data collected from the day and introduce a global student model to bridge the knowledge between teacher and student models.
arXiv Detail & Related papers (2024-10-17T12:02:29Z) - Low-Rank Continual Pyramid Vision Transformer: Incrementally Segment Whole-Body Organs in CT with Light-Weighted Adaptation [10.746776960260297]
We propose a new continual whole-body organ segmentation model with light-weighted low-rank adaptation (LoRA).
We first train and freeze a pyramid vision transformer (PVT) base segmentation model on the initial task, then continually add light-weighted trainable LoRA parameters to the frozen model for each new learning task.
Our proposed model continually segments new organs without catastrophic forgetting while maintaining a low rate of parameter growth.
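For readers unfamiliar with low-rank adaptation, the following is a generic sketch of a LoRA-adapted linear layer, not this paper's PVT-specific implementation: the pretrained weight stays frozen and only the small low-rank factors are trained for each task.

```python
# Generic LoRA layer sketch: the pretrained weight is frozen and a trainable
# low-rank update (scaled B @ A) is added to its output.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # freeze the pretrained weights
            p.requires_grad = False
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The low-rank path is zero at initialization, so training starts
        # exactly from the frozen pretrained model.
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)
```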
arXiv Detail & Related papers (2024-10-07T02:00:13Z) - Few-Shot Airway-Tree Modeling using Data-Driven Sparse Priors [0.0]
Few-shot learning approaches offer a cost-effective way to transfer pre-trained models using only limited annotated data.
We train a data-driven sparsification module to enhance airways efficiently in lung CT scans.
We then incorporate these sparse representations in a standard supervised segmentation pipeline as a pretraining step to enhance the performance of the DL models.
arXiv Detail & Related papers (2024-07-05T13:46:11Z) - Low-rank finetuning for LLMs: A fairness perspective [54.13240282850982]
Low-rank approximation techniques have become the de facto standard for fine-tuning Large Language Models.
This paper investigates the effectiveness of these methods in capturing the shift of fine-tuning datasets from the initial pre-trained data distribution.
We show that low-rank fine-tuning inadvertently preserves undesirable biases and toxic behaviors.
arXiv Detail & Related papers (2024-05-28T20:43:53Z) - Meta Transfer of Self-Supervised Knowledge: Foundation Model in Action
for Post-Traumatic Epilepsy Prediction [0.6291443816903801]
We introduce a novel training strategy for our foundation model.
We demonstrate that the proposed strategy significantly improves task performance on small-scale clinical datasets.
Results further demonstrate the enhanced generalizability of our foundation model.
arXiv Detail & Related papers (2023-12-21T07:42:49Z) - Informative Data Mining for One-Shot Cross-Domain Semantic Segmentation [84.82153655786183]
We propose a novel framework called Informative Data Mining (IDM) to enable efficient one-shot domain adaptation for semantic segmentation.
IDM provides an uncertainty-based selection criterion to identify the most informative samples, which facilitates quick adaptation and reduces redundant training.
Our approach outperforms existing methods and achieves a new state-of-the-art one-shot performance of 56.7%/55.4% on the GTA5/SYNTHIA to Cityscapes adaptation tasks.
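As a generic illustration of uncertainty-based sample selection (an assumption about the general mechanism, not IDM's exact criterion), one can rank unlabeled target samples by predictive entropy and keep the most uncertain ones:

```python
# Generic uncertainty-based selection sketch: rank unlabeled samples by
# predictive entropy and return the indices of the k most uncertain ones.
import torch

def select_most_uncertain(probs: torch.Tensor, k: int) -> torch.Tensor:
    """probs: (num_samples, num_classes, H, W) softmax outputs."""
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)   # (N, H, W)
    per_sample = entropy.mean(dim=(1, 2))                         # average over pixels
    return per_sample.topk(k).indices                             # top-k sample indices
```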
arXiv Detail & Related papers (2023-09-25T15:56:01Z) - Learnable Weight Initialization for Volumetric Medical Image Segmentation [66.3030435676252]
We propose a learnable weight-based hybrid medical image segmentation approach.
Our approach is easy to integrate into any hybrid model and requires no external training data.
Experiments on multi-organ and lung cancer segmentation tasks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-06-15T17:55:05Z) - Contextual Squeeze-and-Excitation for Efficient Few-Shot Image
Classification [57.36281142038042]
We present a new adaptive block called Contextual Squeeze-and-Excitation (CaSE) that adjusts a pretrained neural network on a new task to significantly improve performance.
We also present a new training protocol based on Coordinate-Descent called UpperCaSE that exploits meta-trained CaSE blocks and fine-tuning routines for efficient adaptation.
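The underlying squeeze-and-excitation mechanism can be sketched as a channel gate computed from pooled activations; note that this is the standard SE form, whereas the actual CaSE block additionally conditions its gate on the task's context images, which this simplification omits.

```python
# Standard squeeze-and-excitation style channel gate (simplified illustration).
import torch
import torch.nn as nn

class ChannelGate(nn.Module):
    """Rescales feature channels with a gate computed from pooled activations."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        hidden = max(channels // reduction, 1)
        self.fc = nn.Sequential(
            nn.Linear(channels, hidden), nn.ReLU(),
            nn.Linear(hidden, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width)
        pooled = x.mean(dim=(2, 3))                   # squeeze: global average pool
        gate = self.fc(pooled)                        # excitation: per-channel weights
        return x * gate.unsqueeze(-1).unsqueeze(-1)   # rescale the feature map
```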
arXiv Detail & Related papers (2022-06-20T15:25:08Z) - Kronecker Factorization for Preventing Catastrophic Forgetting in
Large-scale Medical Entity Linking [7.723047334864811]
In the medical domain, sequential training on tasks may sometimes be the only way to train models.
This, however, can lead to catastrophic forgetting, i.e., a substantial drop in accuracy on prior tasks when a model is updated for a new task.
We show the effectiveness of this technique on the important and illustrative task of medical entity linking across three datasets.
arXiv Detail & Related papers (2021-11-11T01:51:01Z) - Transformer-Based Source-Free Domain Adaptation [134.67078085569017]
We study the task of source-free domain adaptation (SFDA), where the source data are not available during target adaptation.
We propose a generic and effective framework based on Transformer, named TransDA, for learning a generalized model for SFDA.
arXiv Detail & Related papers (2021-05-28T23:06:26Z) - Uniform Priors for Data-Efficient Transfer [65.086680950871]
We show that features that are most transferable have high uniformity in the embedding space.
We evaluate the regularization on its ability to facilitate adaptation to unseen tasks and data.
arXiv Detail & Related papers (2020-06-30T04:39:36Z)