SLPT: Selective Labeling Meets Prompt Tuning on Label-Limited Lesion
Segmentation
- URL: http://arxiv.org/abs/2308.04911v1
- Date: Wed, 9 Aug 2023 12:22:49 GMT
- Title: SLPT: Selective Labeling Meets Prompt Tuning on Label-Limited Lesion
Segmentation
- Authors: Fan Bai, Ke Yan, Xiaoyu Bai, Xinyu Mao, Xiaoli Yin, Jingren Zhou, Yu
Shi, Le Lu, Max Q.-H. Meng
- Abstract summary: We propose a framework that combines selective labeling with prompt tuning to boost performance under limited labels.
We evaluate our method on liver tumor segmentation and achieve state-of-the-art performance, outperforming traditional fine-tuning with only 6% of tunable parameters.
- Score: 57.37875162629063
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Medical image analysis using deep learning is often challenged by limited
labeled data and high annotation costs. Fine-tuning the entire network in
label-limited scenarios can lead to overfitting and suboptimal performance.
Recently, prompt tuning has emerged as a more promising technique that
introduces a few additional tunable parameters as prompts to a task-agnostic
pre-trained model, and updates only these parameters using supervision from
limited labeled data while keeping the pre-trained model unchanged. However,
previous work has overlooked the importance of selective labeling in downstream
tasks, which aims to select the most valuable downstream samples for annotation
to achieve the best performance with minimum annotation cost. To address this,
we propose a framework that combines selective labeling with prompt tuning
(SLPT) to boost performance under limited labels. Specifically, we introduce a
feature-aware prompt updater to guide prompt tuning and a TandEm Selective
LAbeling (TESLA) strategy. TESLA includes unsupervised diversity selection and
supervised selection using prompt-based uncertainty. In addition, we propose a
diversified visual prompt tuning strategy to provide multi-prompt-based
discrepant predictions for TESLA. We evaluate our method on liver tumor
segmentation and achieve state-of-the-art performance, outperforming
traditional fine-tuning while using only 6% of its tunable parameters, and
reaching 94% of full-data performance by labeling only 5% of the data.
Related papers
- Enhancing Zero-Shot Vision Models by Label-Free Prompt Distribution Learning and Bias Correcting [55.361337202198925]
Vision-language models, such as CLIP, have shown impressive generalization capacities when using appropriate text descriptions.
We propose a label-Free prompt distribution learning and bias correction framework, dubbed **Frolic**, which boosts zero-shot performance without the need for labeled data.
arXiv Detail & Related papers (2024-10-25T04:00:45Z)
- Selective Fine-tuning on LLM-labeled Data May Reduce Reliance on Human Annotation: A Case Study Using Schedule-of-Event Table Detection [2.238930812771604]
We fine-tune PaLM-2 with parameter-efficient fine-tuning (PEFT) using noisy labels obtained from gemini-pro 1.0.
We show that PaLM-2 fine-tuned on those labels achieves performance that exceeds that of gemini-pro 1.0 and other LLMs.
arXiv Detail & Related papers (2024-05-09T20:45:58Z)
- ActiveDC: Distribution Calibration for Active Finetuning [36.64444238742072]
We propose a new method called ActiveDC for the active finetuning task.
We calibrate the distribution of the selected samples by exploiting implicit category information in the unlabeled pool.
The results indicate that ActiveDC consistently outperforms the baseline performance in all image classification tasks.
arXiv Detail & Related papers (2023-11-13T14:35:18Z)
- Unified Low-Resource Sequence Labeling by Sample-Aware Dynamic Sparse Finetuning [24.765911297156855]
FISH-DIP is a sample-aware dynamic sparse finetuning strategy that selectively focuses on a fraction of parameters.
We demonstrate that FISH-DIP can smoothly optimize the model in low-resource settings, offering up to 40% performance improvements.
arXiv Detail & Related papers (2023-11-07T06:19:37Z)
- IDEAL: Influence-Driven Selective Annotations Empower In-Context Learners in Large Language Models [66.32043210237768]
This paper introduces an influence-driven selective annotation method.
It aims to minimize annotation costs while improving the quality of in-context examples.
Experiments confirm the superiority of the proposed method on various benchmarks.
arXiv Detail & Related papers (2023-10-16T22:53:54Z)
- Active Finetuning: Exploiting Annotation Budget in the Pretraining-Finetuning Paradigm [132.9949120482274]
This paper focuses on the selection of samples for annotation in the pretraining-finetuning paradigm.
We propose a novel method called ActiveFT for the active finetuning task, which selects a subset of data whose distribution is similar to that of the entire unlabeled pool.
Extensive experiments show the leading performance and high efficiency of ActiveFT over baselines on both image classification and semantic segmentation.
arXiv Detail & Related papers (2023-03-25T07:17:03Z)
- LESS: Label-Efficient Semantic Segmentation for LiDAR Point Clouds [62.49198183539889]
We propose a label-efficient semantic segmentation pipeline for outdoor scenes with LiDAR point clouds.
Our method co-designs an efficient labeling process with semi/weakly supervised learning.
Our proposed method is highly competitive even with its fully supervised counterpart trained on 100% of the labels.
arXiv Detail & Related papers (2022-10-14T19:13:36Z)
- Partial sequence labeling with structured Gaussian Processes [8.239028141030621]
We propose structured Gaussian Processes for partial sequence labeling.
It encodes uncertainty in the prediction and needs no extra effort for model selection and hyperparameter learning.
It is evaluated on several sequence labeling tasks and the experimental results show the effectiveness of the proposed model.
arXiv Detail & Related papers (2022-09-20T00:56:49Z)
- Dash: Semi-Supervised Learning with Dynamic Thresholding [72.74339790209531]
We propose a semi-supervised learning (SSL) approach that uses unlabeled examples to train models.
Our proposed approach, Dash, adaptively selects which unlabeled examples to train on (a sketch of the thresholding idea follows this list).
arXiv Detail & Related papers (2021-09-01T23:52:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.