Task-Aware Active Learning for Endoscopic Image Analysis
- URL: http://arxiv.org/abs/2204.03440v1
- Date: Thu, 7 Apr 2022 13:36:45 GMT
- Title: Task-Aware Active Learning for Endoscopic Image Analysis
- Authors: Shrawan Kumar Thapa, Pranav Poudel, Binod Bhattarai, Danail Stoyanov
- Abstract summary: We investigate an active learning paradigm to reduce the number of training examples.
We propose a novel task-aware active learning pipeline and apply it to two important tasks in endoscopic image analysis.
- Score: 18.230148396607625
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Semantic segmentation of polyps and depth estimation are two important
research problems in endoscopic image analysis. One of the main obstacles to
conducting research on these problems is the lack of annotated data. Endoscopic
annotation requires the specialist knowledge of expert endoscopists and is
therefore difficult to organise, expensive, and time-consuming. To address this
problem, we investigate an active learning paradigm that reduces the number of
training examples by selecting the most discriminative and diverse unlabelled
examples for the task under consideration. Most existing active learning
pipelines are task-agnostic in nature and are often sub-optimal for the end
task. In this paper, we propose a novel task-aware active learning pipeline and
apply it to two important tasks in endoscopic image analysis: semantic
segmentation and depth estimation. We compare our method against competitive
baselines and, in our experiments, observe a substantial improvement over them.
Code is available at https://github.com/thetna/endo-active-learn.
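As a rough illustration of the "discriminative and diverse" selection criterion the abstract describes (a generic sketch, not the paper's actual pipeline; the scoring rule and function names here are assumptions), a greedy acquisition step that trades off per-sample uncertainty against distance to the already-selected set might look like:

```python
import numpy as np

def select_batch(uncertainty, features, batch_size):
    """Greedily pick unlabelled samples that are both uncertain and diverse.

    uncertainty : (n,) array of per-sample uncertainty scores (e.g. entropy)
    features    : (n, d) array of feature embeddings
    batch_size  : number of samples to select for annotation
    """
    selected = [int(np.argmax(uncertainty))]  # seed with the most uncertain sample
    # distance of every sample to its nearest already-selected sample
    min_dist = np.linalg.norm(features - features[selected[0]], axis=1)
    while len(selected) < batch_size:
        score = uncertainty * min_dist        # trade off uncertainty vs. diversity
        score[selected] = -np.inf             # never re-pick a chosen sample
        nxt = int(np.argmax(score))
        selected.append(nxt)
        min_dist = np.minimum(
            min_dist, np.linalg.norm(features - features[nxt], axis=1))
    return selected
```

In practice the uncertainty term would come from the task model itself (which is what makes a pipeline task-aware rather than task-agnostic), and the diversity term from its learned feature space.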
Related papers
- Guidelines for Cerebrovascular Segmentation: Managing Imperfect Annotations in the context of Semi-Supervised Learning [3.231698506153459]
Supervised learning methods achieve excellent performances when fed with a sufficient amount of labeled data.
Such labels are typically highly time-consuming, error-prone and expensive to produce.
Semi-supervised learning approaches leverage both labeled and unlabeled data, and are very useful when only a small fraction of the dataset is labeled.
arXiv Detail & Related papers (2024-04-02T09:31:06Z)
- GenSelfDiff-HIS: Generative Self-Supervision Using Diffusion for Histopathological Image Segmentation [5.049466204159458]
Self-supervised learning (SSL) is an alternative paradigm that provides some respite by constructing models utilizing only the unannotated data.
In this paper, we propose an SSL approach for segmenting histopathological images via generative diffusion models.
Our method is based on the observation that diffusion models effectively solve an image-to-image translation task akin to a segmentation task.
arXiv Detail & Related papers (2023-09-04T09:49:24Z)
- Pre-training Multi-task Contrastive Learning Models for Scientific Literature Understanding [52.723297744257536]
Pre-trained language models (LMs) have shown effectiveness in scientific literature understanding tasks.
We propose a multi-task contrastive learning framework, SciMult, to facilitate common knowledge sharing across different literature understanding tasks.
arXiv Detail & Related papers (2023-05-23T16:47:22Z)
- Understanding and Mitigating Overfitting in Prompt Tuning for Vision-Language Models [108.13378788663196]
We propose Subspace Prompt Tuning (SubPT) to project the gradients in back-propagation onto the low-rank subspace spanned by the early-stage gradient flow eigenvectors during the entire training process.
We equip CoOp with Novel Learner Feature (NFL) to enhance the generalization ability of the learned prompts onto novel categories beyond the training set.
arXiv Detail & Related papers (2022-11-04T02:06:22Z)
- What Makes Good Contrastive Learning on Small-Scale Wearable-based Tasks? [59.51457877578138]
We study contrastive learning on the wearable-based activity recognition task.
This paper presents an open-source PyTorch library, CL-HAR, which can serve as a practical tool for researchers.
arXiv Detail & Related papers (2022-02-12T06:10:15Z)
- Distribution Matching for Heterogeneous Multi-Task Learning: a Large-scale Face Study [75.42182503265056]
Multi-Task Learning has emerged as a methodology in which multiple tasks are jointly learned by a shared learning algorithm.
We deal with heterogeneous MTL, simultaneously addressing detection, classification & regression problems.
We build FaceBehaviorNet, the first framework for large-scale face analysis, by jointly learning all facial behavior tasks.
arXiv Detail & Related papers (2021-05-08T22:26:52Z)
- Towards Robust Partially Supervised Multi-Structure Medical Image Segmentation on Small-Scale Data [123.03252888189546]
We propose Vicinal Labels Under Uncertainty (VLUU) to bridge the methodological gaps in partially supervised learning (PSL) under data scarcity.
Motivated by multi-task learning and vicinal risk minimization, VLUU transforms the partially supervised problem into a fully supervised problem by generating vicinal labels.
Our research suggests a new research direction in label-efficient deep learning with partial supervision.
arXiv Detail & Related papers (2020-11-28T16:31:00Z)
- Unified Representation Learning for Efficient Medical Image Analysis [0.623075162128532]
We propose a multi-task training approach for medical image analysis using a unified modality-specific feature representation (UMS-Rep)
Our results demonstrate that the proposed approach reduces the overall demand for computational resources and improves target task generalization and performance.
arXiv Detail & Related papers (2020-06-19T16:52:16Z)
- Confident Coreset for Active Learning in Medical Image Analysis [57.436224561482966]
We propose a novel active learning method, confident coreset, which considers both uncertainty and distribution for effectively selecting informative samples.
By comparative experiments on two medical image analysis tasks, we show that our method outperforms other active learning methods.
arXiv Detail & Related papers (2020-04-05T13:46:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.