Task Fingerprinting for Meta Learning in Biomedical Image Analysis
- URL: http://arxiv.org/abs/2107.03949v1
- Date: Thu, 8 Jul 2021 16:20:28 GMT
- Title: Task Fingerprinting for Meta Learning in Biomedical Image Analysis
- Authors: Patrick Godau and Lena Maier-Hein
- Abstract summary: Shortage of annotated data is one of the greatest bottlenecks in biomedical image analysis.
In this paper, we address the problem of quantifying task similarity with a concept that we refer to as task fingerprinting.
The concept involves converting a given task, represented by imaging data and corresponding labels, to a fixed-length vector representation.
- Score: 0.6685643907416715
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Shortage of annotated data is one of the greatest bottlenecks in biomedical
image analysis. Meta learning studies how learning systems can increase in
efficiency through experience and could thus evolve as an important concept to
overcome data sparsity. However, the core capability of meta learning-based
approaches is the identification of similar previous tasks given a new task - a
challenge largely unexplored in the biomedical imaging domain. In this paper,
we address the problem of quantifying task similarity with a concept that we
refer to as task fingerprinting. The concept involves converting a given task,
represented by imaging data and corresponding labels, to a fixed-length vector
representation. In fingerprint space, different tasks can be directly compared
irrespective of their data set sizes, types of labels or specific resolutions.
An initial feasibility study in the field of surgical data science (SDS) with
26 classification tasks from various medical and non-medical domains suggests
that task fingerprinting could be leveraged for both (1) selecting appropriate
data sets for pretraining and (2) selecting appropriate architectures for a new
task. Task fingerprinting could thus become an important tool for meta learning
in SDS and other fields of biomedical image analysis.
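The abstract describes mapping a task (imaging data plus labels) to a fixed-length vector so that tasks of different sizes, label types, and resolutions become directly comparable. The paper does not specify the embedding here, so the following is only a minimal illustrative sketch under assumed choices: a stand-in feature extractor, a mean/std pooling fingerprint, and cosine distance in fingerprint space. Names such as `task_fingerprint` and `task_distance` are hypothetical, not from the paper.

```python
import numpy as np

def task_fingerprint(images, labels, extractor, dim=64):
    """Map a task (images + labels) to a fixed-length vector.

    Illustrative placeholder only: pools per-image features by mean and
    standard deviation and appends a coarse label statistic, then pads or
    truncates to `dim` entries so every task yields the same length.
    """
    feats = np.stack([extractor(img) for img in images])          # (n, d)
    stats = np.concatenate([feats.mean(axis=0), feats.std(axis=0),
                            [float(len(set(labels)))]])           # pool over samples
    out = np.zeros(dim)
    out[:min(dim, stats.size)] = stats[:dim]                      # fixed length
    return out

def task_distance(fp_a, fp_b):
    """Cosine distance between two task fingerprints."""
    denom = np.linalg.norm(fp_a) * np.linalg.norm(fp_b)
    return 1.0 - float(fp_a @ fp_b) / denom if denom else 1.0

# Toy usage: two random "tasks" with different label sets but comparable fingerprints.
rng = np.random.default_rng(0)
extractor = lambda img: img.reshape(-1)[:8]                       # stand-in feature extractor
task_a = ([rng.random((4, 4)) for _ in range(10)], [0, 1] * 5)
task_b = ([rng.random((4, 4)) for _ in range(10)], [0, 1, 2, 3, 4] * 2)
fp_a = task_fingerprint(*task_a, extractor)
fp_b = task_fingerprint(*task_b, extractor)
print(fp_a.shape, fp_b.shape)  # both fixed-length, despite differing label sets
```

Because every task lands in the same vector space, nearest-neighbor search over fingerprints of previously solved tasks could then drive the paper's two suggested uses: picking pretraining datasets and picking architectures for a new task.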
Related papers
- Self-Supervised Learning for Medical Image Data with Anatomy-Oriented Imaging Planes [28.57933404578436]
We propose two complementary pretext tasks for medical image data.
The first is to learn the relative orientation between the imaging planes and is implemented as regressing their intersecting lines.
The second exploits parallel imaging planes to regress their relative slice locations within a stack.
arXiv Detail & Related papers (2024-03-25T07:34:06Z)
- YOLO-MED: Multi-Task Interaction Network for Biomedical Images [18.535117490442953]
YOLO-Med is an efficient end-to-end multi-task network capable of concurrently performing object detection and semantic segmentation.
Our model exhibits promising results in balancing accuracy and speed when evaluated on the Kvasir-seg dataset and a private biomedical image dataset.
arXiv Detail & Related papers (2024-03-01T03:20:42Z)
- Source Identification: A Self-Supervision Task for Dense Prediction [8.744460886823322]
We propose a new self-supervision task called source identification (SI).
Synthetic images are generated by fusing multiple source images and the network's task is to reconstruct the original images, given the fused images.
We validate our method on two medical image segmentation tasks: brain tumor segmentation and white matter hyperintensities segmentation.
arXiv Detail & Related papers (2023-07-05T12:27:58Z)
- MulGT: Multi-task Graph-Transformer with Task-aware Knowledge Injection and Domain Knowledge-driven Pooling for Whole Slide Image Analysis [17.098951643252345]
Whole slide images (WSIs) have been widely used to assist automated diagnosis with deep learning.
We present a novel multi-task framework (i.e., MulGT) for WSI analysis based on a specially designed Graph-Transformer.
arXiv Detail & Related papers (2023-02-21T10:00:58Z)
- Self-Supervised Correction Learning for Semi-Supervised Biomedical Image Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv Detail & Related papers (2023-01-12T08:19:46Z)
- Learning to Exploit Temporal Structure for Biomedical Vision-Language Processing [53.89917396428747]
Self-supervised learning in vision-language processing exploits semantic alignment between imaging and text modalities.
We explicitly account for prior images and reports when available during both training and fine-tuning.
Our approach, named BioViL-T, uses a CNN-Transformer hybrid multi-image encoder trained jointly with a text model.
arXiv Detail & Related papers (2023-01-11T16:35:33Z)
- Leveraging Human Selective Attention for Medical Image Analysis with Limited Training Data [72.1187887376849]
The selective attention mechanism helps the cognition system focus on task-relevant visual clues by ignoring the presence of distractors.
We propose a framework to leverage gaze for medical image analysis tasks with small training data.
Our method is demonstrated to achieve superior performance on both 3D tumor segmentation and 2D chest X-ray classification tasks.
arXiv Detail & Related papers (2021-12-02T07:55:25Z)
- Domain Generalization on Medical Imaging Classification using Episodic Training with Task Augmentation [62.49837463676111]
We propose a novel scheme of episodic training with task augmentation on medical imaging classification.
Motivated by the limited number of source domains in real-world medical deployment, we consider the unique task-level overfitting.
arXiv Detail & Related papers (2021-06-13T03:56:59Z)
- Factors of Influence for Transfer Learning across Diverse Appearance Domains and Task Types [50.1843146606122]
A simple form of transfer learning is common in current state-of-the-art computer vision models.
Previous systematic studies of transfer learning have been limited and the circumstances in which it is expected to work are not fully understood.
In this paper we carry out an extensive experimental exploration of transfer learning across vastly different image domains.
arXiv Detail & Related papers (2021-03-24T16:24:20Z)
- Collaborative Unsupervised Domain Adaptation for Medical Image Diagnosis [102.40869566439514]
We seek to exploit rich labeled data from relevant domains to help learning in the target task via Unsupervised Domain Adaptation (UDA).
Unlike most UDA methods that rely on clean labeled data or assume samples are equally transferable, we innovatively propose a Collaborative Unsupervised Domain Adaptation algorithm.
We theoretically analyze the generalization performance of the proposed method, and also empirically evaluate it on both medical and general images.
arXiv Detail & Related papers (2020-07-05T11:49:17Z)
- Unified Representation Learning for Efficient Medical Image Analysis [0.623075162128532]
We propose a multi-task training approach for medical image analysis using a unified modality-specific feature representation (UMS-Rep).
Our results demonstrate that the proposed approach reduces the overall demand for computational resources and improves target task generalization and performance.
arXiv Detail & Related papers (2020-06-19T16:52:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.