Additive Angular Margin for Few Shot Learning to Classify Clinical
Endoscopy Images
- URL: http://arxiv.org/abs/2003.10033v2
- Date: Thu, 26 Mar 2020 20:28:04 GMT
- Title: Additive Angular Margin for Few Shot Learning to Classify Clinical
Endoscopy Images
- Authors: Sharib Ali, Binod Bhattarai, Tae-Kyun Kim, and Jens Rittscher
- Abstract summary: We propose a few-shot learning approach that requires less training data and can be used to predict label classes of test samples from an unseen dataset.
We compare our approach to several established methods on a large cohort of multi-center, multi-organ, and multi-modal endoscopy data.
- Score: 42.74958357195011
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Endoscopy is a widely used imaging modality to diagnose and treat diseases in
hollow organs such as the gastrointestinal tract, the kidney, and the
liver. However, the varied modalities and differing imaging protocols used
at various clinical centers impose significant challenges when generalising
deep learning models. Moreover, the assembly of large datasets from different
clinical centers can introduce a huge label bias that renders any learnt model
unusable. Furthermore, when a new modality is used or images with rare
patterns are present, a large amount of similar image data and the corresponding
labels is required to train these models. In this work, we propose to use a
few-shot learning approach that requires less training data and can be used to
predict label classes of test samples from an unseen dataset. We propose a
novel additive angular margin metric within the prototypical network
framework in a few-shot learning setting. We compare our approach to several
established methods on a large cohort of multi-center, multi-organ, and
multi-modal endoscopy data. The proposed algorithm outperforms existing
state-of-the-art methods.
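The core idea can be sketched as follows: compute class prototypes from the support set as in a prototypical network, then apply an additive angular margin (in the style of ArcFace) to the true-class angle before the softmax. This is an illustrative reconstruction, not the authors' released code; the margin `m`, scale `s`, embedding dimension, and episode shape are assumptions.

```python
import numpy as np

def prototypes(support, labels, n_classes):
    """Mean embedding per class from the support set."""
    return np.stack([support[labels == c].mean(axis=0) for c in range(n_classes)])

def angular_margin_logits(query, protos, true_class, m=0.2, s=10.0):
    """Cosine similarity between a query and each prototype, with an
    additive angular margin m applied to the true-class angle."""
    q = query / np.linalg.norm(query)
    p = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    cos = p @ q                             # cosine similarity per class
    theta = np.arccos(np.clip(cos, -1.0, 1.0))
    theta[true_class] += m                  # widen the true-class angle
    return s * np.cos(theta)                # scaled logits for cross-entropy

# toy 2-way 3-shot episode with 4-d embeddings
rng = np.random.default_rng(0)
support = rng.normal(size=(6, 4))
labels = np.array([0, 0, 0, 1, 1, 1])
protos = prototypes(support, labels, n_classes=2)
query = rng.normal(size=4)
logits = angular_margin_logits(query, protos, true_class=0)
probs = np.exp(logits) / np.exp(logits).sum()   # softmax over classes
```

The margin makes the true class harder to score highly during training, which encourages a larger angular separation between class clusters in the embedding space.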
Related papers
- MedFMC: A Real-world Dataset and Benchmark For Foundation Model
Adaptation in Medical Image Classification [41.16626194300303]
Foundation models, often pre-trained with large-scale data, have achieved paramount success in jump-starting various vision and language applications.
Recent advances further enable adapting foundation models in downstream tasks efficiently using only a few training samples.
Yet, the application of such learning paradigms in medical image analysis remains scarce due to the shortage of publicly accessible data and benchmarks.
arXiv Detail & Related papers (2023-06-16T01:46:07Z)
- Domain Generalization for Mammographic Image Analysis with Contrastive
Learning [62.25104935889111]
The training of an efficacious deep learning model requires a large amount of data with diverse styles and qualities.
A novel contrastive learning scheme is developed to equip deep learning models with better style generalization capability.
The proposed method has been evaluated extensively and rigorously with mammograms from various vendor style domains and several public datasets.
arXiv Detail & Related papers (2023-04-20T11:40:21Z)
- Ambiguous Medical Image Segmentation using Diffusion Models [60.378180265885945]
We introduce a single diffusion model-based approach that produces multiple plausible outputs by learning a distribution over group insights.
Our proposed model generates a distribution of segmentation masks by leveraging the inherent sampling process of diffusion.
Comprehensive results show that our proposed approach outperforms existing state-of-the-art ambiguous segmentation networks.
arXiv Detail & Related papers (2023-04-10T17:58:22Z)
- Multimorbidity Content-Based Medical Image Retrieval Using Proxies [37.47987844057842]
We propose a novel multi-label metric learning method that can be used for both classification and content-based image retrieval.
Our model is able to support diagnosis by predicting the presence of diseases and provide evidence for these predictions.
We demonstrate the efficacy of our approach to both classification and content-based image retrieval on two multimorbidity radiology datasets.
arXiv Detail & Related papers (2022-11-22T11:23:53Z)
- Generalized Multi-Task Learning from Substantially Unlabeled
Multi-Source Medical Image Data [11.061381376559053]
MultiMix is a new multi-task learning model that jointly learns disease classification and anatomical segmentation in a semi-supervised manner.
Our experiments with varying quantities of multi-source labeled data in the training sets confirm the effectiveness of MultiMix.
arXiv Detail & Related papers (2021-10-25T18:09:19Z)
- Few-shot segmentation of medical images based on meta-learning with
implicit gradients [0.48861336570452174]
We propose to exploit an optimization-based implicit model-agnostic meta-learning (iMAML) algorithm in a few-shot setting for medical image segmentation.
Our approach can leverage the learned weights from a diverse set of training samples and can be deployed on a new unseen dataset.
arXiv Detail & Related papers (2021-06-06T19:52:06Z)
- Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z)
- Select-ProtoNet: Learning to Select for Few-Shot Disease Subtype
Prediction [55.94378672172967]
We focus on few-shot disease subtype prediction problem, identifying subgroups of similar patients.
We introduce meta-learning techniques to develop a new model that can extract common experience or knowledge from interrelated clinical tasks.
Our new model is built upon a carefully designed meta-learner, called Prototypical Network, a simple yet effective meta-learning machine for few-shot image classification.
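The Prototypical Network mechanism referred to here can be sketched in a few lines: class prototypes are support-set means, and each query is assigned to the nearest prototype by Euclidean distance. The embedding dimension and episode below are toy values for illustration only.

```python
import numpy as np

def protonet_predict(support, labels, queries, n_classes):
    """Prototypical Network inference: prototypes are per-class support
    means; queries go to the nearest prototype (squared Euclidean)."""
    protos = np.stack([support[labels == c].mean(axis=0) for c in range(n_classes)])
    # distance from each query to each prototype, shape (n_queries, n_classes)
    d = ((queries[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)  # predicted class per query

# toy 2-way 2-shot episode in a 3-d embedding space
support = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0],   # class 0
                    [5.0, 5.0, 5.0], [5.2, 5.0, 5.0]])  # class 1
labels = np.array([0, 0, 1, 1])
queries = np.array([[0.1, 0.1, 0.0], [4.9, 5.1, 5.0]])
print(protonet_predict(support, labels, queries, n_classes=2))  # → [0 1]
```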
arXiv Detail & Related papers (2020-09-02T02:50:30Z)
- Semi-supervised Medical Image Classification with Relation-driven
Self-ensembling Model [71.80319052891817]
We present a relation-driven semi-supervised framework for medical image classification.
It exploits the unlabeled data by encouraging the prediction consistency of given input under perturbations.
Our method outperforms many state-of-the-art semi-supervised learning methods on both single-label and multi-label image classification scenarios.
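The prediction-consistency idea behind such semi-supervised methods can be sketched as follows; the toy linear classifier, Gaussian perturbation, and noise scale here are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3)) * 0.1        # toy linear classifier: 4-d input, 3 classes

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(x):
    return softmax(x @ W)

def consistency_loss(x, noise_scale=0.05):
    """Mean-squared difference between predictions for two perturbed
    views of the same unlabeled input (the self-ensembling signal)."""
    p1 = predict(x + rng.normal(scale=noise_scale, size=x.shape))
    p2 = predict(x + rng.normal(scale=noise_scale, size=x.shape))
    return ((p1 - p2) ** 2).mean()

x_unlabeled = rng.normal(size=4)
loss = consistency_loss(x_unlabeled)     # small for a perturbation-stable model
```

Minimizing this term on unlabeled images pushes the model toward predictions that are stable under perturbation, which is the signal the labeled cross-entropy loss alone does not provide.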
arXiv Detail & Related papers (2020-05-15T06:57:54Z)
- Improving Calibration and Out-of-Distribution Detection in Medical Image
Segmentation with Convolutional Neural Networks [8.219843232619551]
Convolutional Neural Networks (CNNs) have been shown to be powerful medical image segmentation models.
We advocate for multi-task learning, i.e., training a single model on several different datasets.
We show not only that a single CNN learns to automatically recognize the context and accurately segment the organ of interest in each context, but also that such a joint model often yields more accurate and better-calibrated predictions.
arXiv Detail & Related papers (2020-04-12T23:42:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.