Order-Guided Disentangled Representation Learning for Ulcerative Colitis
Classification with Limited Labels
- URL: http://arxiv.org/abs/2111.03815v1
- Date: Sat, 6 Nov 2021 06:53:40 GMT
- Title: Order-Guided Disentangled Representation Learning for Ulcerative Colitis
Classification with Limited Labels
- Authors: Shota Harada, Ryoma Bise, Hideaki Hayashi, Kiyohito Tanaka, and
Seiichi Uchida
- Abstract summary: We propose a practical semi-supervised learning method for ulcerative colitis (UC) classification.
The proposed method efficiently extracts the information essential for UC classification through a disentanglement process.
Experimental results demonstrate that the proposed method outperforms several existing semi-supervised learning methods in the classification task.
- Score: 8.302375673936387
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ulcerative colitis (UC) classification, which is an important task for
endoscopic diagnosis, involves two main difficulties. First, endoscopic images
with UC annotations (positive or negative) are usually limited. Second, the
images vary greatly in appearance depending on their location in the colon. In
particular, the second difficulty prevents us from using existing
semi-supervised learning techniques, which are the common remedy for the first
difficulty. In this paper, we propose a practical semi-supervised learning
method for UC classification that newly exploits two additional cues, the
location in the colon (e.g., left colon) and the image capturing order, both of
which are often attached to individual images in endoscopic image sequences.
The proposed method efficiently extracts the information essential for UC
classification through a disentanglement process guided by these cues. Experimental
results demonstrate that the proposed method outperforms several existing
semi-supervised learning methods in the classification task, even with a small
number of annotated images.
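To make the idea concrete, here is a minimal PyTorch-style sketch of how such a disentanglement model could be structured: an encoder splits each image embedding into a UC-related part and a location-related part, the cheaply available location labels supervise the auxiliary branch, and the capturing order regularizes the UC-related subspace. The network sizes, loss weights, and the exact form of the order-guided term are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of order-guided disentangled learning for UC classification.
# Architecture details and the order-guided loss are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DisentangledEncoder(nn.Module):
    """Splits an image embedding into a UC-related part and a location-related part."""

    def __init__(self, feat_dim=128, num_locations=4):
        super().__init__()
        # Any CNN backbone could be used; a tiny one keeps the sketch self-contained.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.to_uc = nn.Linear(64, feat_dim)                # UC-related subspace
        self.to_loc = nn.Linear(64, feat_dim)               # location-related subspace
        self.uc_head = nn.Linear(feat_dim, 2)               # UC positive / negative
        self.loc_head = nn.Linear(feat_dim, num_locations)  # colon location

    def forward(self, x):
        h = self.backbone(x)
        z_uc, z_loc = self.to_uc(h), self.to_loc(h)
        return z_uc, z_loc, self.uc_head(z_uc), self.loc_head(z_loc)


def order_guided_loss(z_uc_seq):
    """Keep frames that are adjacent in capturing order close in the UC-related
    subspace -- one plausible reading of 'order-guided', assumed for illustration."""
    diffs = z_uc_seq[1:] - z_uc_seq[:-1]
    return diffs.pow(2).sum(dim=1).mean()


# Toy forward/backward pass on a sequence of 8 frames from one endoscopic video.
model = DisentangledEncoder()
frames = torch.randn(8, 3, 64, 64)
uc_labels = torch.randint(0, 2, (8,))     # available only for the labeled subset
loc_labels = torch.randint(0, 4, (8,))    # location is attached to every image

z_uc, z_loc, uc_logits, loc_logits = model(frames)
loss = (
    F.cross_entropy(uc_logits, uc_labels)        # supervised UC loss
    + F.cross_entropy(loc_logits, loc_labels)    # location supervision (cheap label)
    + 0.1 * order_guided_loss(z_uc)              # order-guided regularizer (assumed form)
)
loss.backward()
```

In a semi-supervised setting, only the location and order terms would be applied to unlabeled images, while the UC cross-entropy term is restricted to the annotated subset.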
Related papers
- Ordinal Multiple-instance Learning for Ulcerative Colitis Severity Estimation with Selective Aggregated Transformer [4.2875024530011085]
We propose a patient-level severity estimation method based on a transformer with selective aggregator tokens.
Our method effectively aggregates features of the severe parts from the set of images captured for each patient.
Experiments demonstrate the effectiveness of the proposed method on two datasets compared with the state-of-the-art MIL methods.
arXiv Detail & Related papers (2024-11-22T06:11:35Z) - Classification of Breast Cancer Histopathology Images using a Modified Supervised Contrastive Learning Method [4.303291247305105]
We improve the supervised contrastive learning method by leveraging both image-level labels and domain-specific augmentations to enhance model robustness.
We evaluate our method on the BreakHis dataset, which consists of breast cancer histopathology images.
The resulting improvement corresponds to an absolute accuracy of 93.63%, highlighting the effectiveness of our approach in leveraging data properties to learn a more appropriate representation space.
arXiv Detail & Related papers (2024-05-06T17:06:11Z) - Towards Robust Natural-Looking Mammography Lesion Synthesis on
Ipsilateral Dual-Views Breast Cancer Analysis [1.1098503592431275]
Two major issues in mammogram classification are leveraging multi-view mammographic information and handling class imbalance.
We propose a simple but novel method for enhancing the examined view (main view) by leveraging low-level feature information from the auxiliary view.
We also propose a simple but novel malignant mammogram synthesis framework for up-sampling minority-class samples.
arXiv Detail & Related papers (2023-09-07T06:33:30Z) - GraVIS: Grouping Augmented Views from Independent Sources for
Dermatology Analysis [52.04899592688968]
We propose GraVIS, which is specifically optimized for learning self-supervised features from dermatology images.
GraVIS significantly outperforms its transfer learning and self-supervised learning counterparts in both lesion segmentation and disease classification tasks.
arXiv Detail & Related papers (2023-01-11T11:38:37Z) - Learning Discriminative Representation via Metric Learning for
Imbalanced Medical Image Classification [52.94051907952536]
We propose embedding metric learning into the first stage of the two-stage framework, specifically to help the feature extractor learn more discriminative feature representations.
Experiments, mainly on three medical image datasets, show that the proposed approach consistently outperforms existing one-stage and two-stage approaches.
arXiv Detail & Related papers (2022-07-14T14:57:01Z) - Stain based contrastive co-training for histopathological image analysis [61.87751502143719]
We propose a novel semi-supervised learning approach for classification of histopathology images.
We employ strong supervision with patch-level annotations combined with a novel co-training loss to create a semi-supervised learning framework.
We evaluate our approach in clear cell renal cell and prostate carcinomas, and demonstrate improvement over state-of-the-art semi-supervised learning methods.
arXiv Detail & Related papers (2022-06-24T22:25:31Z) - Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images to medical image enhancement and use the enhanced results to cope with the low-contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z) - A Semi-Supervised Classification Method of Apicomplexan Parasites and
Host Cell Using Contrastive Learning Strategy [6.677163460963862]
This paper proposes a semi-supervised classification method for microscopic images of three kinds of apicomplexan parasites and of non-infected host cells.
It uses a small amount of labeled data and a large amount of unlabeled data for training (a generic sketch of this labeled-plus-unlabeled recipe is given after this list).
In the case where only 1% of microscopic images are labeled, the proposed method reaches an accuracy of 94.90% in a generalized testing set.
arXiv Detail & Related papers (2021-04-14T02:34:50Z) - Collaborative Unsupervised Domain Adaptation for Medical Image Diagnosis [102.40869566439514]
We seek to exploit rich labeled data from relevant domains to help the learning in the target task via Unsupervised Domain Adaptation (UDA).
Unlike most UDA methods, which rely on clean labeled data or assume samples are equally transferable, we propose a Collaborative Unsupervised Domain Adaptation algorithm.
We theoretically analyze the generalization performance of the proposed method, and also empirically evaluate it on both medical and general images.
arXiv Detail & Related papers (2020-07-05T11:49:17Z) - Deep Q-Network-Driven Catheter Segmentation in 3D US by Hybrid
Constrained Semi-Supervised Learning and Dual-UNet [74.22397862400177]
We propose a novel catheter segmentation approach that requires fewer annotations than supervised learning methods.
Our scheme uses deep Q-learning as a pre-localization step, which avoids voxel-level annotation.
With the detected catheter, patch-based Dual-UNet is applied to segment the catheter in 3D volumetric data.
arXiv Detail & Related papers (2020-06-25T21:10:04Z)