Semi-supervised Medical Image Classification with Global Latent Mixing
- URL: http://arxiv.org/abs/2005.11217v1
- Date: Fri, 22 May 2020 14:49:13 GMT
- Title: Semi-supervised Medical Image Classification with Global Latent Mixing
- Authors: Prashnna Kumar Gyawali, Sandesh Ghimire, Pradeep Bajracharya, Zhiyuan
Li, Linwei Wang
- Abstract summary: Computer-aided diagnosis via deep learning relies on large-scale annotated data sets.
Semi-supervised learning mitigates this challenge by leveraging unlabeled data.
We present a novel SSL approach that trains the neural network on linear mixing of labeled and unlabeled data.
- Score: 8.330337646455957
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Computer-aided diagnosis via deep learning relies on large-scale annotated
data sets, which can be costly when involving expert knowledge. Semi-supervised
learning (SSL) mitigates this challenge by leveraging unlabeled data. One
effective SSL approach is to regularize the local smoothness of neural
functions via perturbations around single data points. In this work, we argue
that regularizing the global smoothness of neural functions by filling the void
in between data points can further improve SSL. We present a novel SSL approach
that trains the neural network on linear mixing of labeled and unlabeled data,
at both the input and latent space in order to regularize different portions of
the network. We evaluated the presented model on two distinct medical image
data sets for semi-supervised classification of thoracic disease and skin
lesion, demonstrating its improved performance over SSL with local
perturbations and SSL with global mixing but at the input space only. Our code
is available at https://github.com/Prasanna1991/LatentMixing.
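The mixing idea from the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation (see their repository for that): it assumes a network split into a hypothetical `encoder` and `head` so that the same mixup operation can be applied either to raw inputs or to latent representations, with a single Beta-distributed coefficient mixing both examples and their (soft) labels.

```python
import numpy as np

def mixup(x1, x2, y1, y2, alpha=1.0, rng=None):
    """Linearly mix two batches and their labels with lam ~ Beta(alpha, alpha)."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y, lam

# Toy two-layer network split into encoder and head (illustrative only).
def encoder(x, W1):
    return np.tanh(x @ W1)                      # latent representation

def head(h, W2):
    logits = h @ W2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)     # softmax probabilities

def latent_mix_forward(x_a, x_b, y_a, y_b, W1, W2, alpha=1.0, rng=None):
    """Global mixing in the latent space: encode both batches, then mix."""
    h_a, h_b = encoder(x_a, W1), encoder(x_b, W1)
    h_mix, y_mix, _ = mixup(h_a, h_b, y_a, y_b, alpha, rng)
    return head(h_mix, W2), y_mix
```

In the semi-supervised setting, one of the two batches would carry pseudo-labels for unlabeled data; applying `mixup` at the input regularizes the whole network, while applying it after `encoder` regularizes only the head, which is why the paper mixes at both levels.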
Related papers
- A Closer Look at Benchmarking Self-Supervised Pre-training with Image Classification [51.35500308126506]
Self-supervised learning (SSL) is a machine learning approach where the data itself provides supervision, eliminating the need for external labels.
We study how classification-based evaluation protocols for SSL correlate and how well they predict downstream performance on different dataset types.
arXiv Detail & Related papers (2024-07-16T23:17:36Z)
- Self-supervised TransUNet for Ultrasound regional segmentation of the distal radius in children [0.6291443816903801]
This paper investigates the feasibility of deploying the Masked Autoencoder for SSL (SSL-MAE) of TransUNet, for segmenting bony regions from children's wrist ultrasound scans.
arXiv Detail & Related papers (2023-09-18T05:23:33Z)
- CroSSL: Cross-modal Self-Supervised Learning for Time-series through Latent Masking [11.616031590118014]
CroSSL allows for handling missing modalities and end-to-end cross-modal learning.
We evaluate our method on a wide range of data, including motion sensors.
arXiv Detail & Related papers (2023-07-31T17:10:10Z)
- Does Decentralized Learning with Non-IID Unlabeled Data Benefit from Self Supervision? [51.00034621304361]
We study decentralized learning with unlabeled data through the lens of self-supervised learning (SSL).
We study the effectiveness of contrastive learning algorithms under decentralized learning settings.
arXiv Detail & Related papers (2022-10-20T01:32:41Z)
- OpenLDN: Learning to Discover Novel Classes for Open-World Semi-Supervised Learning [110.40285771431687]
Semi-supervised learning (SSL) is one of the dominant approaches to address the annotation bottleneck of supervised learning.
Recent SSL methods can effectively leverage a large repository of unlabeled data to improve performance while relying on a small set of labeled data.
This work introduces OpenLDN that utilizes a pairwise similarity loss to discover novel classes.
arXiv Detail & Related papers (2022-07-05T18:51:05Z)
- Collaborative Intelligence Orchestration: Inconsistency-Based Fusion of Semi-Supervised Learning and Active Learning [60.26659373318915]
Active learning (AL) and semi-supervised learning (SSL) are two effective, but often isolated, means to alleviate the data-hungry problem.
We propose an innovative Inconsistency-based virtual aDvErsarial algorithm to further investigate SSL-AL's potential superiority.
Two real-world case studies visualize the practical industrial value of applying and deploying the proposed data sampling algorithm.
arXiv Detail & Related papers (2022-06-07T13:28:43Z)
- Self-supervised Learning is More Robust to Dataset Imbalance [65.84339596595383]
We investigate self-supervised learning under dataset imbalance.
Off-the-shelf self-supervised representations are already more robust to class imbalance than supervised representations.
We devise a re-weighted regularization technique that consistently improves the SSL representation quality on imbalanced datasets.
arXiv Detail & Related papers (2021-10-11T06:29:56Z)
- Enhancing Mixup-based Semi-Supervised Learning with Explicit Lipschitz Regularization [5.848916882288327]
Semi-supervised learning (SSL) mitigates the challenge by exploiting the behavior of the neural function on large unlabeled data.
A successful example is the adoption of mixup strategy in SSL that enforces the global smoothness of the neural function.
We propose that mixup improves the smoothness of the neural function by bounding the Lipschitz constant of the gradient function of the neural networks.
arXiv Detail & Related papers (2020-09-23T23:19:19Z)
- Information Bottleneck Constrained Latent Bidirectional Embedding for Zero-Shot Learning [59.58381904522967]
We propose a novel embedding based generative model with a tight visual-semantic coupling constraint.
We learn a unified latent space that calibrates the embedded parametric distributions of both visual and semantic spaces.
Our method can be easily extended to transductive ZSL setting by generating labels for unseen images.
arXiv Detail & Related papers (2020-09-16T03:54:12Z)
- Semi-supervised learning objectives as log-likelihoods in a generative model of data curation [32.45282187405337]
We formulate SSL objectives as a log-likelihood in a generative model of data curation.
We give a proof-of-principle for Bayesian SSL on toy data.
arXiv Detail & Related papers (2020-08-13T13:50:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.