A Semi-Supervised Classification Method of Apicomplexan Parasites and
Host Cell Using Contrastive Learning Strategy
- URL: http://arxiv.org/abs/2104.06593v1
- Date: Wed, 14 Apr 2021 02:34:50 GMT
- Title: A Semi-Supervised Classification Method of Apicomplexan Parasites and
Host Cell Using Contrastive Learning Strategy
- Authors: Yanni Ren and Hangyu Deng and Hao Jiang and Jinglu Hu
- Abstract summary: This paper proposes a semi-supervised classification method for microscopic images of three kinds of apicomplexan parasites and non-infected host cells.
It uses a small number of labeled data and a large number of unlabeled data for training.
With only 1% of the microscopic images labeled, the proposed method reaches an accuracy of 94.90% on a generalized testing set.
- Score: 6.677163460963862
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A common shortfall of supervised learning for medical imaging is its heavy
need for human annotations, which are often expensive and time-consuming to
obtain. This paper proposes a semi-supervised classification method for
microscopic images of three kinds of apicomplexan parasites and non-infected
host cells, which uses a small number of labeled data and a large number of
unlabeled data for training. There are two challenges in microscopic image
recognition. The first is that the salient structures of microscopic images are
fuzzier and more intricate than those of natural images at real-world scale.
The second is that insignificant textures, such as background staining,
lightness, and contrast level, vary greatly across samples from different
clinical scenarios. To address these challenges, we aim to learn a
distinguishable and appearance-invariant representation via a contrastive
learning strategy. On one hand, macroscopic images, which share similar
morphological shape characteristics, are introduced as contrast for structure
enhancement. On the other hand, different appearance transformations, including
color distortion and filtering, are utilized as contrast for texture
elimination. In the case where only 1% of the microscopic images are labeled,
the proposed method reaches an accuracy of 94.90% on a generalized testing set.
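The listing names contrastive learning as the core strategy but gives no implementation details. As a point of reference, a minimal numpy sketch of the NT-Xent (normalized temperature-scaled cross-entropy) objective commonly used in SimCLR-style contrastive learning is shown below; the function name and shapes are illustrative, not the paper's actual code.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss over two augmented views of the same batch.

    z1, z2: (N, D) embeddings of two views (e.g. color-distorted and
    filtered versions) of the same N images. Each view's positive is
    its counterpart; all other 2N - 2 embeddings act as negatives.
    """
    z = np.concatenate([z1, z2], axis=0)               # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize
    sim = z @ z.T / temperature                        # scaled cosine similarities
    n = z1.shape[0]
    # Positive pair indices: row i pairs with i+n, and row i+n with i.
    pos_idx = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    # Exclude self-similarity from the softmax denominator.
    np.fill_diagonal(sim, -np.inf)
    log_prob = sim - np.log(np.sum(np.exp(sim), axis=1, keepdims=True))
    return -np.mean(log_prob[np.arange(2 * n), pos_idx])
```

Intuitively, minimizing this loss pulls the two views of each image together while pushing apart views of different images, which matches the abstract's goal of an appearance-invariant representation.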
Related papers
- DiffKillR: Killing and Recreating Diffeomorphisms for Cell Annotation in Dense Microscopy Images [105.46086313858062]
We introduce DiffKillR, a novel framework that reframes cell annotation as the combination of archetype matching and image registration tasks.
We will discuss the theoretical properties of DiffKillR and validate it on three microscopy tasks, demonstrating its advantages over existing supervised, semi-supervised, and unsupervised methods.
arXiv Detail & Related papers (2024-10-04T00:38:29Z)
- Siamese Networks with Soft Labels for Unsupervised Lesion Detection and Patch Pretraining on Screening Mammograms [7.917505566910886]
We propose an alternative method that uses contralateral mammograms to train a neural network to encode similar embeddings.
Our method demonstrates superior performance in mammogram patch classification compared to existing self-supervised learning methods.
arXiv Detail & Related papers (2024-01-10T22:27:37Z)
- BiomedJourney: Counterfactual Biomedical Image Generation by Instruction-Learning from Multimodal Patient Journeys [99.7082441544384]
We present BiomedJourney, a novel method for counterfactual biomedical image generation by instruction-learning.
We use GPT-4 to process the corresponding imaging reports and generate a natural language description of disease progression.
The resulting triples are then used to train a latent diffusion model for counterfactual biomedical image generation.
arXiv Detail & Related papers (2023-10-16T18:59:31Z)
- GraVIS: Grouping Augmented Views from Independent Sources for Dermatology Analysis [52.04899592688968]
We propose GraVIS, which is specifically optimized for learning self-supervised features from dermatology images.
GraVIS significantly outperforms its transfer learning and self-supervised learning counterparts in both lesion segmentation and disease classification tasks.
arXiv Detail & Related papers (2023-01-11T11:38:37Z)
- Stain-invariant self supervised learning for histopathology image analysis [74.98663573628743]
We present a self-supervised algorithm for several classification tasks within hematoxylin and eosin stained images of breast cancer.
Our method achieves the state-of-the-art performance on several publicly available breast cancer datasets.
arXiv Detail & Related papers (2022-11-14T18:16:36Z)
- Mine yOur owN Anatomy: Revisiting Medical Image Segmentation with Extremely Limited Labels [54.58539616385138]
We introduce a novel semi-supervised 2D medical image segmentation framework termed Mine yOur owN Anatomy (MONA).
First, prior work argues that every pixel equally matters to the model training; we observe empirically that this alone is unlikely to define meaningful anatomical features.
Second, we construct a set of objectives that encourage the model to be capable of decomposing medical images into a collection of anatomical features.
arXiv Detail & Related papers (2022-09-27T15:50:31Z)
- Artifact-Tolerant Clustering-Guided Contrastive Embedding Learning for Ophthalmic Images [18.186766129476077]
We propose an artifact-tolerant unsupervised learning framework termed EyeLearn for learning representations of ophthalmic images.
EyeLearn has an artifact correction module to learn representations that can best predict artifact-free ophthalmic images.
To evaluate EyeLearn, we use the learned representations for visual field prediction and glaucoma detection using a real-world ophthalmic image dataset of glaucoma patients.
arXiv Detail & Related papers (2022-09-02T01:25:45Z)
- Magnification-independent Histopathological Image Classification with Similarity-based Multi-scale Embeddings [12.398787062519034]
We propose an approach that learns similarity-based multi-scale embeddings for magnification-independent image classification.
In particular, a pair loss and a triplet loss are leveraged to learn similarity-based embeddings from image pairs or image triplets.
The SMSE achieves the best performance on the BreakHis benchmark with an improvement ranging from 5% to 18% compared to previous methods.
arXiv Detail & Related papers (2021-07-02T13:18:45Z)
- Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z)
- Microscopic fine-grained instance classification through deep attention [7.50282814989294]
Fine-grained classification of microscopic image data with limited samples is an open problem in computer vision and biomedical imaging.
We propose a simple yet effective deep network that performs two tasks simultaneously in an end-to-end manner.
The result is a robust but lightweight end-to-end trainable deep network that yields state-of-the-art results.
arXiv Detail & Related papers (2020-10-06T15:29:58Z)
- Melanoma Detection using Adversarial Training and Deep Transfer Learning [6.22964000148682]
We propose a two-stage framework for automatic classification of skin lesion images.
In the first stage, we leverage the inter-class variation of the data distribution for the task of conditional image synthesis.
In the second stage, we train a deep convolutional neural network for skin lesion classification.
arXiv Detail & Related papers (2020-04-14T22:46:20Z)
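Several entries in the list above rely on metric-learning objectives, for example the pair and triplet losses leveraged by SMSE for similarity-based embeddings. As a hedged sketch (not any paper's actual code), the standard margin-based triplet loss can be written as:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Margin-based triplet loss over batches of embeddings.

    Pulls each anchor toward its positive (same class / same image at
    another magnification) and pushes it at least `margin` farther
    from its negative (different class), using squared L2 distances.
    """
    d_pos = np.sum((anchor - positive) ** 2, axis=1)
    d_neg = np.sum((anchor - negative) ** 2, axis=1)
    return np.mean(np.maximum(d_pos - d_neg + margin, 0.0))
```

The loss is zero once every negative is already `margin` farther from the anchor than its positive, so only violating triplets contribute gradient.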
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.