Contrastive Learning of Single-Cell Phenotypic Representations for
Treatment Classification
- URL: http://arxiv.org/abs/2103.16670v1
- Date: Tue, 30 Mar 2021 20:29:04 GMT
- Title: Contrastive Learning of Single-Cell Phenotypic Representations for
Treatment Classification
- Authors: Alexis Perakis, Ali Gorji, Samriddhi Jain, Krishna Chaitanya, Simone
Rizza, Ender Konukoglu
- Abstract summary: Drug development efforts typically analyse thousands of cell images to screen for potential treatments.
We leverage a contrastive learning framework to learn appropriate representations from single-cell fluorescent microscopy images.
We observe an improvement of 10% in NSCB accuracy and 11% in NSC-NSCB drop over the previously best unsupervised method.
- Score: 6.4265933507484005
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning robust representations to discriminate cell phenotypes based on
microscopy images is important for drug discovery. Drug development efforts
typically analyse thousands of cell images to screen for potential treatments.
Early works focus on creating hand-engineered features from these images or on
learning such features with deep neural networks in a fully- or
weakly-supervised framework. Both require prior knowledge or labelled
datasets. Therefore,
subsequent works propose unsupervised approaches based on generative models to
learn these representations. Recently, representations learned with
self-supervised contrastive loss-based methods have yielded state-of-the-art
results on various imaging tasks compared to earlier unsupervised approaches.
In this work, we leverage a contrastive learning framework to learn appropriate
representations from single-cell fluorescent microscopy images for the task of
Mechanism-of-Action classification. The proposed work is evaluated on the
annotated BBBC021 dataset, and we obtain state-of-the-art results in NSC, NSCB
and drop metrics for an unsupervised approach. We observe an improvement of 10%
in NSCB accuracy and 11% in NSC-NSCB drop over the previously best unsupervised
method. Moreover, the performance of our unsupervised approach ties with the
best supervised approach. Additionally, we observe that our framework performs
well even without post-processing, unlike earlier methods. With this, we
conclude that one can learn robust cell representations with contrastive
learning.
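The contrastive objective used in frameworks of this kind is typically an NT-Xent (SimCLR-style) loss over two augmented views of the same image. A minimal NumPy sketch, assuming (N, d) embedding matrices for the two views; the function name, temperature value, and use of NumPy rather than the authors' actual framework are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss sketch.

    z1, z2: (N, d) embeddings of two augmented views of the same N images.
    Each view's positive is its counterpart in the other view; all other
    2N - 2 embeddings in the batch act as negatives.
    """
    N = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)                # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # L2-normalize rows
    sim = z @ z.T / temperature                         # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                      # exclude self-similarity
    # index of the positive partner: i <-> i + N
    pos = np.concatenate([np.arange(N, 2 * N), np.arange(N)])
    # softmax cross-entropy against the positive, averaged over the batch
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * N), pos].mean()
```

Embeddings of matched views should yield a lower loss than embeddings of unrelated images, which is the property the learned representations exploit downstream.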
Related papers
- CUCL: Codebook for Unsupervised Continual Learning [129.91731617718781]
The focus of this study is on Unsupervised Continual Learning (UCL), as it presents an alternative to Supervised Continual Learning.
We propose a method named Codebook for Unsupervised Continual Learning (CUCL) which promotes the model to learn discriminative features to complete the class boundary.
Our method significantly boosts the performance of both supervised and unsupervised methods.
arXiv Detail & Related papers (2023-11-25T03:08:50Z)
- Nucleus-aware Self-supervised Pretraining Using Unpaired Image-to-image Translation for Histopathology Images [3.8391355786589805]
We propose a novel nucleus-aware self-supervised pretraining framework for histopathology images.
The framework aims to capture the nuclear morphology and distribution information through unpaired image-to-image translation.
The experiments on 7 datasets show that the proposed pretraining method outperforms supervised ones on Kather classification, multiple instance learning, and 5 dense-prediction tasks.
arXiv Detail & Related papers (2023-09-14T02:31:18Z)
- Multi-Scale Cross Contrastive Learning for Semi-Supervised Medical Image Segmentation [14.536384387956527]
We develop a novel Multi-Scale Cross Supervised Contrastive Learning framework to segment structures in medical images.
Our approach contrasts multi-scale features based on ground-truth and cross-predicted labels, in order to extract robust feature representations.
It outperforms state-of-the-art semi-supervised methods by more than 3.0% in Dice.
arXiv Detail & Related papers (2023-06-25T16:55:32Z)
- Localized Region Contrast for Enhancing Self-Supervised Learning in Medical Image Segmentation [27.82940072548603]
We propose a novel contrastive learning framework that integrates Localized Region Contrast (LRC) to enhance existing self-supervised pre-training methods for medical image segmentation.
Our approach involves identifying superpixels with Felzenszwalb's algorithm and performing local contrastive learning using a novel contrastive sampling loss.
arXiv Detail & Related papers (2023-04-06T22:43:13Z)
- Rethinking Semi-Supervised Medical Image Segmentation: A Variance-Reduction Perspective [51.70661197256033]
We propose ARCO, a semi-supervised contrastive learning framework with stratified group theory for medical image segmentation.
We first propose building ARCO through the concept of variance-reduced estimation and show that certain variance-reduction techniques are particularly beneficial in pixel/voxel-level segmentation tasks.
We experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings.
arXiv Detail & Related papers (2023-02-03T13:50:25Z)
- Few-Shot Classification of Skin Lesions from Dermoscopic Images by Meta-Learning Representative Embeddings [1.957558771641347]
Annotated images and ground truth for diagnosis of rare and novel diseases are scarce.
Few-shot learning, and meta-learning in general, aim to overcome these issues by performing well in low-data regimes.
This paper focuses on improving meta-learning for the classification of dermoscopic images.
arXiv Detail & Related papers (2022-10-30T21:27:15Z)
- Intelligent Masking: Deep Q-Learning for Context Encoding in Medical Image Analysis [48.02011627390706]
We develop a novel self-supervised approach that occludes targeted regions to improve the pre-training procedure.
We show that training the agent against the prediction model can significantly improve the semantic features extracted for downstream classification tasks.
arXiv Detail & Related papers (2022-03-25T19:05:06Z)
- Preservational Learning Improves Self-supervised Medical Image Models by Reconstructing Diverse Contexts [58.53111240114021]
We present Preservational Contrastive Representation Learning (PCRL) for learning self-supervised medical representations.
PCRL provides very competitive results under the pretraining-finetuning protocol, outperforming both self-supervised and supervised counterparts in 5 classification/segmentation tasks substantially.
arXiv Detail & Related papers (2021-09-09T16:05:55Z)
- On the Robustness of Pretraining and Self-Supervision for a Deep Learning-based Analysis of Diabetic Retinopathy [70.71457102672545]
We compare the impact of different training procedures for diabetic retinopathy grading.
We investigate different aspects such as quantitative performance, statistics of the learned feature representations, interpretability and robustness to image distortions.
Our results indicate that models from ImageNet pretraining report a significant increase in performance, generalization and robustness to image distortions.
arXiv Detail & Related papers (2021-06-25T08:32:45Z)
- Retinopathy of Prematurity Stage Diagnosis Using Object Segmentation and Convolutional Neural Networks [68.96150598294072]
Retinopathy of Prematurity (ROP) is an eye disorder primarily affecting premature infants with lower weights.
It causes proliferation of vessels in the retina and could result in vision loss and, eventually, retinal detachment, leading to blindness.
In recent years, there has been a significant effort to automate the diagnosis using deep learning.
This paper builds upon the success of previous models and develops a novel architecture, which combines object segmentation and convolutional neural networks (CNNs).
Our proposed system first trains an object segmentation model to identify the demarcation line at a pixel level and adds the resulting mask as an additional "color" channel in
arXiv Detail & Related papers (2020-04-03T14:07:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.