Label-efficient Contrastive Learning-based model for nuclei detection
and classification in 3D Cardiovascular Immunofluorescent Images
- URL: http://arxiv.org/abs/2309.03744v3
- Date: Mon, 15 Jan 2024 01:49:38 GMT
- Title: Label-efficient Contrastive Learning-based model for nuclei detection
and classification in 3D Cardiovascular Immunofluorescent Images
- Authors: Nazanin Moradinasab, Rebecca A. Deaton, Laura S. Shankman, Gary K.
Owens, Donald E. Brown
- Abstract summary: Training deep learning-based methods requires a large amount of pixel-wise annotated data.
We propose the Label-efficient Contrastive learning-based (LECL) model to detect and classify various types of nuclei in 3D immunofluorescent images.
- Score: 0.8812173669205372
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, deep learning-based methods achieved promising performance in
nuclei detection and classification applications. However, training deep
learning-based methods requires a large amount of pixel-wise annotated data,
which is time-consuming and labor-intensive, especially in 3D images. An
alternative approach is to adapt weak-annotation methods, such as labeling each
nucleus with a point, but this method does not extend from 2D histopathology
images (for which it was originally developed) to 3D immunofluorescent images.
The reason is that 3D images contain multiple channels (z-axis) for nuclei and
different markers separately, which makes training using point annotations
difficult. To address this challenge, we propose the Label-efficient
Contrastive learning-based (LECL) model to detect and classify various types of
nuclei in 3D immunofluorescent images. Previous methods use Maximum Intensity
Projection (MIP) to convert immunofluorescent images with multiple slices to 2D
images, which can cause signals from different z-stacks to falsely appear
associated with each other. To overcome this, we devised an Extended Maximum
Intensity Projection (EMIP) approach that addresses the issues that arise when
using MIP. Furthermore, we applied a Supervised Contrastive Learning (SCL) approach for
weakly supervised settings. We conducted experiments on cardiovascular datasets
and found that our proposed framework is effective and efficient in detecting
and classifying various types of nuclei in 3D immunofluorescent images.
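To make the projection step concrete, below is a minimal sketch in Python/NumPy of standard MIP along the z-axis, together with an illustrative per-nucleus projection in the spirit of the EMIP idea. The function names and the integer nucleus mask are assumptions for illustration only, not the authors' released implementation.

    import numpy as np

    def mip(volume):
        """Standard MIP: collapse a (Z, H, W) stack to (H, W) by taking the
        maximum intensity along z. Signals from different z-slices can be
        merged into one plane, the failure mode described in the abstract."""
        return volume.max(axis=0)

    def emip_like_projection(volume, nucleus_mask):
        """Illustrative per-nucleus projection (an assumption, not the published
        EMIP): each labeled nucleus is projected only from the z-slice where its
        signal is strongest, so unrelated depths are not mixed together.

        volume:       (Z, H, W) intensity stack
        nucleus_mask: (H, W) integer mask, 0 = background, k > 0 = nucleus id
        """
        out = np.zeros(volume.shape[1:], dtype=volume.dtype)
        for nucleus_id in np.unique(nucleus_mask):
            if nucleus_id == 0:
                continue
            region = nucleus_mask == nucleus_id
            # z-slice with the strongest summed signal inside this nucleus
            z_best = int(volume[:, region].sum(axis=1).argmax())
            out[region] = volume[z_best][region]
        return out

The point of the per-region variant is that each nucleus is collapsed only from the depth where it actually appears, so signals from unrelated z-stacks are not falsely associated, which is what plain MIP can do.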
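For the SCL component, a minimal PyTorch sketch of the generic supervised contrastive (SupCon) loss is given below as background; the paper adapts contrastive learning to a weakly supervised, point-annotated setting, so this should not be read as the authors' exact formulation.

    import torch
    import torch.nn.functional as F

    def supcon_loss(features, labels, temperature=0.1):
        """features: (N, D) embeddings, labels: (N,) integer class labels."""
        z = F.normalize(features, dim=1)
        sim = z @ z.t() / temperature                      # pairwise similarities
        self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
        sim = sim.masked_fill(self_mask, float('-inf'))    # drop self-comparisons
        log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
        # positives share a label with the anchor, excluding the anchor itself
        pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
        pos_counts = pos_mask.sum(dim=1).clamp(min=1)
        loss = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_counts
        return loss.mean()

In the weakly supervised setting, the labels would be derived from point annotations rather than dense masks; how positives are formed across channels and z-slices is specific to the paper.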
Related papers
- Fine-grained Image-to-LiDAR Contrastive Distillation with Visual Foundation Models [55.99654128127689]
Visual Foundation Models (VFMs) are used to enhance 3D representation learning.
VFMs generate semantic labels for weakly-supervised pixel-to-point contrastive distillation.
We adapt sampling probabilities of points to address imbalances in spatial distribution and category frequency.
arXiv Detail & Related papers (2024-05-23T07:48:19Z)
- Disruptive Autoencoders: Leveraging Low-level features for 3D Medical Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations.
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-07-31T17:59:42Z)
- Multi-View Vertebra Localization and Identification from CT Images [57.56509107412658]
We propose a multi-view approach to vertebra localization and identification from CT images.
We convert the 3D problem into a 2D localization and identification task on different views.
Our method can learn the multi-view global information naturally.
arXiv Detail & Related papers (2023-07-24T14:43:07Z)
- Augment and Criticize: Exploring Informative Samples for Semi-Supervised Monocular 3D Object Detection [64.65563422852568]
We improve the challenging monocular 3D object detection problem with a general semi-supervised framework.
We introduce a novel, simple, yet effective 'Augment and Criticize' framework that explores abundant informative samples from unlabeled data.
The two new detectors, dubbed 3DSeMo_DLE and 3DSeMo_FLEX, achieve state-of-the-art results with remarkable improvements of over 3.5% AP_3D/BEV (Easy) on KITTI.
arXiv Detail & Related papers (2023-03-20T16:28:15Z)
- HybridMIM: A Hybrid Masked Image Modeling Framework for 3D Medical Image Segmentation [29.15746532186427]
HybridMIM is a novel hybrid self-supervised learning method based on masked image modeling for 3D medical image segmentation.
We learn the semantic information of medical images at three levels, including: 1) partial region prediction to reconstruct key contents of the 3D image, which largely reduces the pre-training time burden.
The proposed framework is versatile, supporting both CNN and transformer encoder backbones, and it also enables pre-training of decoders for image segmentation.
arXiv Detail & Related papers (2023-03-18T04:43:12Z)
- Unsupervised Domain Adaptation with Contrastive Learning for OCT Segmentation [49.59567529191423]
We propose a novel semi-supervised learning framework for segmentation of volumetric images from new unlabeled domains.
We jointly use supervised and contrastive learning, also introducing a contrastive pairing scheme that leverages similarity between nearby slices in 3D.
arXiv Detail & Related papers (2022-03-07T19:02:26Z)
- Multiple Sclerosis Lesions Segmentation using Attention-Based CNNs in FLAIR Images [0.2578242050187029]
Multiple Sclerosis (MS) is an autoimmune and demyelinating disease that leads to lesions in the central nervous system.
To date, a multitude of multimodal automatic biomedical approaches has been used to segment lesions.
The authors propose a method that employs just one modality (the FLAIR image) to segment MS lesions accurately.
arXiv Detail & Related papers (2022-01-05T21:37:43Z)
- Dopamine Transporter SPECT Image Classification for Neurodegenerative Parkinsonism via Diffusion Maps and Machine Learning Classifiers [0.0]
This study aims to provide an automatic and robust method to classify the SPECT images into two types, namely Normal and Abnormal DaT-SPECT image groups.
The 3D images of N patients are mapped to an N-by-N pairwise distance matrix, and the training set is embedded into a low-dimensional space using diffusion maps.
The feasibility of the method is demonstrated via Parkinsonism Progression Markers Initiative (PPMI) dataset of 1097 subjects and a clinical cohort from Kaohsiung Chang Gung Memorial Hospital (KCGMH-TW) of 630 patients.
arXiv Detail & Related papers (2021-04-06T06:30:15Z)
- Attention Model Enhanced Network for Classification of Breast Cancer Image [54.83246945407568]
AMEN is formulated in a multi-branch fashion with a pixel-wise attention model and a classification submodule.
To focus more on subtle detail information, the sample image is enhanced by the pixel-wise attention map generated by the former branch.
Experiments conducted on three benchmark datasets demonstrate the superiority of the proposed method under various scenarios.
arXiv Detail & Related papers (2020-10-07T08:44:21Z)
- 3D Self-Supervised Methods for Medical Imaging [7.65168530693281]
We propose 3D versions for five different self-supervised methods, in the form of proxy tasks.
Our methods facilitate neural network feature learning from unlabeled 3D images, aiming to reduce the required cost for expert annotation.
The developed algorithms are 3D Contrastive Predictive Coding, 3D Rotation prediction, 3D Jigsaw puzzles, Relative 3D patch location, and 3D Exemplar networks.
arXiv Detail & Related papers (2020-06-06T09:56:58Z)
- Weakly Supervised PET Tumor Detection Using Class Response [3.947298454012977]
We present a novel approach to locate different types of lesions in positron emission tomography (PET) images using only a class label at the image level.
The advantage of the proposed method is that it detects the whole tumor volume in 3D, using only two 2D images of the PET image, and it shows very promising results.
arXiv Detail & Related papers (2020-03-18T17:06:08Z)