Contrastive Learning for Mitochondria Segmentation
- URL: http://arxiv.org/abs/2109.12363v1
- Date: Sat, 25 Sep 2021 13:15:26 GMT
- Title: Contrastive Learning for Mitochondria Segmentation
- Authors: Zhili Li, Xuejin Chen, Jie Zhao and Zhiwei Xiong
- Abstract summary: We propose a novel contrastive learning framework to learn a better feature representation from hard examples to improve mitochondrial segmentation.
We demonstrate the effectiveness of our method on the MitoEM and FIB-SEM datasets, achieving results better than or on par with the state of the art.
- Score: 42.800475494933146
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Mitochondria segmentation in electron microscopy images is essential in
neuroscience. However, due to the image degradation during the imaging process,
the large variety of mitochondrial structures, as well as the presence of
noise, artifacts and other sub-cellular structures, mitochondria segmentation
is very challenging. In this paper, we propose a novel and effective
contrastive learning framework to learn a better feature representation from
hard examples to improve segmentation. Specifically, we adopt a point sampling
strategy to pick out representative pixels from hard examples in the training
phase. Based on these sampled pixels, we introduce a pixel-wise label-based
contrastive loss which consists of a similarity loss term and a consistency
loss term. The similarity term can increase the similarity of pixels from the
same class and the separability of pixels from different classes in feature
space, while the consistency term is able to enhance the sensitivity of the 3D
model to changes in image content from frame to frame. We demonstrate the
effectiveness of our method on the MitoEM and FIB-SEM datasets, achieving
results that are better than or on par with the state of the art.
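The abstract describes a pixel-wise label-based contrastive loss with two terms: a similarity term over sampled pixels and a consistency term across adjacent frames. Below is a minimal NumPy sketch of that idea, assuming cosine similarity for the similarity term and mean-squared feature drift for the consistency term; the function names, the hinge formulation, and the binary-label setup are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def similarity_loss(features, labels):
    """Similarity term (sketch): pull sampled pixel embeddings of the same
    class together and push different-class embeddings apart.

    features: (N, D) embeddings of pixels sampled from hard examples.
    labels:   (N,) class labels (e.g. mitochondria vs. background).
    """
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T                                  # pairwise cosine similarity
    same = labels[:, None] == labels[None, :]      # same-class pair mask
    np.fill_diagonal(same, False)                  # drop self-pairs
    diff = ~same
    np.fill_diagonal(diff, False)
    # low similarity within a class and positive similarity across
    # classes are both penalised
    pull = (1.0 - sim)[same].mean() if same.any() else 0.0
    push = np.maximum(sim[diff], 0.0).mean() if diff.any() else 0.0
    return pull + push

def consistency_loss(feat_t, feat_t1):
    """Consistency term (sketch): penalise feature drift between the
    embeddings of two consecutive frames of the 3D volume."""
    return float(np.mean((feat_t - feat_t1) ** 2))
```

In a real training loop these two terms would be weighted and added to the segmentation loss; the paper's exact weighting and sampling strategy are not reproduced here.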
Related papers
- Local Manifold Learning for No-Reference Image Quality Assessment [68.9577503732292]
We propose an innovative framework that integrates local manifold learning with contrastive learning for No-Reference Image Quality Assessment (NR-IQA)
Our approach outperforms state-of-the-art methods on 7 standard datasets.
arXiv Detail & Related papers (2024-06-27T15:14:23Z)
- Mine yOur owN Anatomy: Revisiting Medical Image Segmentation with Extremely Limited Labels [54.58539616385138]
We introduce a novel semi-supervised 2D medical image segmentation framework termed Mine yOur owN Anatomy (MONA)
First, prior work argues that every pixel equally matters to the model training; we observe empirically that this alone is unlikely to define meaningful anatomical features.
Second, we construct a set of objectives that encourage the model to be capable of decomposing medical images into a collection of anatomical features.
arXiv Detail & Related papers (2022-09-27T15:50:31Z)
- Contrastive Image Synthesis and Self-supervised Feature Adaptation for Cross-Modality Biomedical Image Segmentation [8.772764547425291]
CISFA builds on image domain translation and unsupervised feature adaptation for cross-modality biomedical image segmentation.
We use a one-sided generative model and add a weighted patch-wise contrastive loss between sampled patches of the input image and the corresponding synthetic image.
We evaluate our methods on segmentation tasks containing CT and MRI images for abdominal cavities and whole hearts.
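The weighted patch-wise contrastive loss mentioned above pairs each patch of the input image with the corresponding patch of its synthetic translation. A minimal NumPy sketch of that pairing as an InfoNCE-style objective follows; the temperature value, the use of cosine similarity, and the `weights` argument are assumptions for illustration, not the CISFA implementation.

```python
import numpy as np

def patch_nce_loss(src_patches, syn_patches, tau=0.07, weights=None):
    """Patch-wise contrastive loss (sketch): patch i of the synthetic image
    is the positive for patch i of the input; all other patches in the
    batch serve as negatives.

    src_patches, syn_patches: (N, D) embeddings of N corresponding patches.
    weights: optional (N,) per-patch weights for the weighted variant.
    """
    a = src_patches / np.linalg.norm(src_patches, axis=1, keepdims=True)
    b = syn_patches / np.linalg.norm(syn_patches, axis=1, keepdims=True)
    logits = (a @ b.T) / tau                          # (N, N) similarity logits
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    per_patch = -np.diag(log_prob)    # cross-entropy with target index i == j
    if weights is None:
        weights = np.ones(len(per_patch))
    return float((weights * per_patch).sum() / weights.sum())
```

Correctly paired patches yield a lower loss than mismatched ones, which is what drives the translation network to preserve patch content.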
arXiv Detail & Related papers (2022-07-27T01:49:26Z)
- Unsupervised Domain Adaptation with Contrastive Learning for OCT Segmentation [49.59567529191423]
We propose a novel semi-supervised learning framework for segmentation of volumetric images from new unlabeled domains.
We jointly use supervised and contrastive learning, and introduce a contrastive pairing scheme that leverages the similarity between nearby slices in 3D.
arXiv Detail & Related papers (2022-03-07T19:02:26Z)
- Learning of Inter-Label Geometric Relationships Using Self-Supervised Learning: Application To Gleason Grade Segmentation [4.898744396854313]
We propose a method to synthesize PCa histopathology images by learning the geometric relationship between different disease labels.
We use a weakly supervised segmentation approach that uses Gleason score to segment the diseased regions.
The resulting segmentation map is used to train a Shape Restoration Network (ShaRe-Net) to predict missing mask segments.
arXiv Detail & Related papers (2021-10-01T13:47:07Z)
- Magnification-independent Histopathological Image Classification with Similarity-based Multi-scale Embeddings [12.398787062519034]
We propose an approach that learns similarity-based multi-scale embeddings for magnification-independent image classification.
In particular, a pair loss and a triplet loss are leveraged to learn similarity-based embeddings from image pairs or image triplets.
The SMSE achieves the best performance on the BreakHis benchmark with an improvement ranging from 5% to 18% compared to previous methods.
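The pair loss and triplet loss leveraged above are standard metric-learning objectives. The following NumPy sketch shows their usual form; the margin values and Euclidean distance are conventional defaults, not parameters taken from the SMSE paper.

```python
import numpy as np

def pair_loss(f1, f2, same, margin=1.0):
    """Contrastive pair loss (sketch): pull embeddings of a same-class image
    pair together; push a different-class pair at least `margin` apart."""
    d = np.linalg.norm(f1 - f2)
    return d ** 2 if same else max(0.0, margin - d) ** 2

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss (sketch): the anchor should be closer to the positive
    than to the negative by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)
```

Trained with such losses, images of the same class land close together in embedding space regardless of magnification, which is what makes the classifier magnification-independent.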
arXiv Detail & Related papers (2021-07-02T13:18:45Z)
- CT Image Synthesis Using Weakly Supervised Segmentation and Geometric Inter-Label Relations For COVID Image Analysis [4.898744396854313]
We propose improvements over previous GAN-based medical image synthesis methods by learning the relationship between different anatomical labels.
We use the synthetic images from our method to train networks for segmenting COVID-19 infected areas from lung CT images.
arXiv Detail & Related papers (2021-06-15T07:21:24Z)
- Retinal Image Segmentation with a Structure-Texture Demixing Network [62.69128827622726]
The complex structure and texture information are mixed in a retinal image, and distinguishing the information is difficult.
Existing methods handle texture and structure jointly, which may bias models toward recognizing textures and thus result in inferior segmentation performance.
We propose a segmentation strategy that separates structure and texture components and significantly improves segmentation performance.
arXiv Detail & Related papers (2020-07-15T12:19:03Z)
- Pathological Retinal Region Segmentation From OCT Images Using Geometric Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset having images captured from different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences.