Contrastive Rendering for Ultrasound Image Segmentation
- URL: http://arxiv.org/abs/2010.04928v1
- Date: Sat, 10 Oct 2020 07:14:03 GMT
- Title: Contrastive Rendering for Ultrasound Image Segmentation
- Authors: Haoming Li, Xin Yang, Jiamin Liang, Wenlong Shi, Chaoyu Chen, Haoran
Dou, Rui Li, Rui Gao, Guangquan Zhou, Jinghui Fang, Xiaowen Liang, Ruobing
Huang, Alejandro Frangi, Zhiyi Chen, Dong Ni
- Abstract summary: The lack of sharp boundaries in US images remains an inherent challenge for segmentation.
We propose a novel and effective framework to improve boundary estimation in US images.
Our proposed method outperforms state-of-the-art methods and has the potential to be used in clinical practice.
- Score: 59.23915581079123
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ultrasound (US) image segmentation has improved markedly in the
deep learning era. However, the lack of sharp boundaries in US images
remains an inherent challenge for segmentation. Previous methods often resort
to global context, multi-scale cues or auxiliary guidance to estimate the
boundaries, but such methods struggle to reach the pixel-level learning needed for
fine-grained boundary generation. In this paper, we propose a novel and
effective framework to improve boundary estimation in US images. Our work has
three highlights. First, we propose to formulate the boundary estimation as a
rendering task, which can recognize ambiguous points (pixels/voxels) and
calibrate the boundary prediction via enriched feature representation learning.
Second, we introduce point-wise contrastive learning to enhance the similarity
of points from the same class and contrastively decrease the similarity of
points from different classes. Boundary ambiguities are therefore further
addressed. Third, both rendering and contrastive learning tasks contribute to
consistent improvement while reducing network parameters. As a
proof-of-concept, we performed validation experiments on a challenging dataset
of 86 ovarian US volumes. Results show that our proposed method outperforms
state-of-the-art methods and has the potential to be used in clinical practice.
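The released implementation is not reproduced here, but the two ingredients named in the abstract, recognizing ambiguous points and pulling same-class points together while pushing different-class points apart, can be sketched in a few lines. The following is a minimal PyTorch sketch under stated assumptions: the uncertainty-based point selection, the InfoNCE-style form of the loss, the temperature value, and names such as `select_ambiguous_points` are illustrative, not the authors' code.

```python
import torch
import torch.nn.functional as F

def select_ambiguous_points(prob, k):
    """Pick the k points whose foreground probability is closest to 0.5.

    prob: (B, H*W) coarse foreground probabilities, flattened per image.
    Returns point indices of shape (B, k).
    """
    uncertainty = -(prob - 0.5).abs()           # larger = more ambiguous
    return uncertainty.topk(k, dim=1).indices   # (B, k)

def pointwise_contrastive_loss(feat, labels, temperature=0.1):
    """InfoNCE-style loss over sampled points.

    feat:   (N, C) features of all sampled points in the batch.
    labels: (N,)   class id of each point (e.g. 0 = background, 1 = structure).
    """
    feat = F.normalize(feat, dim=1)
    n = feat.size(0)
    sim = feat @ feat.t() / temperature                        # (N, N) similarities
    self_mask = torch.eye(n, dtype=torch.bool, device=feat.device)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask

    logits = sim.masked_fill(self_mask, float('-inf'))         # drop self-similarity
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # average log-probability over same-class (positive) points for each anchor
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0).sum(1) / pos_mask.sum(1).clamp(min=1)
    has_pos = pos_mask.any(1)                                  # skip anchors with no positive
    return -pos_log_prob[has_pos].mean()
```

In a full pipeline, `select_ambiguous_points` would be applied to the coarse prediction, point features and labels would be gathered at the selected indices, and the contrastive term would presumably be trained jointly with the point re-classification (rendering) head; the joint weighting is not specified in the abstract and is not shown here.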
Related papers
- Classification of Breast Cancer Histopathology Images using a Modified Supervised Contrastive Learning Method [4.303291247305105]
We improve the supervised contrastive learning method by leveraging both image-level labels and domain-specific augmentations to enhance model robustness.
We evaluate our method on the BreakHis dataset, which consists of breast cancer histopathology images.
This improvement corresponds to 93.63% absolute accuracy, highlighting the effectiveness of our approach in leveraging properties of the data to learn a more appropriate representation space.
arXiv Detail & Related papers (2024-05-06T17:06:11Z)
- SwIPE: Efficient and Robust Medical Image Segmentation with Implicit Patch Embeddings [12.79344668998054]
We propose SwIPE (Segmentation with Implicit Patch Embeddings) to enable accurate local boundary delineation and global shape coherence.
We show that SwIPE significantly improves over recent implicit approaches and outperforms state-of-the-art discrete methods with over 10x fewer parameters.
arXiv Detail & Related papers (2023-07-23T20:55:11Z)
- Localized Region Contrast for Enhancing Self-Supervised Learning in Medical Image Segmentation [27.82940072548603]
We propose a novel contrastive learning framework that integrates Localized Region Contrast (LRC) to enhance existing self-supervised pre-training methods for medical image segmentation.
Our approach identifies superpixels with Felzenszwalb's algorithm and performs local contrastive learning using a novel contrastive sampling loss (a minimal superpixel-pooling sketch appears after this list).
arXiv Detail & Related papers (2023-04-06T22:43:13Z)
- Noisy Boundaries: Lemon or Lemonade for Semi-supervised Instance Segmentation? [59.25833574373718]
We construct a framework for semi-supervised instance segmentation by assigning pixel-level pseudo labels.
Under this framework, we point out that noisy boundaries associated with pseudo labels are double-edged.
We propose to exploit and resist them simultaneously in a unified manner.
arXiv Detail & Related papers (2022-03-25T03:06:24Z)
- Contrastive Boundary Learning for Point Cloud Segmentation [81.7289734276872]
We propose a novel contrastive boundary learning framework for point cloud segmentation.
We experimentally show that CBL consistently improves different baselines and helps them achieve compelling performance on boundaries.
arXiv Detail & Related papers (2022-03-10T10:08:09Z)
- Exploring Feature Representation Learning for Semi-supervised Medical Image Segmentation [30.608293915653558]
We present a two-stage framework for semi-supervised medical image segmentation.
The key insight is to explore feature representation learning with labeled and unlabeled (i.e., pseudo-labeled) images.
A stage-adaptive contrastive learning method is proposed, containing a boundary-aware contrastive loss.
We present an aleatoric uncertainty-aware method, namely AUA, to generate higher-quality pseudo labels.
arXiv Detail & Related papers (2021-11-22T05:06:12Z)
- Mixed Supervision Learning for Whole Slide Image Classification [88.31842052998319]
We propose a mixed supervision learning framework for super high-resolution images.
During the patch training stage, this framework can make use of coarse image-level labels to refine self-supervised learning.
A comprehensive strategy is proposed to suppress pixel-level false positives and false negatives.
arXiv Detail & Related papers (2021-07-02T09:46:06Z)
- BoundarySqueeze: Image Segmentation as Boundary Squeezing [104.43159799559464]
We propose a novel method for fine-grained high-quality image segmentation of both objects and scenes.
Inspired by dilation and erosion from morphological image processing, we treat pixel-level segmentation as squeezing the object boundary.
Our method yields large gains on COCO and Cityscapes for both instance and semantic segmentation, and outperforms the previous state-of-the-art PointRend in both accuracy and speed under the same setting.
arXiv Detail & Related papers (2021-05-25T04:58:51Z)
- Distilling Localization for Self-Supervised Representation Learning [82.79808902674282]
Contrastive learning has revolutionized unsupervised representation learning.
Current contrastive models are ineffective at localizing the foreground object.
We propose a data-driven approach for learning invariance to backgrounds.
arXiv Detail & Related papers (2020-04-14T16:29:42Z)
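For the Localized Region Contrast entry above, the superpixel step is concrete enough to sketch: Felzenszwalb's algorithm is available in scikit-image, and region-level embeddings can be obtained by averaging per-pixel features inside each superpixel. The pooling layout and parameter values below are assumptions for illustration, not the LRC paper's implementation; a region-level contrastive loss analogous to the point-wise sketch above could then be applied to the pooled embeddings.

```python
import torch
from skimage.segmentation import felzenszwalb

def region_pooled_features(image_np, feat):
    """Average per-pixel features over Felzenszwalb superpixels.

    image_np: (H, W) or (H, W, 3) numpy image, used only to compute superpixels.
    feat:     (C, H, W) torch tensor of per-pixel features from an encoder.
    Returns:  (R, C) tensor with one pooled embedding per superpixel region.
    """
    # Superpixel parameters are illustrative, not taken from the paper.
    segments = felzenszwalb(image_np, scale=100, sigma=0.5, min_size=50)  # (H, W) int labels
    seg = torch.from_numpy(segments).to(feat.device).view(-1).long()      # (H*W,)
    c, h, w = feat.shape
    flat = feat.reshape(c, h * w).t().contiguous()                        # (H*W, C)
    num_regions = int(seg.max().item()) + 1
    pooled = torch.zeros(num_regions, c, device=feat.device, dtype=feat.dtype)
    pooled.index_add_(0, seg, flat)                                       # sum features per region
    counts = torch.bincount(seg, minlength=num_regions).clamp(min=1)
    return pooled / counts.unsqueeze(1).to(feat.dtype)                    # mean per region
```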