UM-CAM: Uncertainty-weighted Multi-resolution Class Activation Maps for
Weakly-supervised Fetal Brain Segmentation
- URL: http://arxiv.org/abs/2306.11490v1
- Date: Tue, 20 Jun 2023 12:21:13 GMT
- Title: UM-CAM: Uncertainty-weighted Multi-resolution Class Activation Maps for
Weakly-supervised Fetal Brain Segmentation
- Authors: Jia Fu, Tao Lu, Shaoting Zhang, Guotai Wang
- Abstract summary: We propose a novel weakly-supervised method with image-level labels based on semantic features and context information exploration.
Our proposed method outperforms state-of-the-art weakly-supervised methods with image-level labels.
- Score: 15.333308330432176
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate segmentation of the fetal brain from Magnetic Resonance Image (MRI)
is important for prenatal assessment of fetal development. Although deep
learning has shown the potential to achieve this task, it requires a large,
finely annotated dataset that is difficult to collect. To address this issue,
weakly-supervised segmentation methods with image-level labels have gained
attention, which are commonly based on class activation maps from a
classification network trained with image tags. However, most of these methods
suffer from incomplete activation regions, due to the low-resolution
localization without detailed boundary cues. To this end, we propose a novel
weakly-supervised method with image-level labels based on semantic features and
context information exploration. We first propose an Uncertainty-weighted
Multi-resolution Class Activation Map (UM-CAM) to generate high-quality
pixel-level supervision. Then, we design a Geodesic distance-based Seed
Expansion (GSE) method to provide context information for rectifying the
ambiguous boundaries of UM-CAM. Extensive experiments on a fetal brain dataset
show that our UM-CAM can provide more accurate activation regions with fewer
false positive regions than existing CAM variants, and our proposed method
outperforms state-of-the-art weakly-supervised methods with image-level labels.
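The abstract does not include code, but the uncertainty-weighted fusion idea behind UM-CAM can be illustrated with a minimal NumPy sketch. It assumes the CAMs from different resolutions have already been upsampled to the input size and rescaled to [0, 1], and that binary entropy is used as the per-pixel uncertainty; the authors' exact weighting scheme may differ.

```python
import numpy as np

def fuse_cams_uncertainty_weighted(cams, eps=1e-6):
    """Fuse multi-resolution CAMs with pixel-wise uncertainty weighting.

    cams: list of 2D arrays of identical shape, each a CAM upsampled to the
    input resolution and rescaled to [0, 1]. Each value is read as a
    foreground probability; its binary entropy serves as a pixel-wise
    uncertainty, so more confident maps get larger weights at that pixel.
    """
    weighted_sum = np.zeros_like(cams[0], dtype=np.float64)
    weight_total = np.zeros_like(cams[0], dtype=np.float64)
    for cam in cams:
        p = np.clip(cam.astype(np.float64), eps, 1.0 - eps)
        entropy = -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))
        confidence = 1.0 - entropy / np.log(2.0)  # 1 at p=0 or 1, 0 at p=0.5
        weighted_sum += confidence * p
        weight_total += confidence
    return weighted_sum / np.maximum(weight_total, eps)

# Example: three hypothetical CAMs from shallow, middle, and deep layers.
cams = [np.random.rand(96, 96) for _ in range(3)]
fused = fuse_cams_uncertainty_weighted(cams)
pseudo_label = (fused > 0.5).astype(np.uint8)  # pixel-level pseudo-label
```

Thresholding the fused map, as in the last line, yields the pixel-level pseudo-labels that the abstract describes as supervision for a segmentation network.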
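The Geodesic distance-based Seed Expansion (GSE) step can be sketched in the same spirit. The sketch below assumes foreground and background seeds are taken from high- and low-confidence regions of the fused CAM and computes geodesic distances with a simple Dijkstra pass whose step cost mixes spatial length and intensity difference; the thresholds and the distance definition are illustrative choices, not the paper's exact formulation.

```python
import heapq
import numpy as np

def geodesic_distance(image, seeds, lamb=1.0):
    """Geodesic distance from seed pixels on a 4-connected grid (Dijkstra).

    The step cost mixes spatial length and intensity difference, so distances
    grow quickly across strong edges and slowly inside homogeneous regions.
    """
    h, w = image.shape
    dist = np.full((h, w), np.inf)
    heap = []
    for y, x in zip(*np.nonzero(seeds)):
        dist[y, x] = 0.0
        heapq.heappush(heap, (0.0, int(y), int(x)))
    while heap:
        d, y, x = heapq.heappop(heap)
        if d > dist[y, x]:
            continue
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + 1.0 + lamb * abs(float(image[ny, nx]) - float(image[y, x]))
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    heapq.heappush(heap, (nd, ny, nx))
    return dist

def geodesic_seed_expansion(image, fused_cam, fg_thresh=0.8, bg_thresh=0.2):
    """Refine a CAM-derived pseudo-label by competing geodesic distances."""
    fg_seeds = fused_cam > fg_thresh   # confident foreground seeds
    bg_seeds = fused_cam < bg_thresh   # confident background seeds
    d_fg = geodesic_distance(image, fg_seeds)
    d_bg = geodesic_distance(image, bg_seeds)
    return (d_fg < d_bg).astype(np.uint8)  # refined pseudo-label
```

Each pixel is assigned to whichever seed set is geodesically closer, which tends to snap the pseudo-label boundary onto intensity edges and thus rectifies the ambiguous boundaries of the CAM.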
Related papers
- Progressive Feature Self-reinforcement for Weakly Supervised Semantic
Segmentation [55.69128107473125]
We propose a single-stage approach for Weakly Supervised Semantic Segmentation (WSSS) with image-level labels.
We adaptively partition the image content into deterministic regions (e.g., confident foreground and background) and uncertain regions (e.g., object boundaries and misclassified categories) for separate processing.
Building upon this, we introduce a complementary self-enhancement method that constrains the semantic consistency between these confident regions and an augmented image with the same class labels.
arXiv Detail & Related papers (2023-12-14T13:21:52Z)
- Multi-spectral Class Center Network for Face Manipulation Detection and Localization [52.569170436393165]
We propose a novel Multi-Spectral Class Center Network (MSCCNet) for face manipulation detection and localization.
Based on the features of different frequency bands, the MSCC module collects multi-spectral class centers and computes pixel-to-class relations.
Applying multi-spectral class-level representations suppresses the semantic information of visual concepts that is insensitive to the manipulated regions of forged images.
arXiv Detail & Related papers (2023-05-18T08:09:20Z)
- Localized Region Contrast for Enhancing Self-Supervised Learning in Medical Image Segmentation [27.82940072548603]
We propose a novel contrastive learning framework that integrates Localized Region Contrast (LRC) to enhance existing self-supervised pre-training methods for medical image segmentation.
Our approach identifies super-pixels with Felzenszwalb's algorithm and performs local contrastive learning using a novel contrastive sampling loss; a minimal sketch of the super-pixel step appears after this list.
arXiv Detail & Related papers (2023-04-06T22:43:13Z)
- Exploring Weakly Supervised Semantic Segmentation Ensembles for Medical Imaging Systems [11.693197342734152]
We propose a framework for reliable classification and detection of medical conditions in images.
Our framework achieves this by first utilizing lower-threshold CAMs to cover the target object with high certainty.
We demonstrate an improved Dice score of up to 8% on the BRATS dataset and 6% on the DECATHLON dataset.
arXiv Detail & Related papers (2023-03-14T13:31:05Z)
- MSCDA: Multi-level Semantic-guided Contrast Improves Unsupervised Domain Adaptation for Breast MRI Segmentation in Small Datasets [5.272836235045653]
We propose a novel Multi-level Semantic-guided Contrastive Domain Adaptation framework.
Our approach incorporates self-training with contrastive learning to align feature representations between domains.
In particular, we extend the contrastive loss by incorporating pixel-to-pixel, pixel-to-centroid, and centroid-to-centroid contrasts; the centroid-level term is sketched after this list.
arXiv Detail & Related papers (2023-01-04T19:16:55Z)
- Mixed-UNet: Refined Class Activation Mapping for Weakly-Supervised Semantic Segmentation with Multi-scale Inference [28.409679398886304]
We develop a novel model named Mixed-UNet, which has two parallel branches in the decoding phase.
We evaluate the designed Mixed-UNet against several prevalent deep learning-based segmentation approaches on a dataset collected from a local hospital as well as on public datasets.
arXiv Detail & Related papers (2022-05-06T08:37:02Z)
- Self-supervised Image-specific Prototype Exploration for Weakly Supervised Semantic Segmentation [72.33139350241044]
Weakly Supervised Semantic Segmentation (WSSS) based on image-level labels has attracted much attention due to its low annotation cost.
We propose a Self-supervised Image-specific Prototype Exploration (SIPE) that consists of an Image-specific Prototype Exploration (IPE) and a General-Specific Consistency (GSC) loss.
Our SIPE achieves new state-of-the-art performance using only image-level labels.
arXiv Detail & Related papers (2022-03-06T09:01:03Z)
- Region-level Active Learning for Cluttered Scenes [60.93811392293329]
We introduce a new strategy that subsumes previous Image-level and Object-level approaches into a generalized, Region-level approach.
We show that this approach significantly decreases labeling effort and improves rare object search on realistic data with inherent class-imbalance and cluttered scenes.
arXiv Detail & Related papers (2021-08-20T14:02:38Z)
- Attention Model Enhanced Network for Classification of Breast Cancer Image [54.83246945407568]
AMEN is formulated in a multi-branch fashion with a pixel-wise attention model and a classification submodule.
To focus on subtle details, the input image is enhanced by the pixel-wise attention map generated by the former branch.
Experiments conducted on three benchmark datasets demonstrate the superiority of the proposed method under various scenarios.
arXiv Detail & Related papers (2020-10-07T08:44:21Z)
- Manifold-driven Attention Maps for Weakly Supervised Segmentation [9.289524646688244]
We propose a manifold-driven attention-based network to enhance visually salient regions.
Our method generates superior attention maps directly during inference without the need for extra computation.
arXiv Detail & Related papers (2020-04-07T00:03:28Z)
- Towards Interpretable Semantic Segmentation via Gradient-weighted Class Activation Mapping [71.91734471596432]
We propose SEG-GRAD-CAM, a gradient-based method for interpreting semantic segmentation.
Our method is an extension of the widely used Grad-CAM method, applied locally to produce heatmaps showing the relevance of individual pixels for semantic segmentation; a minimal sketch of this idea appears after this list.
arXiv Detail & Related papers (2020-02-26T12:32:40Z)
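A few of the entries above describe mechanisms concretely enough to illustrate. The sketches below are minimal examples under stated assumptions, not the authors' implementations.

For the Localized Region Contrast entry, the following shows how super-pixels from Felzenszwalb's algorithm (as implemented in scikit-image) can define regions whose mean embeddings could feed a local contrastive loss; the feature map and the pooling choice here are placeholders.

```python
import numpy as np
from skimage.segmentation import felzenszwalb

def superpixel_region_features(image, feature_map, scale=100, sigma=0.5, min_size=50):
    """Average a dense feature map inside each Felzenszwalb super-pixel.

    image:       (H, W) grayscale image used only to compute super-pixels.
    feature_map: (H, W, C) per-pixel embeddings from any encoder.
    Returns (labels, region_ids, region_feats), where region_feats[i] is the
    mean embedding of super-pixel region_ids[i], a candidate unit for
    sampling local contrastive pairs.
    """
    labels = felzenszwalb(image, scale=scale, sigma=sigma, min_size=min_size)
    region_ids = np.unique(labels)
    region_feats = np.zeros((len(region_ids), feature_map.shape[-1]))
    for i, rid in enumerate(region_ids):
        region_feats[i] = feature_map[labels == rid].mean(axis=0)
    return labels, region_ids, region_feats

# Toy usage with random data standing in for an MRI slice and encoder output.
image = np.random.rand(96, 96)
feature_map = np.random.rand(96, 96, 32)
labels, region_ids, region_feats = superpixel_region_features(image, feature_map)
```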
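For the MSCDA entry, the centroid-level pieces can be sketched as follows: class centroids are mean-pooled pixel embeddings, and a centroid-to-centroid InfoNCE-style term aligns them across domains. The pixel-to-centroid and pixel-to-pixel terms follow the same pattern with pixel embeddings in place of one or both centroid sets. The temperature, label source, and feature shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def class_centroids(features, labels, num_classes):
    """Mean-pool pixel embeddings per class.

    features: (N, C, H, W) embeddings; labels: (N, H, W) integer class map.
    Returns (num_classes, C); classes absent from the batch get zero vectors.
    """
    n, c, h, w = features.shape
    flat_feat = features.permute(0, 2, 3, 1).reshape(-1, c)  # (N*H*W, C)
    flat_lab = labels.reshape(-1)                            # (N*H*W,)
    centroids = torch.zeros(num_classes, c, device=features.device)
    for k in range(num_classes):
        mask = flat_lab == k
        if mask.any():
            centroids[k] = flat_feat[mask].mean(dim=0)
    return centroids

def centroid_contrast(src_centroids, tgt_centroids, temperature=0.1):
    """Centroid-to-centroid InfoNCE-style term: the target centroid of class k
    should be most similar to the source centroid of the same class."""
    sim = F.cosine_similarity(tgt_centroids.unsqueeze(1),
                              src_centroids.unsqueeze(0), dim=-1) / temperature
    targets = torch.arange(src_centroids.size(0), device=sim.device)
    return F.cross_entropy(sim, targets)
```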
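For the SEG-GRAD-CAM entry, a minimal PyTorch sketch of Grad-CAM applied locally to a segmentation model: the target-class logits are summed over a region of interest before back-propagation, and the feature maps are weighted by their spatially pooled gradients. The toy network and the choice of score are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySegNet(nn.Module):
    """Toy fully convolutional segmenter used only to demonstrate the idea."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(16, num_classes, 1)

    def forward(self, x):
        feats = self.features(x)
        return feats, self.classifier(feats)

def seg_grad_cam(model, image, target_class, roi_mask):
    """Grad-CAM-style heatmap for a segmentation model, restricted to a region.

    The class score is the sum of the target-class logits inside roi_mask;
    each feature map is weighted by its spatially averaged gradient.
    """
    feats, logits = model(image)
    feats.retain_grad()
    score = (logits[:, target_class] * roi_mask).sum()
    model.zero_grad()
    score.backward()
    weights = feats.grad.mean(dim=(2, 3), keepdim=True)  # pooled gradients
    cam = F.relu((weights * feats).sum(dim=1))           # weighted channel sum
    return (cam / (cam.max() + 1e-6)).detach()

# Toy usage: a random image and a square region of interest.
model = TinySegNet()
image = torch.randn(1, 1, 64, 64)
roi = torch.zeros(64, 64)
roi[20:40, 20:40] = 1.0
heatmap = seg_grad_cam(model, image, target_class=1, roi_mask=roi)
```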