Scribble2D5: Weakly-Supervised Volumetric Image Segmentation via
Scribble Annotations
- URL: http://arxiv.org/abs/2205.06779v1
- Date: Fri, 13 May 2022 17:04:10 GMT
- Title: Scribble2D5: Weakly-Supervised Volumetric Image Segmentation via
Scribble Annotations
- Authors: Qiuhui Chen, Yi Hong
- Abstract summary: Weakly-supervised image segmentation using weak annotations such as scribbles has gained great attention.
Existing scribble-based methods suffer from poor boundary localization.
We propose Scribble2D5, which tackles 3D anisotropic image segmentation and improves boundary prediction.
- Score: 5.400947282838267
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, weakly-supervised image segmentation using weak annotations such as
scribbles has gained great attention, since such annotations are much easier to
obtain than time-consuming and labor-intensive labeling at the pixel/voxel
level. However, because scribbles lack the structural information of the region
of interest (ROI), existing scribble-based methods suffer from poor boundary
localization. Furthermore, most current methods are designed for 2D image
segmentation and do not fully leverage volumetric information when applied
directly to image slices. In this paper, we propose a scribble-based volumetric
image segmentation method, Scribble2D5, which tackles 3D anisotropic image
segmentation and improves boundary prediction. To achieve this, we augment a
2.5D attention UNet with a proposed label propagation module to extend semantic
information from scribbles, and with a combination of static and active boundary
prediction to learn the ROI's boundary and regularize its shape. Extensive
experiments on three public datasets demonstrate that Scribble2D5 significantly
outperforms current scribble-based methods and approaches the performance of
fully-supervised ones. Our code is available online.
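As a rough illustration of how scribble supervision works in practice (not the authors' released code), the sketch below implements a partial cross-entropy loss that is computed only on the sparse voxels a scribble touches; the tensor shapes and the ignore-label convention are assumptions.

```python
# Hypothetical sketch of scribble-supervised training: compute the loss only
# on annotated voxels (partial cross-entropy), since scribbles leave most of
# the volume unlabeled. Shapes and the IGNORE convention are assumptions.
import torch
import torch.nn.functional as F

IGNORE = 255  # dummy label carried by unlabeled voxels (assumption)

def partial_cross_entropy(logits: torch.Tensor, scribbles: torch.Tensor) -> torch.Tensor:
    """logits: (B, C, D, H, W); scribbles: (B, D, H, W) with IGNORE where unlabeled."""
    return F.cross_entropy(logits, scribbles, ignore_index=IGNORE)

# Example: a 4-class problem on an 8x64x64 volume with a tiny scribble of class 1.
logits = torch.randn(2, 4, 8, 64, 64, requires_grad=True)
scribbles = torch.full((2, 8, 64, 64), IGNORE, dtype=torch.long)
scribbles[:, 4, 30:34, 30:34] = 1
loss = partial_cross_entropy(logits, scribbles)
loss.backward()  # gradients flow only from the scribbled voxels
```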
Related papers
- Bayesian Self-Training for Semi-Supervised 3D Segmentation [59.544558398992386]
3D segmentation is a core problem in computer vision.
However, densely labeling 3D point clouds for fully-supervised training remains too labor-intensive and expensive.
Semi-supervised training provides a more practical alternative, where only a small set of labeled data is given, accompanied by a larger unlabeled set.
arXiv Detail & Related papers (2024-09-12T14:54:31Z)
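The semi-supervised setting above is commonly attacked with self-training: pseudo-label the unlabeled set and keep only confident predictions for the next round. The following is a generic, hypothetical sketch using Monte-Carlo-dropout uncertainty, not the paper's Bayesian formulation.

```python
# Generic self-training pseudo-labeling step with an uncertainty filter.
# The entropy threshold and the use of MC dropout are assumptions.
import torch

@torch.no_grad()
def pseudo_label(model, x, n_mc: int = 8, max_entropy: float = 0.5):
    """Average softmax over n_mc stochastic passes; mask uncertain points."""
    model.train()  # keep dropout active (assumes the model uses dropout)
    probs = torch.stack([model(x).softmax(dim=1) for _ in range(n_mc)]).mean(0)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    labels = probs.argmax(dim=1)
    labels[entropy > max_entropy] = 255  # ignore uncertain points next round
    return labels
```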
- Open-Vocabulary 3D Semantic Segmentation with Text-to-Image Diffusion Models [57.37244894146089]
We propose Diff2Scene, which leverages frozen representations from text-image generative models, along with salient-aware and geometric-aware masks, for open-vocabulary 3D semantic segmentation and visual grounding tasks.
We show that it outperforms competitive baselines and achieves significant improvements over state-of-the-art methods.
arXiv Detail & Related papers (2024-07-18T16:20:56Z)
- 2D Feature Distillation for Weakly- and Semi-Supervised 3D Semantic Segmentation [92.17700318483745]
We propose an image-guidance network (IGNet), which builds on the idea of distilling high-level feature information from a domain-adapted, synthetically trained 2D semantic segmentation network.
IGNet achieves state-of-the-art results for weakly-supervised LiDAR semantic segmentation on ScribbleKITTI, reaching up to 98% of the relative performance of fully supervised training with only 8% labeled points.
arXiv Detail & Related papers (2023-11-27T07:57:29Z)
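To make the distillation idea concrete, here is a hedged sketch of a 2D-to-3D feature distillation loss: each 3D point feature is regressed toward the frozen 2D feature of the pixel it projects to. The shapes, the precomputed projection, and the L2 loss are illustrative assumptions, not IGNet's exact design.

```python
# Sketch of 2D-to-3D feature distillation. pixel_uv is assumed to hold the
# precomputed image coordinates each 3D point projects to.
import torch
import torch.nn.functional as F

def distillation_loss(point_feats: torch.Tensor,
                      image_feats: torch.Tensor,
                      pixel_uv: torch.Tensor) -> torch.Tensor:
    """point_feats: (N, C) 3D-branch features; image_feats: (C, H, W) frozen
    2D-branch features; pixel_uv: (N, 2) integer (u, v) pixel coordinates."""
    target = image_feats[:, pixel_uv[:, 1], pixel_uv[:, 0]].t()  # (N, C)
    return F.mse_loss(point_feats, target.detach())
```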
- Volumetric Medical Image Segmentation via Scribble Annotations and Shape Priors [3.774643767869751]
We propose a scribble-based volumetric image segmentation method, Scribble2D5, which tackles 3D anisotropic image segmentation.
To achieve this, we augment a 2.5D attention UNet with a proposed label propagation module to extend semantic information from scribbles.
Also, we propose an optional add-on component, which incorporates the shape prior information from unpaired segmentation masks to further improve model accuracy.
arXiv Detail & Related papers (2023-10-12T07:17:14Z)
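One common way to exploit unpaired masks as a shape prior, shown here as a hypothetical sketch, is to train a small autoencoder on the masks alone and then penalize predictions it cannot reconstruct; the paper's optional add-on component may differ from this.

```python
# Shape-prior regularizer learned from unpaired masks (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskAutoencoder(nn.Module):
    """Trained on unpaired masks alone; reconstructs plausible shapes."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1))

    def forward(self, mask: torch.Tensor) -> torch.Tensor:
        return self.dec(self.enc(mask))

def shape_prior_loss(ae: MaskAutoencoder, pred_mask: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():          # the pre-trained prior stays frozen
        projected = ae(pred_mask)  # nearest "plausible" shape
    return F.mse_loss(pred_mask, projected)
```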
- 3D Medical Image Segmentation with Sparse Annotation via Cross-Teaching between 3D and 2D Networks [26.29122638813974]
We propose a framework that can robustly learn from sparse annotation using the cross-teaching of both 3D and 2D networks.
Our experimental results on the MMWHS dataset demonstrate that our method outperforms the state-of-the-art (SOTA) semi-supervised segmentation methods.
arXiv Detail & Related papers (2023-07-30T15:26:17Z)
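The cross-teaching idea fits in a few lines: on unannotated voxels, each network is trained against the other's predictions used as fixed pseudo labels. The sketch below is illustrative; loss weighting and scheduling are omitted assumptions.

```python
# Cross-teaching between a 3D and a 2D network on the same voxels (sketch).
import torch.nn.functional as F

def cross_teaching_loss(logits_3d, logits_2d):
    """Both logits: (B, C, D, H, W) over the same voxels."""
    pseudo_from_2d = logits_2d.argmax(dim=1)  # hard pseudo labels, no gradient
    pseudo_from_3d = logits_3d.argmax(dim=1)
    return (F.cross_entropy(logits_3d, pseudo_from_2d) +
            F.cross_entropy(logits_2d, pseudo_from_3d))
```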
- SwIPE: Efficient and Robust Medical Image Segmentation with Implicit Patch Embeddings [12.79344668998054]
We propose SwIPE (Segmentation with Implicit Patch Embeddings) to enable accurate local boundary delineation and global shape coherence.
We show that SwIPE significantly improves over recent implicit approaches and outperforms state-of-the-art discrete methods with over 10x fewer parameters.
arXiv Detail & Related papers (2023-07-23T20:55:11Z)
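An implicit patch decoder, roughly in the spirit described above, maps a continuous coordinate plus a learned patch embedding to class logits, so boundaries can be queried at arbitrary resolution. Layer sizes below are arbitrary assumptions, not SwIPE's architecture.

```python
# Minimal implicit decoder over patch embeddings (illustrative sketch).
import torch
import torch.nn as nn

class ImplicitPatchDecoder(nn.Module):
    def __init__(self, emb_dim: int = 128, n_classes: int = 2):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(emb_dim + 3, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, n_classes))

    def forward(self, xyz: torch.Tensor, patch_emb: torch.Tensor) -> torch.Tensor:
        """xyz: (N, 3) coordinates in the patch's local frame; patch_emb: (N, emb_dim)."""
        return self.mlp(torch.cat([xyz, patch_emb], dim=-1))
```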
- Image Understands Point Cloud: Weakly Supervised 3D Semantic Segmentation via Association Learning [59.64695628433855]
We propose a novel cross-modality weakly supervised method for 3D segmentation, incorporating complementary information from unlabeled images.
We design a dual-branch network equipped with an active labeling strategy to maximize the value of a tiny fraction of labels.
Our method even outperforms the state-of-the-art fully supervised competitors with less than 1% actively selected annotations.
arXiv Detail & Related papers (2022-09-16T07:59:04Z)
- Noisy Boundaries: Lemon or Lemonade for Semi-supervised Instance Segmentation? [59.25833574373718]
We construct a framework for semi-supervised instance segmentation by assigning pixel-level pseudo labels.
Under this framework, we point out that noisy boundaries associated with pseudo labels are double-edged.
We propose to simultaneously exploit and resist them in a unified manner.
arXiv Detail & Related papers (2022-03-25T03:06:24Z)
- Semantic Segmentation with Generative Models: Semi-Supervised Learning and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z)
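Capturing the joint image-label distribution means a single latent code must decode into an aligned (image, mask) pair. The toy generator below shows only that structure; it is not the paper's architecture.

```python
# Toy generator over the joint image-label distribution (illustrative).
import torch
import torch.nn as nn

class JointGenerator(nn.Module):
    def __init__(self, z_dim: int = 64, n_classes: int = 2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 128, 4), nn.ReLU(),                    # 1x1 -> 4x4
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU())  # 4x4 -> 8x8
        self.to_image = nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1)         # 8x8 -> 16x16 RGB
        self.to_mask = nn.ConvTranspose2d(64, n_classes, 4, stride=2, padding=1)  # aligned label logits

    def forward(self, z: torch.Tensor):
        h = self.backbone(z.view(z.size(0), -1, 1, 1))
        return self.to_image(h), self.to_mask(h)  # one code, one aligned pair
```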
- 3D Guided Weakly Supervised Semantic Segmentation [27.269847900950943]
We propose a weakly supervised 2D semantic segmentation model by incorporating sparse bounding box labels with available 3D information.
We manually labeled a subset of the 2D-3D Semantics (2D-3D-S) dataset with bounding boxes and introduce a 2D-3D inference module to generate accurate pixel-wise segment proposal masks.
arXiv Detail & Related papers (2020-12-01T03:34:15Z)