Towards Efficient and Accurate CT Segmentation via Edge-Preserving Probabilistic Downsampling
- URL: http://arxiv.org/abs/2404.03991v1
- Date: Fri, 5 Apr 2024 10:01:31 GMT
- Title: Towards Efficient and Accurate CT Segmentation via Edge-Preserving Probabilistic Downsampling
- Authors: Shahzad Ali, Yu Rim Lee, Soo Young Park, Won Young Tak, Soon Ki Jung
- Abstract summary: Downsampling images and labels, often necessitated by limited resources or to expedite network training, leads to the loss of small objects and thin boundaries.
This undermines the segmentation network's capacity to interpret images accurately and predict detailed labels, resulting in diminished performance compared to processing at original resolutions.
We introduce a novel method named Edge-preserving Probabilistic Downsampling (EPD).
It utilizes class uncertainty within a local window to produce soft labels, with the window size dictating the downsampling factor.
- Score: 2.1465347972460367
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Downsampling images and labels, often necessitated by limited resources or to expedite network training, leads to the loss of small objects and thin boundaries. This undermines the segmentation network's capacity to interpret images accurately and predict detailed labels, resulting in diminished performance compared to processing at original resolutions. This situation exemplifies the trade-off between efficiency and accuracy, with higher downsampling factors further impairing segmentation outcomes. Preserving information during downsampling is especially critical for medical image segmentation tasks. To tackle this challenge, we introduce a novel method named Edge-preserving Probabilistic Downsampling (EPD). It utilizes class uncertainty within a local window to produce soft labels, with the window size dictating the downsampling factor. This enables a network to produce quality predictions at low resolutions. Beyond preserving edge details more effectively than conventional nearest-neighbor label downsampling, applying a similar algorithm to the images themselves surpasses bilinear interpolation, enhancing overall performance. Our method significantly improved Intersection over Union (IoU) by 2.85%, 8.65%, and 11.89% when downsampling data to 1/2, 1/4, and 1/8, respectively, compared to conventional interpolation methods.
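To make the abstract's core idea concrete, here is a minimal, illustrative sketch of window-based soft-label downsampling in Python/NumPy. It is not the authors' reference implementation: the function name `soft_label_downsample`, the use of per-window class frequencies as soft labels, and the assumption of an integer-coded 2D label map are assumptions made here, guided only by the abstract's description (class uncertainty within a local window, with the window size equal to the downsampling factor).

```python
import numpy as np

def soft_label_downsample(label_map: np.ndarray, factor: int, num_classes: int) -> np.ndarray:
    """Reduce an integer-coded label map by `factor`, producing per-pixel
    class-frequency (soft) labels instead of hard classes."""
    h, w = label_map.shape
    assert h % factor == 0 and w % factor == 0, "label map size must be divisible by the factor"

    # Group pixels into non-overlapping factor x factor windows.
    windows = label_map.reshape(h // factor, factor, w // factor, factor)
    windows = windows.transpose(0, 2, 1, 3).reshape(h // factor, w // factor, factor * factor)

    # One-hot encode each pixel in a window, then average: the result is the
    # fraction of each class inside the window (a soft label that sums to 1).
    one_hot = np.eye(num_classes, dtype=np.float32)[windows]  # (H/f, W/f, f*f, C)
    return one_hot.mean(axis=2)                               # (H/f, W/f, C)

# Example: a 4x4 binary mask downsampled by a factor of 2.
mask = np.array([[0, 0, 1, 1],
                 [0, 1, 1, 1],
                 [0, 0, 0, 1],
                 [0, 0, 1, 1]])
print(soft_label_downsample(mask, factor=2, num_classes=2))
# Windows that straddle the boundary yield fractional labels such as [0.75, 0.25],
# whereas picking a single pixel would collapse them to a hard 0 or 1.
```

For comparison, nearest-neighbor downsampling keeps only one pixel per window, so a one-pixel-wide boundary can vanish entirely at higher factors; the frequency-based soft label instead retains its presence as a fractional class probability.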
Related papers
- Image-level Regression for Uncertainty-aware Retinal Image Segmentation [3.7141182051230914]
We introduce a novel Uncertainty-Aware (SAUNA) transform, which adds pixel uncertainty to the ground truth.
Our results indicate that the integration of the SAUNA transform and these segmentation losses led to significant performance boosts for different segmentation models.
arXiv Detail & Related papers (2024-05-27T04:17:10Z)
- MB-RACS: Measurement-Bounds-based Rate-Adaptive Image Compressed Sensing Network [65.1004435124796]
We propose a Measurement-Bounds-based Rate-Adaptive Image Compressed Sensing Network (MB-RACS) framework.
Our experiments demonstrate that the proposed MB-RACS method surpasses current leading methods.
arXiv Detail & Related papers (2024-01-19T04:40:20Z)
- Improving Feature Stability during Upsampling -- Spectral Artifacts and the Importance of Spatial Context [15.351461000403074]
Pixel-wise predictions are required in a wide variety of tasks such as image restoration, image segmentation, or disparity estimation.
Previous works have shown that resampling operations are subject to artifacts such as aliasing.
We show that the availability of large spatial context during upsampling makes it possible to provide stable, high-quality pixel-wise predictions.
arXiv Detail & Related papers (2023-11-29T10:53:05Z)
- Guided Linear Upsampling [8.819059777836628]
Guided upsampling is an effective approach for accelerating high-resolution image processing.
Our method can better preserve detail effects while suppressing artifacts such as bleeding and blurring.
We demonstrate the advantages of our method for both interactive image editing and real-time high-resolution video processing.
arXiv Detail & Related papers (2023-07-13T08:04:24Z)
- Soft labelling for semantic segmentation: Bringing coherence to label down-sampling [1.797129499170058]
In semantic segmentation, down-sampling is commonly performed due to limited resources.
We propose a novel framework for label down-sampling via soft-labeling.
This proposal also produces reliable annotations for under-represented semantic classes.
arXiv Detail & Related papers (2023-02-27T17:02:30Z)
- Boosting Few-shot Fine-grained Recognition with Background Suppression and Foreground Alignment [53.401889855278704]
Few-shot fine-grained recognition (FS-FGR) aims to recognize novel fine-grained categories with the help of limited available samples.
We propose a two-stage background suppression and foreground alignment framework, which is composed of a background activation suppression (BAS) module, a foreground object alignment (FOA) module, and a local to local (L2L) similarity metric.
Experiments conducted on multiple popular fine-grained benchmarks demonstrate that our method outperforms the existing state-of-the-art by a large margin.
arXiv Detail & Related papers (2022-10-04T07:54:40Z)
- Deep Semantic Statistics Matching (D2SM) Denoising Network [70.01091467628068]
We introduce the Deep Semantic Statistics Matching (D2SM) Denoising Network.
It exploits semantic features of pretrained classification networks and implicitly matches the probabilistic distribution of clear images in the semantic feature space.
By learning to preserve the semantic distribution of denoised images, we empirically find our method significantly improves the denoising capabilities of networks.
arXiv Detail & Related papers (2022-07-19T14:35:42Z)
- Learning to Downsample for Segmentation of Ultra-High Resolution Images [6.432524678252553]
We show that learning the spatially varying downsampling strategy jointly with segmentation offers advantages in segmenting large images with a limited computational budget.
Our method adapts the sampling density over different locations so that more samples are collected from the small important regions and fewer from the others.
We show on two public and one local high-resolution datasets that our method consistently learns sampling locations preserving more information and boosting segmentation accuracy over baseline methods.
arXiv Detail & Related papers (2021-09-22T23:04:59Z)
- Semi-supervised Semantic Segmentation with Directional Context-aware Consistency [66.49995436833667]
We focus on the semi-supervised segmentation problem where only a small set of labeled data is provided with a much larger collection of totally unlabeled images.
A preferred high-level representation should capture the contextual information while not losing self-awareness.
We present the Directional Contrastive Loss (DC Loss) to accomplish the consistency in a pixel-to-pixel manner.
arXiv Detail & Related papers (2021-06-27T03:42:40Z)
- Learning Affinity-Aware Upsampling for Deep Image Matting [83.02806488958399]
We show that learning affinity in upsampling provides an effective and efficient approach to exploit pairwise interactions in deep networks.
In particular, results on the Composition-1k matting dataset show that A2U achieves a 14% relative improvement in the SAD metric against a strong baseline.
Compared with the state-of-the-art matting network, we achieve 8% higher performance with only 40% model complexity.
arXiv Detail & Related papers (2020-11-29T05:09:43Z)
- Image Fine-grained Inpainting [89.17316318927621]
We present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields.
To better train this efficient generator, besides the frequently-used VGG feature matching loss, we design a novel self-guided regression loss.
We also employ a discriminator with local and global branches to ensure local-global contents consistency.
arXiv Detail & Related papers (2020-02-07T03:45:25Z)