Image-level Regression for Uncertainty-aware Retinal Image Segmentation
- URL: http://arxiv.org/abs/2405.16815v2
- Date: Thu, 25 Jul 2024 09:29:36 GMT
- Title: Image-level Regression for Uncertainty-aware Retinal Image Segmentation
- Authors: Trung Dang, Huy Hoang Nguyen, Aleksei Tiulpin
- Abstract summary: We introduce a novel Segmentation Annotation Uncertainty-Aware (SAUNA) transform, which adds pixel uncertainty to the ground truth.
Our results indicate that the integration of the SAUNA transform and these segmentation losses led to significant performance boosts for different segmentation models.
- Score: 3.7141182051230914
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate retinal vessel (RV) segmentation is a crucial step in the quantitative assessment of retinal vasculature, which is needed for the early detection of retinal diseases and other conditions. Numerous studies have been conducted to tackle the problem of segmenting vessels automatically using a pixel-wise classification approach. The common practice of creating ground truth labels is to categorize pixels as foreground and background. This approach is, however, biased, and it ignores the uncertainty of a human annotator when it comes to annotating e.g. thin vessels. In this work, we propose a simple and effective method that casts the RV segmentation task as an image-level regression. For this purpose, we first introduce a novel Segmentation Annotation Uncertainty-Aware (SAUNA) transform, which adds pixel uncertainty to the ground truth using the pixel's closeness to the annotation boundary and vessel thickness. To train our model with soft labels, we generalize the earlier proposed Jaccard metric loss to arbitrary hypercubes for soft Jaccard index (Intersection-over-Union) optimization. Additionally, we employ a stable version of the Focal-L1 loss for pixel-wise regression. We conduct thorough experiments and compare our method to a diverse set of baselines across 5 retinal image datasets. Our empirical results indicate that the integration of the SAUNA transform and these segmentation losses led to significant performance boosts for different segmentation models. Particularly, our methodology enables UNet-like architectures to substantially outperform computationally intensive baselines. Our implementation is available at \url{https://github.com/Oulu-IMEDS/SAUNA}.
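The two ingredients in the abstract can be roughly illustrated as follows: distance-based soft labels built from a binary mask, and a soft Jaccard (IoU) loss evaluated with the min/max relaxation. This is a minimal sketch, not the authors' implementation; the function names and the signed-distance weighting are assumptions (the paper's transform also uses vessel thickness).

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def sauna_soft_labels(mask, scale=5.0):
    """Convert a binary vessel mask into soft labels in (0, 1).

    Pixels close to the annotation boundary end up near 0.5, encoding
    annotator uncertainty; the paper's exact weighting differs from this
    illustrative signed-distance squash.
    """
    d_in = distance_transform_edt(mask)       # distance to background, inside vessels
    d_out = distance_transform_edt(1 - mask)  # distance to foreground, outside vessels
    signed = d_in - d_out                     # positive inside, negative outside
    return 1.0 / (1.0 + np.exp(-signed / scale))

def soft_jaccard_loss(pred, target, eps=1e-7):
    """Soft IoU loss: min/max is one standard relaxation of intersection
    and union that remains valid when both prediction and target are soft."""
    inter = np.minimum(pred, target).sum()
    union = np.maximum(pred, target).sum()
    return 1.0 - (inter + eps) / (union + eps)
```

With identical prediction and target the loss is zero, and it approaches one for disjoint masks, matching the behavior of the hard Jaccard index on binary labels.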
Related papers
- Deep Learning Based Speckle Filtering for Polarimetric SAR Images. Application to Sentinel-1 [51.404644401997736]
We propose a complete framework to remove speckle in polarimetric SAR images using a convolutional neural network.
Experiments show that the proposed approach offers exceptional results in both speckle reduction and resolution preservation.
arXiv Detail & Related papers (2024-08-28T10:07:17Z) - Learning to Rank Patches for Unbiased Image Redundancy Reduction [80.93989115541966]
Images suffer from heavy spatial redundancy because pixels in neighboring regions are spatially correlated.
Existing approaches strive to overcome this limitation by reducing less meaningful image regions.
We propose a self-supervised framework for image redundancy reduction called Learning to Rank Patches.
arXiv Detail & Related papers (2024-03-31T13:12:41Z) - Pixel-Inconsistency Modeling for Image Manipulation Localization [63.54342601757723]
Digital image forensics plays a crucial role in image authentication and manipulation localization.
This paper presents a generalized and robust manipulation localization model through the analysis of pixel inconsistency artifacts.
Experiments show that our method successfully extracts inherent pixel-inconsistency forgery fingerprints.
arXiv Detail & Related papers (2023-09-30T02:54:51Z) - Difference of Anisotropic and Isotropic TV for Segmentation under Blur
and Poisson Noise [2.6381163133447836]
We adopt a smoothing-and-thresholding (SaT) segmentation framework that finds a piecewise-smooth solution, followed by $k$-means to segment the image.
Specifically, for the image smoothing step, we replace the total variation regularization in the Mumford-Shah model with the weighted difference of anisotropic and isotropic total variation (AITV).
Convergence analysis is provided to validate the efficacy of the scheme.
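The two-stage SaT pipeline above can be sketched in a toy form. A 3x3 mean filter stands in for the AITV-regularized variational smoother, and the function names are invented for illustration:

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20):
    """1-D k-means on pixel intensities: the thresholding stage of SaT."""
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels

def sat_segment(image, k=2):
    """Smooth the image, then cluster intensities into k segments."""
    h, w = image.shape
    padded = np.pad(image, 1, mode="edge")
    # 3x3 mean filter as a stand-in for the variational smoothing step.
    smooth = sum(padded[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    return kmeans_1d(smooth.ravel(), k=k).reshape(h, w)
```

The point of the two-stage design is that clustering operates on a denoised, piecewise-smooth image rather than on the raw noisy input, which makes the final intensity partition far more stable.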
arXiv Detail & Related papers (2023-01-06T01:14:56Z) - SUPRA: Superpixel Guided Loss for Improved Multi-modal Segmentation in
Endoscopy [1.1470070927586016]
Domain shift is a well-known problem in the medical imaging community.
In this paper, we explore the domain generalisation technique to enable DL methods to be used in such scenarios.
We show that our method yields an improvement of nearly 20% in the target domain set compared to the baseline.
arXiv Detail & Related papers (2022-11-09T03:13:59Z) - SePiCo: Semantic-Guided Pixel Contrast for Domain Adaptive Semantic
Segmentation [52.62441404064957]
Domain adaptive semantic segmentation attempts to make satisfactory dense predictions on an unlabeled target domain by utilizing the model trained on a labeled source domain.
Many methods tend to alleviate noisy pseudo labels; however, they ignore intrinsic connections among cross-domain pixels with similar semantic concepts.
We propose Semantic-Guided Pixel Contrast (SePiCo), a novel one-stage adaptation framework that highlights the semantic concepts of individual pixels.
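A minimal, centroid-based version of pixel contrast might look like the following. This is a sketch only; SePiCo's actual objectives, centroid estimation, and temperature differ, and the names here are assumptions:

```python
import numpy as np

def pixel_contrast_loss(feats, labels, centroids, tau=0.1):
    """InfoNCE over class centroids: pull each pixel feature toward the
    centroid of its (pseudo-)label and push it away from other classes.

    feats:     (N, D) L2-normalized pixel features
    labels:    (N,)   class ids in [0, C)
    centroids: (C, D) L2-normalized class prototypes
    """
    logits = feats @ centroids.T / tau           # (N, C) scaled cosine similarity
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()
```

The loss is small when each pixel feature aligns with its own class prototype and large when it aligns with a different one, which is the cross-domain alignment pressure such contrastive terms provide.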
arXiv Detail & Related papers (2022-04-19T11:16:29Z) - Fast Hybrid Image Retargeting [0.0]
We propose a method that quantifies and limits warping distortions with the use of content-aware cropping.
Our method outperforms recent approaches, while running in a fraction of their execution time.
arXiv Detail & Related papers (2022-03-25T11:46:06Z) - GradViT: Gradient Inversion of Vision Transformers [83.54779732309653]
We demonstrate the vulnerability of vision transformers (ViTs) to gradient-based inversion attacks.
We introduce a method, named GradViT, that optimizes random noise into natural-looking images.
We observe unprecedentedly high fidelity and closeness to the original (hidden) data.
arXiv Detail & Related papers (2022-03-22T17:06:07Z) - Spatially-Adaptive Image Restoration using Distortion-Guided Networks [51.89245800461537]
We present a learning-based solution for restoring images suffering from spatially-varying degradations.
We propose SPAIR, a network design that harnesses distortion-localization information and dynamically adjusts to difficult regions in the image.
arXiv Detail & Related papers (2021-08-19T11:02:25Z) - Improving Image co-segmentation via Deep Metric Learning [1.5076964620370268]
We propose a novel triplet loss for image segmentation, called the IS-Triplet loss for short, and combine it with a traditional image segmentation loss.
We apply the proposed approach to image co-segmentation and test it on the SBCoseg dataset and the Internet dataset.
arXiv Detail & Related papers (2021-03-19T07:30:42Z) - Adaptive Fractional Dilated Convolution Network for Image Aesthetics
Assessment [33.945579916184364]
An adaptive fractional dilated convolution (AFDC) is developed to tackle this issue at the convolutional kernel level.
We provide a concise formulation for mini-batch training and utilize a grouping strategy to reduce computational overhead.
Our experimental results demonstrate that our proposed method achieves state-of-the-art performance on image aesthetics assessment over the AVA dataset.
arXiv Detail & Related papers (2020-04-06T21:56:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.