Steerable Pyramid Weighted Loss: Multi-Scale Adaptive Weighting for Semantic Segmentation
- URL: http://arxiv.org/abs/2503.06604v1
- Date: Sun, 09 Mar 2025 13:15:01 GMT
- Title: Steerable Pyramid Weighted Loss: Multi-Scale Adaptive Weighting for Semantic Segmentation
- Authors: Renhao Lu
- Abstract summary: We propose a novel steerable pyramid-based weighted (SPW) loss function that efficiently generates adaptive weight maps. Our results demonstrate that the proposed SPW loss function achieves superior pixel precision and segmentation accuracy with minimal computational overhead.
- Score: 0.4662017507844857
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Semantic segmentation is a core task in computer vision with applications in biomedical imaging, remote sensing, and autonomous driving. While standard loss functions such as cross-entropy and Dice loss perform well in general cases, they often struggle with fine structures, particularly in tasks involving thin structures or closely packed objects. Various weight map-based loss functions have been proposed to address this issue by assigning higher loss weights to pixels prone to misclassification. However, these methods typically rely on precomputed or runtime-generated weight maps based on distance transforms, which impose significant computational costs and fail to adapt to evolving network predictions. In this paper, we propose a novel steerable pyramid-based weighted (SPW) loss function that efficiently generates adaptive weight maps. Unlike traditional boundary-aware losses that depend on static or iteratively updated distance maps, our method leverages steerable pyramids to dynamically emphasize regions across multiple frequency bands (capturing features at different scales) while maintaining computational efficiency. Additionally, by incorporating network predictions into the weight computation, our approach enables adaptive refinement during training. We evaluate our method on the SNEMI3D, GlaS, and DRIVE datasets, benchmarking it against 11 state-of-the-art loss functions. Our results demonstrate that the proposed SPW loss function achieves superior pixel precision and segmentation accuracy with minimal computational overhead. This work provides an effective and efficient solution for improving semantic segmentation, particularly for applications requiring multiscale feature representation. The code is available at https://anonymous.4open.science/r/SPW-0884
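The abstract's core idea — a weight map built from multi-scale band-pass responses of the current prediction error — can be sketched as follows. This is a hypothetical simplification, not the authors' implementation (their code is at the linked repository): an isotropic Gaussian-difference pyramid stands in for the steerable pyramid, and the function names are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_weight_map(pred_prob, target, n_bands=3, eps=1e-6):
    """Adaptive weight map from band-pass responses of the prediction
    error, loosely following the SPW idea. Simplification: a difference-
    of-Gaussians pyramid replaces the (oriented) steerable pyramid."""
    err = np.abs(pred_prob - target.astype(float))
    weights = np.ones_like(err)   # base weight of 1 everywhere
    prev = err
    for k in range(n_bands):
        blurred = gaussian_filter(prev, sigma=2.0 ** k)
        band = np.abs(prev - blurred)       # band-pass response at scale k
        band = band / (band.max() + eps)    # normalize per band
        weights += band                     # emphasize high-frequency error
        prev = blurred
    return weights

def spw_cross_entropy(pred_prob, target, eps=1e-6):
    """Pixel-wise binary cross-entropy, weighted by the adaptive map
    recomputed from the current prediction (so it evolves in training)."""
    w = multiscale_weight_map(pred_prob, target)
    p = np.clip(pred_prob, eps, 1 - eps)
    ce = -(target * np.log(p) + (1 - target) * np.log(1 - p))
    return float((w * ce).mean())
```

Because the weight map is derived from the prediction itself rather than a precomputed distance transform, it needs no per-dataset preprocessing and adapts as the network improves.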
Related papers
- Complex Wavelet Mutual Information Loss: A Multi-Scale Loss Function for Semantic Segmentation [0.4662017507844857]
We propose a novel loss function that leverages mutual information from subband images decomposed by a complex steerable pyramid. CWMI loss achieves significant improvements in both pixel-wise accuracy and topological metrics compared to state-of-the-art methods.
arXiv Detail & Related papers (2025-02-01T21:19:48Z)
- Network scaling and scale-driven loss balancing for intelligent poroelastography [2.665036498336221]
A deep learning framework is developed for multiscale characterization of poroelastic media from full waveform data.
Two major challenges impede direct application of existing state-of-the-art techniques for this purpose.
We propose the idea of "network scaling", where the neural property maps are constructed by unit shape functions composed into a scaling layer.
arXiv Detail & Related papers (2024-10-27T23:06:29Z)
- WiNet: Wavelet-based Incremental Learning for Efficient Medical Image Registration [68.25711405944239]
Deep image registration has demonstrated exceptional accuracy and fast inference.
Recent advances have adopted either multiple cascades or pyramid architectures to estimate dense deformation fields in a coarse-to-fine manner.
We introduce a model-driven WiNet that incrementally estimates scale-wise wavelet coefficients for the displacement/velocity field across various scales.
arXiv Detail & Related papers (2024-07-18T11:51:01Z)
- FourierLoss: Shape-Aware Loss Function with Fourier Descriptors [1.5659201748872393]
This work introduces a new shape-aware loss function, which we name FourierLoss.
It quantifies the shape dissimilarity between the ground truth and the predicted segmentation maps through Fourier descriptors calculated on their objects, and penalizes this dissimilarity during network training.
Experiments revealed that the proposed shape-aware loss function led to statistically significantly better results for liver segmentation, compared to its counterparts.
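The shape-comparison step this entry describes can be illustrated with a generic Fourier-descriptor sketch. This is not the paper's exact formulation — the descriptor normalization and coefficient count here are illustrative choices — but it shows the penalized quantity: a distance between low-order Fourier coefficients of two object contours.

```python
import numpy as np

def fourier_descriptors(contour, n_coeffs=8):
    """Low-order Fourier coefficient magnitudes of a closed contour
    (points encoded as complex x + iy). Skipping the DC term removes
    translation; dividing by the dominant magnitude removes scale.
    A generic sketch of FourierLoss-style shape descriptors."""
    z = contour[:, 0] + 1j * contour[:, 1]
    coeffs = np.fft.fft(z)
    # keep n_coeffs positive and n_coeffs negative frequencies, skip DC
    mags = np.abs(np.concatenate([coeffs[1:n_coeffs + 1], coeffs[-n_coeffs:]]))
    return mags / (mags.max() + 1e-12)

def shape_dissimilarity(c1, c2, n_coeffs=8):
    """L2 distance between two shapes' descriptors -- the quantity a
    shape-aware loss would penalize during training."""
    return float(np.linalg.norm(
        fourier_descriptors(c1, n_coeffs) - fourier_descriptors(c2, n_coeffs)))
```

Using coefficient magnitudes makes the comparison invariant to where the contour starts and where the object sits in the image, which is why descriptor distances isolate genuine shape error.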
arXiv Detail & Related papers (2023-09-21T14:23:10Z)
- A Generalized Surface Loss for Reducing the Hausdorff Distance in Medical Imaging Segmentation [1.2289361708127877]
We propose a novel loss function to minimize Hausdorff-based metrics with more desirable numerical properties than current methods.
Our loss function outperforms other losses when tested on the LiTS and BraTS datasets using the state-of-the-art nnUNet architecture.
arXiv Detail & Related papers (2023-02-08T04:01:42Z)
- Adaptive Self-supervision Algorithms for Physics-informed Neural Networks [59.822151945132525]
Physics-informed neural networks (PINNs) incorporate physical knowledge from the problem domain as a soft constraint on the loss function.
We study the impact of the location of the collocation points on the trainability of these models.
We propose a novel adaptive collocation scheme which progressively allocates more collocation points to areas where the model is making higher errors.
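The error-driven allocation this entry describes can be sketched as residual-proportional resampling: evaluate the PDE residual on a candidate pool and draw new collocation points with probability proportional to the residual magnitude. This is a generic sketch of that family of schemes, not the paper's exact algorithm; the function name and signature are illustrative.

```python
import numpy as np

def resample_collocation(residual_fn, domain, n_points,
                         n_candidates=1000, rng=None):
    """Draw collocation points with probability proportional to the PDE
    residual magnitude, so regions where the model errs most receive
    more points (a generic error-adaptive sketch for 1-D domains)."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = domain
    candidates = rng.uniform(lo, hi, size=n_candidates)
    r = np.abs(residual_fn(candidates))
    probs = r / r.sum()                       # residual-proportional density
    idx = rng.choice(n_candidates, size=n_points, replace=True, p=probs)
    return candidates[idx]
```

Called once every few epochs with the current network's residual, this progressively concentrates supervision where the soft physics constraint is violated most.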
arXiv Detail & Related papers (2022-07-08T18:17:06Z)
- Scaling Structured Inference with Randomization [64.18063627155128]
We propose a family of randomized dynamic programming (RDP) algorithms for scaling structured models to tens of thousands of latent states.
Our method is widely applicable to classical DP-based inference.
It is also compatible with automatic differentiation, so it can be integrated seamlessly with neural networks.
arXiv Detail & Related papers (2021-12-07T11:26:41Z)
- DS-Net++: Dynamic Weight Slicing for Efficient Inference in CNNs and Transformers [105.74546828182834]
We show a hardware-efficient dynamic inference regime, named dynamic weight slicing, which adaptively slices a subset of network parameters for inputs with diverse difficulty levels.
We present the dynamic slimmable network (DS-Net) and dynamic slice-able network (DS-Net++), which input-dependently adjust the filter numbers of CNNs and multiple dimensions in both CNNs and transformers.
arXiv Detail & Related papers (2021-09-21T09:57:21Z)
- Generic Perceptual Loss for Modeling Structured Output Dependencies [78.59700528239141]
We show that what matters is the network structure, not the trained weights.
We demonstrate that a randomly-weighted deep CNN can be used to model the structured dependencies of outputs.
arXiv Detail & Related papers (2021-03-18T23:56:07Z)
- Multi-Loss Weighting with Coefficient of Variations [19.37721431024278]
We propose a weighting scheme based on the coefficient of variation, which sets the weights according to properties observed while training the model.
The proposed method incorporates a measure of uncertainty to balance the losses, and as a result the loss weights evolve during training without requiring another (learning based) optimisation.
The validity of the approach is shown empirically for depth estimation and semantic segmentation on multiple datasets.
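The balancing rule this entry describes reduces to a small computation: weight each task loss by its coefficient of variation (standard deviation over mean) across the training history, so losses that still fluctuate get more attention than losses that have settled. This is a minimal sketch of that idea, not the paper's full method (which includes further normalization details).

```python
import numpy as np

def cov_weights(loss_history):
    """Per-task weights proportional to the coefficient of variation
    (std / mean) of each loss over the recorded training steps.
    `loss_history` has shape (steps, n_losses). Minimal sketch of
    CoV-based multi-loss balancing; no extra optimisation is needed."""
    h = np.asarray(loss_history, dtype=float)
    cov = h.std(axis=0) / (h.mean(axis=0) + 1e-12)  # uncertainty per loss
    return cov / cov.sum()                           # normalize to sum to 1
```

Because the weights are recomputed from running statistics, they evolve during training without a second, learned optimisation loop, matching the summary above.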
arXiv Detail & Related papers (2020-09-03T14:51:19Z)
- AQD: Towards Accurate Fully-Quantized Object Detection [94.06347866374927]
We propose an Accurate Quantized object Detection solution, termed AQD, to get rid of floating-point computation.
Our AQD achieves comparable or even better performance compared with the full-precision counterpart under extremely low-bit schemes.
arXiv Detail & Related papers (2020-07-14T09:07:29Z)
- SoftAdapt: Techniques for Adaptive Loss Weighting of Neural Networks with Multi-Part Loss Functions [1.2891210250935146]
We propose a family of methods, called SoftAdapt, that dynamically change function weights for multi-part loss functions.
SoftAdapt is mathematically intuitive, computationally efficient and straightforward to implement.
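The dynamic reweighting this entry describes can be sketched as a softmax over each loss component's recent rate of change: components that are decreasing slowest (or rising) receive more weight. This is a simplified sketch of the SoftAdapt idea — the paper defines several variants — and the parameter names here are illustrative.

```python
import numpy as np

def softadapt_weights(prev_losses, curr_losses, beta=0.1):
    """SoftAdapt-style weights: softmax over each component's recent
    change, so a loss that stalls or increases is emphasized over one
    that is already falling fast (simplified from the paper's variants)."""
    s = np.asarray(curr_losses, float) - np.asarray(prev_losses, float)
    z = beta * (s - s.max())    # shift by the max for numerical stability
    w = np.exp(z)
    return w / w.sum()          # normalized weights for the multi-part loss
```

The total loss is then `sum(w_i * L_i)` with `w` recomputed each step from the two most recent loss values, which is why the scheme is cheap and trivial to drop into an existing training loop.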
arXiv Detail & Related papers (2019-12-27T22:23:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.