FourierLoss: Shape-Aware Loss Function with Fourier Descriptors
- URL: http://arxiv.org/abs/2309.12106v1
- Date: Thu, 21 Sep 2023 14:23:10 GMT
- Title: FourierLoss: Shape-Aware Loss Function with Fourier Descriptors
- Authors: Mehmet Bahadir Erden, Selahattin Cansiz, Onur Caki, Haya Khattak,
Durmus Etiz, Melek Cosar Yakar, Kerem Duruer, Berke Barut and Cigdem
Gunduz-Demir
- Abstract summary: This work introduces a new shape-aware loss function, which we name FourierLoss.
It quantifies the shape dissimilarity between the ground truth and the predicted segmentation maps through Fourier descriptors calculated on their objects, and penalizes this dissimilarity during network training.
Experiments revealed that the proposed shape-aware loss function led to statistically significantly better results for liver segmentation, compared to its counterparts.
- Score: 1.5659201748872393
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Encoder-decoder networks have become a popular choice for various medical image
segmentation tasks. When they are trained with a standard loss function, these
networks are not explicitly enforced to preserve the shape integrity of an
object in an image. However, this ability is important for obtaining more
accurate results, especially when there is low contrast between the object
and its surroundings. In response to this issue, this work
introduces a new shape-aware loss function, which we name FourierLoss. This
loss function relies on quantifying the shape dissimilarity between the ground
truth and the predicted segmentation maps through the Fourier descriptors
calculated on their objects, and penalizing this dissimilarity in network
training. Unlike previous studies, FourierLoss offers an adaptive
loss function with trainable hyperparameters that control how much importance
the network gives to each level of shape detail during training. This control
is achieved by the proposed adaptive loss update mechanism, which learns the
hyperparameters end-to-end, simultaneously with the network weights, by
backpropagation. As a result of this mechanism, the
network can dynamically change its attention from learning the general outline
of an object to learning the details of its contour points, or vice versa, in
different training epochs. Working on 2879 computed tomography images of 93
subjects, our experiments revealed that the proposed adaptive shape-aware loss
function led to statistically significantly better results for liver
segmentation, compared to its counterparts.
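
To make the idea concrete, below is a minimal, illustrative sketch (not the authors' implementation) of how a shape dissimilarity based on Fourier descriptors can be computed between a ground-truth and a predicted mask. The function names, the number of harmonics, the contour-resampling choice, the use of scikit-image for contour extraction, and the first-harmonic normalization are all assumptions made for this example.

```python
# Illustrative sketch of a Fourier-descriptor shape dissimilarity between two
# binary masks with a single foreground object. Names such as
# `fourier_descriptors`, NUM_HARMONICS, and CONTOUR_SAMPLES are placeholders
# chosen for this example, not taken from the paper.
import numpy as np
from skimage import measure

NUM_HARMONICS = 16      # number of low-frequency harmonics kept as shape descriptors
CONTOUR_SAMPLES = 128   # contour is resampled to a fixed number of points

def fourier_descriptors(mask, k=NUM_HARMONICS, n=CONTOUR_SAMPLES):
    """Return magnitudes of the first k Fourier coefficients of the largest contour."""
    contours = measure.find_contours(mask.astype(float), 0.5)
    if not contours:
        return np.zeros(k)
    contour = max(contours, key=len)                 # largest object boundary
    # Resample to a fixed number of points so descriptors are comparable.
    idx = np.linspace(0, len(contour) - 1, n).astype(int)
    contour = contour[idx]
    z = contour[:, 1] + 1j * contour[:, 0]           # boundary as complex signal x + iy
    coeffs = np.fft.fft(z)
    mags = np.abs(coeffs[1:k + 1])                   # drop the DC term (translation)
    return mags / (mags[0] + 1e-8)                   # scale normalization

def shape_dissimilarity(gt_mask, pred_mask, weights=None):
    """Weighted distance between descriptors: low harmonics capture the coarse
    outline, high harmonics capture fine contour detail."""
    d_gt = fourier_descriptors(gt_mask)
    d_pr = fourier_descriptors(pred_mask)
    if weights is None:
        weights = np.ones(NUM_HARMONICS) / NUM_HARMONICS
    return float(np.sum(weights * (d_gt - d_pr) ** 2))
```

In the paper's adaptive variant, the per-harmonic weighting is not fixed as in this sketch but is learned end-to-end by backpropagation together with the network weights, which is what lets training shift emphasis between the coarse outline (low harmonics) and fine contour detail (high harmonics) across epochs.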
Related papers
- Unraveling the Hessian: A Key to Smooth Convergence in Loss Function Landscapes [0.0]
We theoretically analyze the convergence of the loss landscape in a fully connected neural network and derive upper bounds for the difference in loss function values when adding a new object to the sample.
Our empirical study confirms these results on various datasets, demonstrating the convergence of the loss function surface for image classification tasks.
arXiv Detail & Related papers (2024-09-18T14:04:15Z)
- WiNet: Wavelet-based Incremental Learning for Efficient Medical Image Registration [68.25711405944239]
Deep image registration has demonstrated exceptional accuracy and fast inference.
Recent advances have adopted either multiple cascades or pyramid architectures to estimate dense deformation fields in a coarse-to-fine manner.
We introduce a model-driven WiNet that incrementally estimates scale-wise wavelet coefficients for the displacement/velocity field across various scales.
arXiv Detail & Related papers (2024-07-18T11:51:01Z)
- Misalignment-Robust Frequency Distribution Loss for Image Transformation [51.0462138717502]
This paper aims to address a common challenge in deep learning-based image transformation methods, such as image enhancement and super-resolution.
We introduce a novel and simple Frequency Distribution Loss (FDL) for computing distribution distance within the frequency domain.
Our method is empirically proven effective as a training constraint due to the thoughtful utilization of global information in the frequency domain.
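A minimal sketch of this general frequency-domain idea appears after the related-papers list below.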
arXiv Detail & Related papers (2024-02-28T09:27:41Z)
- Alternate Loss Functions for Classification and Robust Regression Can Improve the Accuracy of Artificial Neural Networks [6.452225158891343]
This paper shows that training speed and final accuracy of neural networks can significantly depend on the loss function used to train neural networks.
Two new classification loss functions that significantly improve performance on a wide variety of benchmark tasks are proposed.
arXiv Detail & Related papers (2023-03-17T12:52:06Z)
- A Systematic Performance Analysis of Deep Perceptual Loss Networks: Breaking Transfer Learning Conventions [5.470136744581653]
Deep perceptual loss is a type of loss function for images that computes the error between two images as the distance between deep features extracted from a neural network.
This work evaluates the effect of different pretrained loss networks on four different application areas.
arXiv Detail & Related papers (2023-02-08T13:08:51Z)
- Adaptive Self-supervision Algorithms for Physics-informed Neural Networks [59.822151945132525]
Physics-informed neural networks (PINNs) incorporate physical knowledge from the problem domain as a soft constraint on the loss function.
We study the impact of the location of the collocation points on the trainability of these models.
We propose a novel adaptive collocation scheme which progressively allocates more collocation points to areas where the model is making higher errors.
arXiv Detail & Related papers (2022-07-08T18:17:06Z)
- Learning sparse features can lead to overfitting in neural networks [9.2104922520782]
We show that feature learning can perform worse than lazy training.
Although sparsity is known to be essential for learning anisotropic data, it is detrimental when the target function is constant or smooth.
arXiv Detail & Related papers (2022-06-24T14:26:33Z)
- Generic Perceptual Loss for Modeling Structured Output Dependencies [78.59700528239141]
We show that what matters is the network structure rather than the trained weights.
We demonstrate that a randomly-weighted deep CNN can be used to model the structured dependencies of outputs.
arXiv Detail & Related papers (2021-03-18T23:56:07Z)
- Why Do Better Loss Functions Lead to Less Transferable Features? [93.47297944685114]
This paper studies how the choice of training objective affects the transferability of the hidden representations of convolutional neural networks trained on ImageNet.
We show that many objectives lead to statistically significant improvements in ImageNet accuracy over vanilla softmax cross-entropy, but the resulting fixed feature extractors transfer substantially worse to downstream tasks.
arXiv Detail & Related papers (2020-10-30T17:50:31Z)
- Beyond Dropout: Feature Map Distortion to Regularize Deep Neural Networks [107.77595511218429]
In this paper, we investigate the empirical Rademacher complexity related to intermediate layers of deep neural networks.
We propose a feature distortion method (Disout) for addressing the aforementioned problem.
The superiority of the proposed feature map distortion for producing deep neural networks with higher testing performance is analyzed and demonstrated.
arXiv Detail & Related papers (2020-02-23T13:59:13Z)
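
For the Misalignment-Robust Frequency Distribution Loss entry above, the following is a hedged sketch of the general idea of comparing images through the distribution of their Fourier magnitudes rather than through aligned pixel values. It is an illustration written for this note, not the FDL paper's exact formulation; the function name and the sorted-magnitude comparison are choices made here.

```python
# Sketch of a frequency-domain distribution comparison: compare the
# *distributions* of Fourier magnitudes of two images, which is more tolerant
# to small spatial misalignments than a pixel-wise (or bin-wise) comparison.
import torch

def frequency_distribution_distance(pred, target):
    """pred, target: (B, C, H, W) tensors. Returns a scalar distance between
    their sorted Fourier-magnitude spectra (a simple 1-D Wasserstein-style
    comparison)."""
    pred_mag = torch.fft.fft2(pred).abs().flatten(start_dim=1)
    target_mag = torch.fft.fft2(target).abs().flatten(start_dim=1)
    # Sorting discards the spatial arrangement of frequency components, so the
    # comparison is between magnitude distributions, not aligned frequency bins.
    pred_sorted, _ = torch.sort(pred_mag, dim=1)
    target_sorted, _ = torch.sort(target_mag, dim=1)
    return torch.mean(torch.abs(pred_sorted - target_sorted))

# Example usage:
# pred = torch.rand(2, 3, 64, 64); target = torch.rand(2, 3, 64, 64)
# loss = frequency_distribution_distance(pred, target)
```

Because the sorted-magnitude comparison ignores where in the spectrum each component sits, small shifts between prediction and target change this quantity far less than a pixel-wise loss would, which is the intuition behind using frequency-domain distributions as a training constraint.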