HyTver: A Novel Loss Function for Longitudinal Multiple Sclerosis Lesion Segmentation
- URL: http://arxiv.org/abs/2508.17639v1
- Date: Mon, 25 Aug 2025 04:01:28 GMT
- Title: HyTver: A Novel Loss Function for Longitudinal Multiple Sclerosis Lesion Segmentation
- Authors: Dayan Perera, Ting Fung Fung, Vishnu Monn
- Abstract summary: We propose a novel hybrid loss called HyTver that achieves good segmentation performance while maintaining performance on other metrics. We achieve a Dice score of 0.659 while also ensuring that the distance-based metrics are comparable to those of other popular loss functions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Longitudinal Multiple Sclerosis Lesion Segmentation is a particularly challenging problem that involves imbalance in both the input data and the output segmentation. One route to practical models is therefore to design better loss functions. Most models naively use Dice loss, Cross-Entropy loss, or their combination, even though the imbalance can be mitigated by selecting an appropriate loss function. Several loss functions have been proposed to address the imbalance, but they bring problems of their own, such as computational complexity from hyperparameters used as exponents, or degraded performance on metrics other than region-based ones. We propose a novel hybrid loss called HyTver that achieves good segmentation performance while maintaining performance on other metrics. We achieve a Dice score of 0.659 while also ensuring that the distance-based metrics are comparable to those of other popular loss functions. In addition, we evaluate the stability of the loss functions when used on a pre-trained model and perform extensive comparisons with other popular loss functions.
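The abstract does not spell out HyTver's formulation, but the name suggests a hybrid built around the Tversky index. As a hedged illustration only, and not the authors' published loss, here is a minimal PyTorch sketch of a Tversky/cross-entropy hybrid; the `alpha`, `beta`, and `lam` values are assumptions chosen for the example.

```python
import torch
import torch.nn.functional as F

def tversky_index(probs, target, alpha=0.3, beta=0.7, eps=1e-6):
    """Soft Tversky index; alpha/beta trade off false positives vs false negatives."""
    tp = (probs * target).sum()
    fp = (probs * (1 - target)).sum()
    fn = ((1 - probs) * target).sum()
    return (tp + eps) / (tp + alpha * fp + beta * fn + eps)

def hybrid_tversky_loss(logits, target, alpha=0.3, beta=0.7, lam=0.5):
    """Illustrative hybrid: Tversky loss blended with binary cross-entropy.
    NOTE: a generic sketch, not the published HyTver formulation."""
    probs = torch.sigmoid(logits)
    tversky_loss = 1.0 - tversky_index(probs, target, alpha, beta)
    bce = F.binary_cross_entropy_with_logits(logits, target)
    return lam * tversky_loss + (1.0 - lam) * bce

# Example: a sparse one-channel 3D lesion mask
logits = torch.randn(2, 1, 8, 64, 64)
target = (torch.rand(2, 1, 8, 64, 64) > 0.95).float()
print(hybrid_tversky_loss(logits, target))
```

The Tversky index generalizes Dice: alpha = beta = 0.5 recovers the Dice score, while beta > alpha penalizes false negatives more heavily, which is one standard way to counter lesion/background imbalance.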
Related papers
- Variation-Bounded Loss for Noise-Tolerant Learning [105.20373602308284]
We introduce the Variation Ratio as a novel property related to the robustness of loss functions. We propose a new family of robust loss functions, termed Variation-Bounded Loss (VBL), which is characterized by a bounded variation ratio.
arXiv Detail & Related papers (2025-11-15T10:15:29Z)
- DL101 Neural Network Outputs and Loss Functions [51.77969450792284]
The loss function used to train a neural network is strongly connected to its output layer from a statistical point of view. The report analyzes common activation functions for a neural network output layer, such as linear, sigmoid, ReLU, and softmax (the canonical pairings are sketched after this entry).
arXiv Detail & Related papers (2025-11-07T10:20:45Z)
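The statistical connection mentioned above is the standard likelihood pairing between output layers and losses. A minimal PyTorch illustration of the canonical pairings (standard material, not code from the report):

```python
import torch
import torch.nn as nn

# Each pairing is the negative log-likelihood of a matching output distribution.
x = torch.randn(4, 10)          # a batch of 4 feature vectors
head = nn.Linear(10, 3)
logits = head(x)

# Regression: linear output + MSE  <->  Gaussian likelihood
mse = nn.MSELoss()(logits, torch.randn(4, 3))

# Multi-label: sigmoid output + BCE  <->  independent Bernoullis
bce = nn.BCEWithLogitsLoss()(logits, torch.randint(0, 2, (4, 3)).float())

# Multi-class: softmax output + cross-entropy  <->  categorical likelihood
ce = nn.CrossEntropyLoss()(logits, torch.randint(0, 3, (4,)))
```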
- ASRL: A robust loss function with potential for development [4.292888620805875]
We propose a partition-wise robust loss function based on previous robust loss functions. This loss function achieves high robustness and a wide range of applicability.
arXiv Detail & Related papers (2025-04-09T14:40:46Z)
- AnyLoss: Transforming Classification Metrics into Loss Functions [21.34290540936501]
Evaluation metrics can be used to assess the performance of models in binary classification tasks.
Most metrics are derived from a confusion matrix in a non-differentiable form, making it difficult to generate a differentiable loss function that could directly optimize them.
We propose a general-purpose approach that transforms any confusion-matrix-based metric into a loss function, AnyLoss, that can be used directly in optimization (the underlying soft-confusion-matrix idea is sketched after this entry).
arXiv Detail & Related papers (2024-05-23T16:14:16Z)
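The usual way to make a confusion-matrix metric differentiable is to build a "soft" confusion matrix from predicted probabilities. The sketch below shows that general idea with an F1 surrogate; AnyLoss's specific amplification function is not reproduced here, and the sharpening factor is an assumption.

```python
import torch

def soft_confusion(probs, target):
    """Differentiable confusion-matrix entries from probabilities in [0, 1]."""
    tp = (probs * target).sum()
    fp = (probs * (1 - target)).sum()
    fn = ((1 - probs) * target).sum()
    tn = ((1 - probs) * (1 - target)).sum()
    return tp, fp, fn, tn

def soft_f1_loss(logits, target, eps=1e-6):
    # Sharpening the sigmoid pushes probabilities toward 0/1, in the spirit
    # of AnyLoss's amplification step (the paper's exact scheme differs).
    probs = torch.sigmoid(4.0 * logits)
    tp, fp, fn, _ = soft_confusion(probs, target)
    f1 = (2 * tp + eps) / (2 * tp + fp + fn + eps)
    return 1.0 - f1
```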
- Tuned Contrastive Learning [77.67209954169593]
We propose a novel contrastive loss function, Tuned Contrastive Learning (TCL) loss.
TCL generalizes to multiple positives and negatives in a batch and offers parameters to tune and improve the gradient responses from hard positives and hard negatives.
We show how to extend TCL to the self-supervised setting and empirically compare it with various SOTA self-supervised learning methods (a baseline contrastive sketch follows this entry).
arXiv Detail & Related papers (2023-05-18T03:26:37Z)
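TCL's exact parameterization is not given in the summary. For context, below is a standard supervised contrastive (SupCon-style) loss, which already handles multiple positives and negatives in a batch; TCL adds tunable terms on top of this kind of objective, which are not reproduced here.

```python
import torch
import torch.nn.functional as F

def supcon_loss(features, labels, temperature=0.1):
    """SupCon-style loss. features: (n, d) raw embeddings; labels: (n,) ints."""
    feats = F.normalize(features, dim=1)
    sim = feats @ feats.T / temperature
    n = sim.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=sim.device)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask

    sim = sim.masked_fill(self_mask, float('-inf'))   # drop self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0                            # anchors with >= 1 positive
    per_anchor = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    return (per_anchor[valid] / pos_counts[valid]).mean()
```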
- A Generalized Surface Loss for Reducing the Hausdorff Distance in Medical Imaging Segmentation [1.2289361708127877]
We propose a novel loss function to minimize Hausdorff-based metrics with more desirable numerical properties than current methods.
Our loss function outperforms other losses when tested on the LiTS and BraTS datasets using the state-of-the-art nnUNet architecture (a distance-transform sketch follows this entry).
arXiv Detail & Related papers (2023-02-08T04:01:42Z)
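The paper's exact Generalized Surface Loss is not given in the summary. A common way to target boundary-distance metrics, in the style of boundary/surface losses, weights predictions by a signed distance transform of the ground truth, as sketched below.

```python
import numpy as np
import torch
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask):
    """Signed Euclidean distance to the boundary of a binary mask
    (negative inside the object, positive outside)."""
    mask = mask.astype(bool)
    if not mask.any() or mask.all():
        return np.zeros(mask.shape, dtype=np.float32)
    outside = distance_transform_edt(~mask)
    inside = distance_transform_edt(mask)
    return (outside - inside).astype(np.float32)

def boundary_style_loss(probs, target_np):
    """Distance-weighted surface loss (boundary-loss-style sketch, not the
    paper's exact formulation). probs: torch tensor (H, W) in [0, 1]."""
    dist = torch.from_numpy(signed_distance_map(target_np))
    # Mass predicted far outside the true boundary incurs a large penalty;
    # mass inside the object reduces the loss.
    return (probs * dist).mean()
```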
- Xtreme Margin: A Tunable Loss Function for Binary Classification Problems [0.0]
We provide an overview of a novel loss function, the Xtreme Margin loss function.
Unlike the binary cross-entropy and hinge loss functions, this loss function provides researchers and practitioners with flexibility in their training process.
arXiv Detail & Related papers (2022-10-31T22:39:32Z)
- Hybridised Loss Functions for Improved Neural Network Generalisation [0.0]
Loss functions play an important role in the training of artificial neural networks (ANNs).
It has been shown that the cross entropy and sum squared error loss functions result in different training dynamics.
A hybrid of the cross-entropy and sum squared error loss functions could combine the advantages of the two functions while limiting their disadvantages (a minimal sketch follows this entry).
arXiv Detail & Related papers (2022-04-26T11:52:11Z)
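The hybrid described above has a direct realization as a convex combination of the two losses. A minimal sketch, with the mixing weight `alpha` as an illustrative assumption:

```python
import torch
import torch.nn.functional as F

def hybrid_ce_sse_loss(logits, target, alpha=0.5):
    """Convex blend of cross-entropy and sum-squared-error on the softmax
    output (alpha is an illustrative choice, not a value from the paper)."""
    ce = F.cross_entropy(logits, target)
    probs = F.softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes=logits.size(1)).float()
    sse = ((probs - one_hot) ** 2).sum(dim=1).mean()
    return alpha * ce + (1.0 - alpha) * sse
```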
- Do Lessons from Metric Learning Generalize to Image-Caption Retrieval? [67.45267657995748]
The triplet loss with semi-hard negatives has become the de facto choice for image-caption retrieval (ICR) methods that are optimized from scratch.
Recent progress in metric learning has given rise to new loss functions that outperform the triplet loss on tasks such as image retrieval and representation learning.
We ask whether these findings generalize to the setting of ICR by comparing three loss functions on two ICR methods (the semi-hard triplet loss is sketched after this entry).
arXiv Detail & Related papers (2022-02-14T15:18:00Z)
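For reference, a minimal sketch of the triplet loss with semi-hard negative mining that the entry refers to (the standard scheme, not code from the paper):

```python
import torch

def semi_hard_triplet_loss(anchor, positive, negatives, margin=0.2):
    """Triplet loss with a semi-hard negative: the closest negative that is
    still farther from the anchor than the positive.
    anchor, positive: (dim,); negatives: (n, dim)."""
    d_pos = torch.norm(anchor - positive)
    d_negs = torch.norm(negatives - anchor, dim=1)
    semi_hard = d_negs[d_negs > d_pos]      # farther than the positive
    # Fall back to the easiest negative if every negative violates d_neg > d_pos.
    d_neg = semi_hard.min() if semi_hard.numel() else d_negs.max()
    return torch.clamp(d_pos - d_neg + margin, min=0.0)
```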
- Asymmetric Loss Functions for Learning with Noisy Labels [82.50250230688388]
We propose a new class of loss functions, namely asymmetric loss functions, which are robust to learning with noisy labels under various types of noise.
Experimental results on benchmark datasets demonstrate that asymmetric loss functions can outperform state-of-the-art methods.
arXiv Detail & Related papers (2021-06-06T12:52:48Z)
- Solving weakly supervised regression problem using low-rank manifold regularization [77.34726150561087]
We solve a weakly supervised regression problem.
Under "weakly" we understand that for some training points the labels are known, for some unknown, and for others uncertain due to the presence of random noise or other reasons such as lack of resources.
In the numerical section, we apply the suggested method to artificial and real datasets using Monte Carlo modeling (a manifold-regularization sketch follows this entry).
arXiv Detail & Related papers (2021-04-13T23:21:01Z)
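The paper's low-rank construction is not detailed in the summary. For orientation, standard manifold (graph-Laplacian) regularization for partially labeled regression admits the closed-form sketch below; `sigma` and `lam` are illustrative values.

```python
import numpy as np

def laplacian_regularized_fit(X, y, labeled, sigma=1.0, lam=0.1):
    """Semi-supervised regression: fit labeled targets while forcing
    predictions to vary smoothly over a similarity graph on all points
    (standard manifold regularization; the paper's low-rank variant is
    not reproduced here). X: (n, d); y: (n,); labeled: (n,) bool mask."""
    # Gaussian affinity graph and its Laplacian L = D - W
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    L = np.diag(W.sum(1)) - W
    # Minimize sum_labeled (f_i - y_i)^2 + lam * f^T L f  (closed form)
    J = np.diag(labeled.astype(float))
    return np.linalg.solve(J + lam * L, J @ y)

# Example: 50 points, only the first 10 labeled
X = np.random.randn(50, 2)
y = X[:, 0].copy()               # unlabeled entries are ignored by J
labeled = np.zeros(50, dtype=bool)
labeled[:10] = True
f_hat = laplacian_regularized_fit(X, y, labeled)
```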
- Auto Seg-Loss: Searching Metric Surrogates for Semantic Segmentation [56.343646789922545]
We propose to automate the design of metric-specific loss functions by searching for differentiable surrogate losses for each metric.
Experiments on PASCAL VOC and Cityscapes demonstrate that the searched surrogate losses outperform the manually designed loss functions consistently.
arXiv Detail & Related papers (2020-10-15T17:59:08Z)
- Normalized Loss Functions for Deep Learning with Noisy Labels [39.32101898670049]
We show that the commonly used Cross Entropy (CE) loss is not robust to noisy labels.
We propose a framework for building robust loss functions, called Active Passive Loss (APL), which combines an active loss term with a passive one (sketched after this entry).
arXiv Detail & Related papers (2020-06-24T08:25:46Z)
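Following the paper's stated framework, a normalized loss divides the loss at the true label by its sum over all possible labels, and APL combines an "active" normalized loss with a "passive" one. Below is a sketch using normalized cross-entropy and reverse cross-entropy; the clip constant and the alpha/beta weights are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def normalized_ce(logits, target):
    """Normalized cross-entropy: CE at the true class divided by the sum
    of CE over every possible class label."""
    log_probs = F.log_softmax(logits, dim=1)
    ce_true = -log_probs.gather(1, target.unsqueeze(1)).squeeze(1)
    ce_all = -log_probs.sum(dim=1)      # sum of CE over all class labels
    return (ce_true / ce_all).mean()

def reverse_ce(logits, target, clip=-4.0):
    """Reverse cross-entropy, a common 'passive' partner loss; log(0) on
    non-target classes is clipped to a constant (-4 here, an assumption)."""
    probs = F.softmax(logits, dim=1)
    one_hot = F.one_hot(target, logits.size(1)).float()
    log_q = one_hot.clamp(min=1e-7).log().clamp(min=clip)
    return -(probs * log_q).sum(dim=1).mean()

def apl_loss(logits, target, alpha=1.0, beta=1.0):
    # Active Passive Loss: weighted sum of an active (normalized) term and
    # a passive term; the weights here are illustrative, not the paper's.
    return alpha * normalized_ce(logits, target) + beta * reverse_ce(logits, target)
```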