Robust T-Loss for Medical Image Segmentation
- URL: http://arxiv.org/abs/2306.00753v1
- Date: Thu, 1 Jun 2023 14:49:40 GMT
- Title: Robust T-Loss for Medical Image Segmentation
- Authors: Alvaro Gonzalez-Jimenez, Simone Lionetti, Philippe Gottfrois, Fabian Gröger, Marc Pouly, Alexander Navarini
- Abstract summary: This paper presents a new robust loss function, the T-Loss, for medical image segmentation.
The proposed loss is based on the negative log-likelihood of the Student-t distribution and can effectively handle outliers in the data.
Our experiments show that the T-Loss outperforms traditional loss functions in terms of Dice scores on two public medical datasets.
- Score: 56.524774292536264
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a new robust loss function, the T-Loss, for medical image
segmentation. The proposed loss is based on the negative log-likelihood of the
Student-t distribution and can effectively handle outliers in the data by
controlling its sensitivity with a single parameter. This parameter is updated
during the backpropagation process, eliminating the need for additional
computation or prior information about the level and spread of noisy labels.
Our experiments show that the T-Loss outperforms traditional loss functions in
terms of Dice scores on two public medical datasets for skin lesion and lung
segmentation. We also demonstrate the ability of T-Loss to handle different
types of simulated label noise, resembling human error. Our results provide
strong evidence that the T-Loss is a promising alternative for medical image
segmentation where high levels of noise or outliers in the dataset are a
typical phenomenon in practice. The project website can be found at
https://robust-tloss.github.io
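As a rough illustration of the idea, the negative log-likelihood of a Student-t distribution on the per-pixel residuals grows only logarithmically for large errors, so noisy labels are down-weighted. Below is a minimal numpy sketch assuming a unit-scale univariate Student-t; in the paper the sensitivity parameter ν is learned by backpropagation, whereas here it is passed as a fixed argument, and the function name is illustrative, not the authors' implementation.

```python
import math
import numpy as np

def t_loss(pred, target, nu):
    """Mean negative log-likelihood of a unit-scale Student-t
    on the residuals pred - target.  Small nu tolerates large
    residuals (outliers); large nu approaches a Gaussian NLL."""
    r2 = (np.asarray(pred, float) - np.asarray(target, float)) ** 2
    # Normalization constant of the Student-t density (nu is a scalar).
    const = (-math.lgamma((nu + 1.0) / 2.0)
             + math.lgamma(nu / 2.0)
             + 0.5 * math.log(nu * math.pi))
    # log1p(r^2 / nu) grows logarithmically in the residual,
    # unlike the quadratic penalty of a Gaussian NLL.
    nll = const + (nu + 1.0) / 2.0 * np.log1p(r2 / nu)
    return float(nll.mean())
```

For the same large residual, the loss under a small ν is much lower than under a large (near-Gaussian) ν, which is the robustness property the abstract describes.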
Related papers
- The Fisher-Rao Loss for Learning under Label Noise [9.238700679836855]
We study the Fisher-Rao loss function, which emerges from the Fisher-Rao distance in the statistical manifold of discrete distributions.
We derive an upper bound for the performance degradation in the presence of label noise, and analyse the learning speed of this loss.
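For context, the Fisher-Rao distance between two discrete distributions on the probability simplex has the closed form d(p, q) = 2·arccos(Σᵢ √(pᵢqᵢ)). The sketch below shows the distance itself, not the paper's exact loss construction:

```python
import math

def fisher_rao_distance(p, q):
    """Fisher-Rao geodesic distance between two discrete
    distributions p and q: 2 * arccos of the Bhattacharyya
    coefficient sum_i sqrt(p_i * q_i)."""
    bc = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))
    # Clamp against floating-point overshoot above 1.0.
    return 2.0 * math.acos(min(1.0, bc))
```

Identical distributions are at distance 0, and distributions with disjoint support are at the maximal distance π.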
arXiv Detail & Related papers (2022-10-28T20:50:10Z)
- What can we learn about a generated image corrupting its latent representation? [57.1841740328509]
We investigate the hypothesis that we can predict image quality based on its latent representation in the GANs bottleneck.
We achieve this by corrupting the latent representation with noise and generating multiple outputs.
arXiv Detail & Related papers (2022-10-12T14:40:32Z)
- Weakly Supervised Medical Image Segmentation With Soft Labels and Noise Robust Loss [0.16490701092527607]
Training deep learning models commonly requires large datasets with expert-labeled annotations.
Image-based medical diagnosis tools using deep learning models trained with incorrect segmentation labels can lead to false diagnoses and treatment suggestions.
The aim of this paper was to develop and evaluate a method to generate probabilistic labels based on multi-rater annotations and anatomical knowledge of the lesion features in MRI.
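One simple way to turn multi-rater annotations into probabilistic labels is to average the raters' binary masks; this is only a hypothetical sketch of the general idea, not the paper's method, which also incorporates anatomical knowledge of the lesion features:

```python
import numpy as np

def soft_labels(rater_masks):
    """Fuse multiple raters' binary masks into a probabilistic
    (soft) label map: each pixel gets the fraction of raters
    who marked it as foreground."""
    return np.mean(np.asarray(rater_masks, float), axis=0)
```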
arXiv Detail & Related papers (2022-09-16T21:07:59Z)
- On the Optimal Combination of Cross-Entropy and Soft Dice Losses for Lesion Segmentation with Out-of-Distribution Robustness [15.08731999725517]
We study the impact of different loss functions on lesion segmentation from medical images.
We analyze the impact of the minimization of different loss functions on in-distribution performance.
Our findings are surprising: CE-Dice loss combinations that excel in segmenting in-distribution images perform poorly on out-of-distribution data.
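A common form of such a combination is a convex mix of pixel-wise binary cross-entropy and soft Dice loss. The sketch below assumes binary segmentation with sigmoid probabilities; the mixing weight `alpha` and the function name are illustrative choices, not taken from the paper:

```python
import numpy as np

def ce_dice_loss(probs, target, alpha=0.5, eps=1e-7):
    """alpha * binary cross-entropy + (1 - alpha) * soft Dice loss.
    probs are per-pixel foreground probabilities in [0, 1]."""
    probs = np.clip(np.asarray(probs, float), eps, 1.0 - eps)
    target = np.asarray(target, float)
    # Pixel-wise binary cross-entropy.
    ce = -(target * np.log(probs) + (1 - target) * np.log(1 - probs)).mean()
    # Soft Dice: differentiable overlap measure on soft predictions.
    inter = (probs * target).sum()
    dice = 1.0 - (2.0 * inter + eps) / (probs.sum() + target.sum() + eps)
    return alpha * ce + (1.0 - alpha) * dice
```

A confident correct prediction drives both terms toward zero, while an uncertain or wrong one is penalized by both.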
arXiv Detail & Related papers (2022-09-13T15:32:32Z)
- Data-Efficient Vision Transformers for Multi-Label Disease Classification on Chest Radiographs [55.78588835407174]
Vision Transformers (ViTs) have not been applied to multi-label disease classification on chest radiographs despite their high classification performance on generic images.
ViTs do not rely on convolutions but on patch-based self-attention and in contrast to CNNs, no prior knowledge of local connectivity is present.
Our results show that while the performance between ViTs and CNNs is on par with a small benefit for ViTs, DeiTs outperform the former if a reasonably large data set is available for training.
arXiv Detail & Related papers (2022-08-17T09:07:45Z)
- Treatment Learning Causal Transformer for Noisy Image Classification [62.639851972495094]
In this work, we incorporate this binary information of "existence of noise" as treatment into image classification tasks to improve prediction accuracy.
Motivated by causal variational inference, we propose a transformer-based architecture that uses a latent generative model to estimate robust feature representations for noisy image classification.
We also create new noisy image datasets incorporating a wide range of noise factors for performance benchmarking.
arXiv Detail & Related papers (2022-03-29T13:07:53Z)
- FedMed-ATL: Misaligned Unpaired Brain Image Synthesis via Affine Transform Loss [58.58979566599889]
We propose a novel self-supervised learning method (FedMed) for brain image synthesis.
An affine transform loss (ATL) was formulated to make use of severely distorted images without violating privacy legislation.
The proposed method demonstrates advanced performance in the quality of synthesized results under a severely misaligned and unpaired data setting.
arXiv Detail & Related papers (2022-01-29T13:45:39Z)
- Learning from Noisy Labels via Dynamic Loss Thresholding [69.61904305229446]
We propose a novel method named Dynamic Loss Thresholding (DLT)
During the training process, DLT records the loss value of each sample and calculates dynamic loss thresholds.
Experiments on CIFAR-10/100 and Clothing1M demonstrate substantial improvements over recent state-of-the-art methods.
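The core idea of such sample selection by loss value can be sketched in a few lines: treat samples whose current loss falls below a dynamically chosen threshold as clean and keep only those for the update. The quantile-based threshold below is a hypothetical simplification, not DLT's actual thresholding rule:

```python
import numpy as np

def select_clean(losses, quantile=0.7):
    """Return indices of samples deemed clean: those whose
    per-sample loss is at or below a quantile of the current
    loss distribution (small-loss samples are likely to have
    correct labels)."""
    losses = np.asarray(losses, float)
    threshold = np.quantile(losses, quantile)
    return np.flatnonzero(losses <= threshold)
```

In practice the threshold would be recomputed every epoch from the recorded losses, so it adapts as training progresses.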
arXiv Detail & Related papers (2021-04-01T07:59:03Z)
- Matthews Correlation Coefficient Loss for Deep Convolutional Networks: Application to Skin Lesion Segmentation [19.673662082910766]
Deep learning-based models are susceptible to class imbalance in the data.
We propose a novel metric-based loss function using the Matthews correlation coefficient, a metric that has been shown to be efficient in scenarios with skewed class distributions.
We show that models trained with the proposed loss function outperform those trained using Dice loss by 11.25%, 4.87%, and 0.76%, respectively, in the mean Jaccard index.
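A differentiable MCC-based loss can be formed by computing soft confusion-matrix counts from continuous predictions and taking 1 − MCC. This is a generic sketch of that construction, not necessarily the paper's exact formulation:

```python
import numpy as np

def mcc_loss(probs, target, eps=1e-7):
    """1 - Matthews correlation coefficient on soft predictions.
    Soft TP/TN/FP/FN keep the loss differentiable; MCC itself
    is robust to skewed class distributions."""
    p = np.asarray(probs, float)
    t = np.asarray(target, float)
    tp = (p * t).sum()
    tn = ((1 - p) * (1 - t)).sum()
    fp = (p * (1 - t)).sum()
    fn = ((1 - p) * t).sum()
    denom = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) + eps
    return 1.0 - (tp * tn - fp * fn) / denom
```

The loss ranges from 0 (perfect agreement) to 2 (perfect anti-correlation), and unlike plain accuracy it does not reward always predicting the majority class.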
arXiv Detail & Related papers (2020-10-26T09:50:25Z)
- Brain Metastasis Segmentation Network Trained with Robustness to Annotations with Multiple False Negatives [1.9031935295821718]
We develop a lopsided loss function that assumes the existence of a nontrivial false negative rate in the target annotations.
Even with a simulated false negative rate as high as 50%, applying our loss function to randomly censored data preserves maximum sensitivity at 97%.
Our work will enable more efficient scaling of the image labeling process.
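One simple way to build a loss that tolerates false negatives in the annotations is to down-weight the penalty on pixels labelled as background, since some of them may actually be lesions. The asymmetric cross-entropy below is a hypothetical illustration of that "lopsided" idea; the weight `neg_weight` and the formulation are assumptions, not the authors' loss:

```python
import numpy as np

def lopsided_bce(probs, target, neg_weight=0.3, eps=1e-7):
    """Asymmetric binary cross-entropy: the penalty for predicting
    foreground on a background-labelled pixel is scaled down by
    neg_weight, reflecting an assumed false negative rate in the
    target annotations."""
    p = np.clip(np.asarray(probs, float), eps, 1 - eps)
    t = np.asarray(target, float)
    pos = -(t * np.log(p))                       # missed foreground: full penalty
    neg = -((1 - t) * np.log(1 - p)) * neg_weight  # labelled background: reduced penalty
    return (pos + neg).mean()
```

With `neg_weight < 1`, confidently predicting foreground on a (possibly mislabelled) background pixel costs less than missing a labelled lesion, which preserves sensitivity.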
arXiv Detail & Related papers (2020-01-26T19:23:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.