Brain Metastasis Segmentation Network Trained with Robustness to
Annotations with Multiple False Negatives
- URL: http://arxiv.org/abs/2001.09501v1
- Date: Sun, 26 Jan 2020 19:23:07 GMT
- Authors: Darvin Yi, Endre Grøvik, Michael Iv, Elizabeth Tong, Greg
Zaharchuk, Daniel Rubin
- Abstract summary: We develop a lopsided loss function that assumes the existence of a nontrivial false negative rate in the target annotations.
Even with a simulated false negative rate as high as 50%, applying our loss function to randomly censored data preserves maximum sensitivity at 97% of the uncensored baseline.
Our work will enable more efficient scaling of the image labeling process.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning has proven to be an essential tool for medical image analysis.
However, the need for accurately labeled input data, often requiring time- and
labor-intensive annotation by experts, is a major limitation to the use of deep
learning. One solution to this challenge is to allow for use of coarse or noisy
labels, which could permit more efficient and scalable labeling of images. In
this work, we develop a lopsided loss function based on entropy regularization
that assumes the existence of a nontrivial false negative rate in the target
annotations. Starting with a carefully annotated brain metastasis lesion
dataset, we simulate data with false negatives by (1) randomly censoring the
annotated lesions and (2) systematically censoring the smallest lesions. The
latter better models true physician error because smaller lesions are harder to
notice than the larger ones. Even with a simulated false negative rate as high
as 50%, applying our loss function to randomly censored data preserves maximum
sensitivity at 97% of the baseline with uncensored training data, compared to
just 10% for a standard loss function. For the size-based censorship,
performance is restored from 17% with the current standard to 88% with our
lopsided bootstrap loss. Our work will enable more efficient scaling of the
image labeling process, in parallel with other approaches on creating more
efficient user interfaces and tools for annotation.
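The paper's exact formulation is not reproduced above, but the core idea of a lopsided bootstrap loss can be sketched as a binary cross-entropy whose target is softened toward the model's own prediction only where the annotation is negative, so confidently predicted lesions that an annotator missed are penalized less. The function name, the `beta` parameter, and the numpy formulation below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def lopsided_bootstrap_loss(p, y, beta=0.8, eps=1e-7):
    """Binary cross-entropy with a soft-bootstrap target applied only
    to negative-labeled pixels.

    p    : predicted foreground probabilities in (0, 1)
    y    : binary annotations, possibly containing false negatives
    beta : trust placed in a negative label; beta=1.0 recovers plain BCE
    """
    p = np.clip(p, eps, 1.0 - eps)
    # Positive labels are trusted as-is; negative labels are softened
    # toward the prediction, which is the "lopsided" part: only the
    # direction in which annotation errors are assumed (false negatives)
    # gets the relaxed target.
    t = np.where(y == 1, 1.0, (1.0 - beta) * p)
    return -(t * np.log(p) + (1.0 - t) * np.log(1.0 - p)).mean()
```

With `beta = 1.0` this reduces to standard cross-entropy; lowering `beta` reduces the penalty on pixels the model marks as lesion but the (possibly censored) annotation marks as background.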
Related papers
- Weakly Semi-supervised Tool Detection in Minimally Invasive Surgery Videos
Surgical tool detection is essential for analyzing and evaluating minimally invasive surgery videos.
Large image datasets with instance-level labels are often limited because of the burden of annotation.
In this work, we propose to strike a balance between the extremely costly annotation burden and detection performance.
arXiv Detail & Related papers (2024-01-05T13:05:02Z)
- Robust T-Loss for Medical Image Segmentation
This paper presents a new robust loss function, the T-Loss, for medical image segmentation.
The proposed loss is based on the negative log-likelihood of the Student-t distribution and can effectively handle outliers in the data.
Our experiments show that the T-Loss outperforms traditional loss functions in terms of Dice scores on two public medical datasets.
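The T-Loss paper learns its tolerance to outliers jointly with the network; a minimal fixed-parameter sketch of a Student-t negative log-likelihood on prediction residuals conveys the idea (the function name and the fixed `nu` and `sigma` values are assumptions here, not the paper's formulation):

```python
import numpy as np
from math import lgamma, pi

def t_loss(pred, target, nu=1.0, sigma=1.0):
    """Mean negative log-likelihood of a Student-t distribution on the
    residuals pred - target; the heavy tails grow only logarithmically,
    so outlier labels pull on the loss far less than under a Gaussian."""
    r = pred - target
    # Log normalization constant of the Student-t density.
    const = (lgamma((nu + 1) / 2) - lgamma(nu / 2)
             - 0.5 * np.log(nu * pi * sigma**2))
    nll = -const + (nu + 1) / 2 * np.log1p(r**2 / (nu * sigma**2))
    return nll.mean()
```

With `nu = 1` this is the Cauchy negative log-likelihood; larger `nu` moves it toward a squared-error-like Gaussian penalty.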
arXiv Detail & Related papers (2023-06-01T14:49:40Z)
- An End-to-End Framework For Universal Lesion Detection With Missing Annotations
We present a novel end-to-end framework for mining unlabeled lesions while simultaneously training the detector.
Our framework follows the teacher-student paradigm. High-confidence predictions are combined with partially-labeled ground truth for training the student model.
arXiv Detail & Related papers (2023-03-27T09:16:10Z)
- Weakly Supervised Medical Image Segmentation With Soft Labels and Noise Robust Loss
Training deep learning models commonly requires large datasets with expert-labeled annotations.
Image-based medical diagnosis tools using deep learning models trained with incorrect segmentation labels can lead to false diagnoses and treatment suggestions.
The aim of this paper was to develop and evaluate a method to generate probabilistic labels based on multi-rater annotations and anatomical knowledge of the lesion features in MRI.
arXiv Detail & Related papers (2022-09-16T21:07:59Z)
- Seamless Iterative Semi-Supervised Correction of Imperfect Labels in Microscopy Images
In-vitro tests are an alternative to animal testing for assessing the toxicity of medical devices.
Human fatigue contributes to annotation errors, which makes the use of deep learning appealing.
We propose Seamless Iterative Semi-Supervised correction of Imperfect labels (SISSI).
Our method successfully provides an adaptive early learning correction technique for object detection.
arXiv Detail & Related papers (2022-08-05T18:52:20Z)
- Self-Supervised Learning as a Means To Reduce the Need for Labeled Data in Medical Image Analysis
We use a dataset of chest X-ray images with bounding box labels for 13 different classes of anomalies.
We show that it is possible to achieve similar performance to a fully supervised model in terms of mean average precision and accuracy with only 60% of the labeled data.
arXiv Detail & Related papers (2022-06-01T09:20:30Z)
- Debiased Pseudo Labeling in Self-Training
Deep neural networks achieve remarkable performances on a wide range of tasks with the aid of large-scale labeled datasets.
To mitigate the requirement for labeled data, self-training is widely used in both academia and industry by pseudo labeling on readily-available unlabeled data.
We propose Debiased, in which the generation and utilization of pseudo labels are decoupled by two independent heads.
arXiv Detail & Related papers (2022-02-15T02:14:33Z)
- Towards Good Practices for Efficiently Annotating Large-Scale Image Classification Datasets
We investigate efficient annotation strategies for collecting multi-class classification labels for a large collection of images.
We propose modifications and best practices aimed at minimizing human labeling effort.
Simulated experiments on a 125k-image subset of ImageNet100 show that it can be annotated to 80% top-1 accuracy with 0.35 annotations per image on average.
arXiv Detail & Related papers (2021-04-26T16:29:32Z)
- MAGNeto: An Efficient Deep Learning Method for the Extractive Tags Summarization Problem
We study a new image annotation task named Extractive Tags Summarization (ETS).
The goal is to extract the important tags from the context of an image and its corresponding tags.
Our proposed solution consists of different widely used blocks like convolutional and self-attention layers.
arXiv Detail & Related papers (2020-11-09T11:34:21Z)
- Don't Wait, Just Weight: Improving Unsupervised Representations by Learning Goal-Driven Instance Weights
Self-supervised learning techniques can boost performance by learning useful representations from unlabelled data.
We show that by learning Bayesian instance weights for the unlabelled data, we can improve the downstream classification accuracy.
Our method, BetaDataWeighter, is evaluated using the popular self-supervised rotation prediction task on STL-10 and Visual Decathlon.
arXiv Detail & Related papers (2020-06-22T15:59:32Z)