DALSA: Domain Adaptation for Supervised Learning From Sparsely Annotated
MR Images
- URL: http://arxiv.org/abs/2403.07434v1
- Date: Tue, 12 Mar 2024 09:17:21 GMT
- Title: DALSA: Domain Adaptation for Supervised Learning From Sparsely Annotated
MR Images
- Authors: Michael Götz, Christian Weber, Franciszek Binczyk, Joanna Polanska,
Rafal Tarnawski, Barbara Bobek-Billewicz, Ullrich Köthe, Jens Kleesiek,
Bram Stieltjes, Klaus H. Maier-Hein
- Abstract summary: We propose a new method that employs transfer learning techniques to correct sampling selection errors introduced by sparse annotations during supervised learning for automated tumor segmentation.
The proposed method derives high-quality classifiers for the different tissue classes from sparse and unambiguous annotations.
Compared to training on fully labeled data, we reduced the time for labeling and training by factors greater than 70 and 180, respectively, without sacrificing accuracy.
- Score: 2.352695945685781
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We propose a new method that employs transfer learning techniques to
effectively correct sampling selection errors introduced by sparse annotations
during supervised learning for automated tumor segmentation. The practicality
of current learning-based automated tissue classification approaches is
severely impeded by their dependency on manually segmented training databases
that need to be recreated for each scenario of application, site, or
acquisition setup. The comprehensive annotation of reference datasets can be
highly labor-intensive, complex, and error-prone. The proposed method derives
high-quality classifiers for the different tissue classes from sparse and
unambiguous annotations and employs domain adaptation techniques for
effectively correcting sampling selection errors introduced by the sparse
sampling. The new approach is validated on labeled, multi-modal MR images of 19
patients with malignant gliomas and by comparative analysis on the BraTS 2013
challenge data sets. Compared to training on fully labeled data, we reduced the
time for labeling and training by factors greater than 70 and 180, respectively,
without sacrificing accuracy. This dramatically eases the establishment and
constant extension of large annotated databases in various scenarios and
imaging setups and thus represents an important step towards practical
applicability of learning-based approaches in tissue classification.
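The core technical idea above, correcting the sampling selection bias that sparse annotations introduce, is commonly realized with importance weighting for covariate shift. The following is a minimal sketch of that general technique under simple assumptions, not the authors' DALSA implementation; the voxel features, the domain classifier, and the random-forest tissue classifier are hypothetical placeholders.

```python
# Minimal sketch of importance weighting for sampling selection bias.
# NOT the authors' exact DALSA implementation; all data and model choices
# below are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical voxel features: sparsely annotated voxels (biased sample)
# and unlabeled voxels drawn from the full image (target distribution).
sparse_X = rng.normal(loc=0.5, scale=1.0, size=(500, 4))   # annotated voxels
sparse_y = rng.integers(0, 2, size=500)                     # tissue labels
dense_X = rng.normal(loc=0.0, scale=1.0, size=(5000, 4))    # all voxels

# 1) Train a domain classifier to distinguish sparse (1) from dense (0) voxels.
domain_X = np.vstack([sparse_X, dense_X])
domain_y = np.concatenate([np.ones(len(sparse_X)), np.zeros(len(dense_X))])
domain_clf = LogisticRegression(max_iter=1000).fit(domain_X, domain_y)

# 2) Turn its probabilities into importance weights ~ p_dense(x) / p_sparse(x).
p_sparse = domain_clf.predict_proba(sparse_X)[:, 1]
weights = (1.0 - p_sparse) / np.clip(p_sparse, 1e-6, None)
weights *= len(weights) / weights.sum()  # normalize to mean 1

# 3) Train the tissue classifier on the sparse annotations, reweighted so the
#    effective training distribution approximates the full-image distribution.
tissue_clf = RandomForestClassifier(n_estimators=100, random_state=0)
tissue_clf.fit(sparse_X, sparse_y, sample_weight=weights)
```

The weights up-weight annotated voxels that are rare under the sparse sampling but common in the full image, which is the same bias-correction goal the abstract describes.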
Related papers
- Enhancing Image Classification in Small and Unbalanced Datasets through Synthetic Data Augmentation [0.0]
This paper introduces a novel synthetic augmentation strategy that uses class-specific Variational Autoencoders (VAEs) and their latent spaces to improve discrimination capabilities.
By generating realistic, varied synthetic data that fills feature space gaps, we address issues of data scarcity and class imbalance.
The proposed strategy was tested on a small dataset of 321 images created to train and validate an automatic method for assessing the quality of cleanliness of esophagogastroduodenoscopy images.
arXiv Detail & Related papers (2024-09-16T13:47:52Z) - Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious to collect in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z) - Domain Adaptive Multiple Instance Learning for Instance-level Prediction
of Pathological Images [45.132775668689604]
We propose a new task setting to improve classification performance on the target dataset without increasing annotation costs.
In order to combine the supervisory information of both methods effectively, we propose a method to create pseudo-labels with high confidence (a generic sketch of confidence-thresholded pseudo-labeling appears after this list).
arXiv Detail & Related papers (2023-04-07T08:31:06Z) - Clinically Acceptable Segmentation of Organs at Risk in Cervical Cancer
Radiation Treatment from Clinically Available Annotations [0.0]
We present an approach to learn a deep learning model for the automatic segmentation of Organs at Risk (OARs) in cervical cancer radiation treatment.
We employ simple heuristics for automatic data cleaning to minimize data inhomogeneity, label noise, and missing annotations.
We develop a semi-supervised learning approach utilizing a teacher-student setup, annotation imputation, and uncertainty-guided training to learn in presence of missing annotations.
arXiv Detail & Related papers (2023-02-21T13:24:40Z) - PCA: Semi-supervised Segmentation with Patch Confidence Adversarial
Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to get enough gradient feedback, which aids the discriminator in converging to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z) - Leveraging Ensembles and Self-Supervised Learning for Fully-Unsupervised
Person Re-Identification and Text Authorship Attribution [77.85461690214551]
Learning from fully-unlabeled data is challenging in Multimedia Forensics problems, such as Person Re-Identification and Text Authorship Attribution.
Recent self-supervised learning methods have proven effective when dealing with fully-unlabeled data in cases where the underlying classes have significant semantic differences.
We propose a strategy to tackle Person Re-Identification and Text Authorship Attribution by enabling learning from unlabeled data even when samples from different classes are not prominently diverse.
arXiv Detail & Related papers (2022-02-07T13:08:11Z) - Self-supervised driven consistency training for annotation efficient
histopathology image analysis [13.005873872821066]
Training a neural network with a large labeled dataset is still a dominant paradigm in computational histopathology.
We propose a self-supervised pretext task that harnesses the underlying multi-resolution contextual cues in histology whole-slide images to learn a powerful supervisory signal for unsupervised representation learning.
We also propose a new teacher-student semi-supervised consistency paradigm that learns to effectively transfer the pretrained representations to downstream tasks based on prediction consistency with the task-specific unlabeled data.
arXiv Detail & Related papers (2021-02-07T19:46:21Z) - Minimax Active Learning [61.729667575374606]
Active learning aims to develop label-efficient algorithms by querying the most representative samples to be labeled by a human annotator.
Current active learning techniques either rely on model uncertainty to select the most uncertain samples or use clustering or reconstruction to choose the most diverse set of unlabeled examples.
We develop a semi-supervised minimax entropy-based active learning algorithm that leverages both uncertainty and diversity in an adversarial manner.
arXiv Detail & Related papers (2020-12-18T19:03:40Z) - Selective Pseudo-Labeling with Reinforcement Learning for
Semi-Supervised Domain Adaptation [116.48885692054724]
We propose a reinforcement learning based selective pseudo-labeling method for semi-supervised domain adaptation.
We develop a deep Q-learning model to select both accurate and representative pseudo-labeled instances.
Our proposed method is evaluated on several benchmark datasets for SSDA, and demonstrates superior performance to all the comparison methods.
arXiv Detail & Related papers (2020-12-07T03:37:38Z) - Deep Semi-supervised Knowledge Distillation for Overlapping Cervical
Cell Instance Segmentation [54.49894381464853]
We propose to leverage both labeled and unlabeled data for instance segmentation with improved accuracy by knowledge distillation.
We propose a novel Mask-guided Mean Teacher framework with Perturbation-sensitive Sample Mining.
Experiments show that the proposed method improves the performance significantly compared with the supervised method learned from labeled data only.
arXiv Detail & Related papers (2020-07-21T13:27:09Z)
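Several of the related papers above (the domain adaptive MIL, selective pseudo-labeling, teacher-student consistency, and mean-teacher entries) share two recurring ingredients: confidence-thresholded pseudo-labels for unlabeled data and an exponential-moving-average (EMA) teacher. The sketch below illustrates only these generic ingredients on assumed toy data and models; it is not the implementation of any listed paper.

```python
# Generic sketch of confidence-thresholded pseudo-labeling with an EMA
# ("mean teacher") model. NOT the implementation of any paper listed above;
# models, threshold, and data are hypothetical placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

student = nn.Linear(16, 3)                      # toy classifier: 16 features, 3 classes
teacher = nn.Linear(16, 3)
teacher.load_state_dict(student.state_dict())   # teacher starts as a copy
for p in teacher.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.SGD(student.parameters(), lr=0.1)
ema_decay, conf_threshold = 0.99, 0.9

labeled_x, labeled_y = torch.randn(32, 16), torch.randint(0, 3, (32,))
unlabeled_x = torch.randn(64, 16)

for step in range(100):
    # Supervised loss on the (sparse) labeled batch.
    sup_loss = F.cross_entropy(student(labeled_x), labeled_y)

    # Teacher proposes pseudo-labels; keep only the confident ones.
    with torch.no_grad():
        probs = F.softmax(teacher(unlabeled_x), dim=1)
        conf, pseudo_y = probs.max(dim=1)
        mask = conf > conf_threshold

    unsup_loss = (
        F.cross_entropy(student(unlabeled_x[mask]), pseudo_y[mask])
        if mask.any() else torch.tensor(0.0)
    )

    optimizer.zero_grad()
    (sup_loss + unsup_loss).backward()
    optimizer.step()

    # EMA update: teacher weights follow the student slowly.
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(ema_decay).add_(s_p, alpha=1.0 - ema_decay)
```

Keeping only pseudo-labels above the confidence threshold and letting the teacher lag behind the student are the standard ways such methods limit error reinforcement from noisy pseudo-labels.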