Detection of Under-represented Samples Using Dynamic Batch Training for Brain Tumor Segmentation from MR Images
- URL: http://arxiv.org/abs/2408.12013v1
- Date: Wed, 21 Aug 2024 21:51:47 GMT
- Title: Detection of Under-represented Samples Using Dynamic Batch Training for Brain Tumor Segmentation from MR Images
- Authors: Subin Sahayam, John Michael Sujay Zakkam, Yoga Sri Varshan V, Umarani Jayaraman
- Abstract summary: Segmenting brain tumors in magnetic resonance (MR) images is difficult, time-consuming, and prone to human error.
These challenges can be resolved by developing automatic brain tumor segmentation methods from MR images.
Various deep-learning models based on the U-Net have been proposed for the task.
These deep-learning models are trained on a dataset of tumor images and then used to predict segmentation masks.
- Score: 0.8437187555622164
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Segmenting brain tumors in magnetic resonance (MR) images is difficult, time-consuming, and prone to human error. These challenges can be addressed by developing automatic brain tumor segmentation methods for MR images. Various deep-learning models based on the U-Net have been proposed for the task. These models are trained on a dataset of tumor images and then used to predict segmentation masks. Mini-batch training is the standard way to train such models, but it has a significant drawback: if the training dataset contains under-represented samples, or samples with complex latent representations, the model may not generalize well to them. Learning becomes skewed, with the model fitting the majority representations while underestimating the under-represented samples. The proposed dynamic batch training method addresses under-represented data points, data points with complex latent representations, and within-class imbalance, where some samples are harder to learn than others. Under standard training, poor performance on such samples is identified only after training completes, wasting computational resources; likewise, revisiting easy samples at every epoch is an inefficient use of computation. To overcome these issues, the proposed method identifies hard samples in the BraTS2020 dataset and trains them for more iterations than easier samples. Tracking which samples are trained multiple times, in turn, provides a way to identify the hard samples in BraTS2020. Comparisons of the proposed training approach with the U-Net and other models from the literature highlight its capabilities.
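The abstract describes the method only at a high level. Below is a minimal PyTorch-style sketch of the general idea, in which hard samples are flagged by their per-sample loss and revisited for extra iterations; the function name, the loss threshold `tau`, and the `extra_iters` budget are illustrative assumptions, not the authors' exact procedure.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, Subset

def dynamic_batch_train(model, dataset, optimizer,
                        epochs=10, batch_size=4, tau=0.5, extra_iters=3):
    """Sketch of dynamic batch training: extra iterations for hard samples.

    Assumes `dataset` yields (index, image, mask) triples. `tau` (per-sample
    loss threshold) and `extra_iters` are hypothetical knobs; the paper's
    exact hard-sample criterion and schedule may differ.
    """
    # reduction="none" keeps a per-voxel loss so we can reduce per sample.
    loss_fn = nn.BCEWithLogitsLoss(reduction="none")
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)

    for epoch in range(epochs):
        hard_ids = set()  # indices flagged as hard in this epoch
        for ids, images, masks in loader:
            optimizer.zero_grad()
            logits = model(images)
            # Per-sample loss: average over every dimension except the batch.
            per_sample = loss_fn(logits, masks).flatten(1).mean(dim=1)
            per_sample.mean().backward()
            optimizer.step()
            for i, l in zip(ids.tolist(), per_sample.tolist()):
                if l > tau:  # under-performing sample -> revisit it
                    hard_ids.add(i)

        # Extra iterations on the hard samples only; easy samples are skipped.
        if hard_ids:
            hard_loader = DataLoader(Subset(dataset, sorted(hard_ids)),
                                     batch_size=batch_size, shuffle=True)
            for _ in range(extra_iters):
                for ids, images, masks in hard_loader:
                    optimizer.zero_grad()
                    loss_fn(model(images), masks).mean().backward()
                    optimizer.step()
    return hard_ids  # samples trained multiple times in the final epoch
```

Returning the set of repeatedly trained indices mirrors the abstract's observation that the samples trained multiple times expose the hard cases in the dataset.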
Related papers
- Probing Perfection: The Relentless Art of Meddling for Pulmonary Airway Segmentation from HRCT via a Human-AI Collaboration Based Active Learning Method [13.384578466263566]
In pulmonary tracheal segmentation, the scarcity of annotated data is a prevalent issue.
Deep Learning (DL) methods face challenges: the opacity of 'black box' models and the need for performance enhancement.
We address these challenges by combining diverse query strategies with various DL models.
arXiv Detail & Related papers (2024-07-03T23:27:53Z)
- Semi-Supervised Learning for hyperspectral images by non parametrically predicting view assignment [25.198550162904713]
Hyperspectral image (HSI) classification is gaining momentum at present because of the rich spectral information inherent in the images.
Recently, to effectively train deep learning models with minimal labelled samples, unlabeled samples have also been leveraged in self-supervised and semi-supervised settings.
In this work, we leverage the idea of semi-supervised learning to assist the discriminative self-supervised pretraining of the models.
arXiv Detail & Related papers (2023-06-19T14:13:56Z)
- A Data-Centric Approach for Improving Adversarial Training Through the Lens of Out-of-Distribution Detection [0.4893345190925178]
We propose detecting and removing hard samples directly from the training procedure rather than applying complicated algorithms to mitigate their effects.
Our results on SVHN and CIFAR-10 datasets show the effectiveness of this method in improving the adversarial training without adding too much computational cost.
arXiv Detail & Related papers (2023-01-25T08:13:50Z)
- Temporal Output Discrepancy for Loss Estimation-based Active Learning [65.93767110342502]
We present a novel deep active learning approach that queries the oracle for annotation when an unlabeled sample is believed to incur a high loss.
Our approach achieves superior performance to state-of-the-art active learning methods on image classification and semantic segmentation tasks.
arXiv Detail & Related papers (2022-12-20T19:29:37Z)
- Reducing Training Sample Memorization in GANs by Training with Memorization Rejection [80.0916819303573]
We propose memorization rejection, a training scheme that rejects generated samples that are near-duplicates of training samples during training (a minimal sketch of this idea appears after this list).
Our scheme is simple, generic and can be directly applied to any GAN architecture.
arXiv Detail & Related papers (2022-10-21T20:17:50Z)
- DiscrimLoss: A Universal Loss for Hard Samples and Incorrect Samples Discrimination [28.599571524763785]
Given data with label noise (i.e., incorrect data), deep neural networks gradually memorize the label noise, impairing model performance.
To relieve this issue, curriculum learning is proposed to improve model performance and generalization by ordering training samples in a meaningful sequence.
arXiv Detail & Related papers (2022-08-21T13:38:55Z)
- BatchFormer: Learning to Explore Sample Relationships for Robust Representation Learning [93.38239238988719]
We propose to enable deep neural networks with the ability to learn the sample relationships from each mini-batch.
BatchFormer is applied along the batch dimension of each mini-batch to implicitly explore sample relationships during training (a minimal sketch of this idea appears after this list).
We perform extensive experiments on over ten datasets and the proposed method achieves significant improvements on different data scarcity applications.
arXiv Detail & Related papers (2022-03-03T05:31:33Z)
- Uniform Sampling over Episode Difficulty [55.067544082168624]
We propose a method to approximate episode sampling distributions based on their difficulty.
As the proposed sampling method is algorithm agnostic, we can leverage these insights to improve few-shot learning accuracies.
arXiv Detail & Related papers (2021-08-03T17:58:54Z)
- One for More: Selecting Generalizable Samples for Generalizable ReID Model [92.40951770273972]
This paper proposes a one-for-more training objective that takes the generalization ability of selected samples as a loss function.
Our proposed one-for-more based sampler can be seamlessly integrated into the ReID training framework.
arXiv Detail & Related papers (2020-12-10T06:37:09Z)
- Automatic Recall Machines: Internal Replay, Continual Learning and the Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead, the implicit memory of learned samples within the assessed model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z)
- Efficient Deep Representation Learning by Adaptive Latent Space Sampling [16.320898678521843]
Supervised deep learning requires a large amount of training samples with annotations, which are expensive and time-consuming to obtain.
We propose a novel training framework which adaptively selects informative samples that are fed to the training process.
arXiv Detail & Related papers (2020-03-19T22:17:02Z)
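As a companion to the BatchFormer entry above, here is a minimal PyTorch-style sketch of a transformer encoder applied along the batch dimension, so that each sample's features can attend to the other samples in its mini-batch; the class name and hyperparameters are illustrative assumptions, not the paper's.

```python
import torch
from torch import nn

class BatchFormerSketch(nn.Module):
    """A BatchFormer-style module: a transformer encoder applied along
    the *batch* dimension of feature vectors, letting each sample attend
    to the others in its mini-batch. Hyperparameters are illustrative."""

    def __init__(self, dim=512, num_heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)

    def forward(self, feats):      # feats: (batch, dim)
        if not self.training:      # bypassed at inference, so predictions
            return feats           # do not depend on batch composition
        x = feats.unsqueeze(1)     # (batch, 1, dim): the batch axis becomes
        x = self.encoder(x)        # the sequence axis the encoder attends over
        return x.squeeze(1)
```

In the paper, a shared classifier over pre- and post-module features keeps the training pathway consistent with the module-free inference pathway; this sketch simply bypasses the module at test time.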
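For the memorization-rejection entry above, the following sketch shows one way a near-duplicate filter could work during GAN training; the threshold `delta` and the use of raw-pixel L2 distance are assumptions for illustration, not the paper's exact criterion.

```python
import torch

def reject_memorized(fake_images, train_images, delta=0.1):
    """Drop generated samples whose nearest-neighbour distance to the
    training set falls below `delta`, so near-duplicates of training
    data do not contribute to the generator update."""
    fake_flat = fake_images.flatten(1)    # (B, D) generated batch
    train_flat = train_images.flatten(1)  # (N, D) training (sub)sample
    dists = torch.cdist(fake_flat, train_flat)   # pairwise L2, (B, N)
    nn_dist = dists.min(dim=1).values            # nearest-neighbour distance
    return fake_images[nn_dist >= delta]         # keep only non-duplicates
```

The surviving samples would then feed the usual generator and discriminator losses.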