Dynamic Label Injection for Imbalanced Industrial Defect Segmentation
- URL: http://arxiv.org/abs/2408.10031v1
- Date: Mon, 19 Aug 2024 14:24:46 GMT
- Title: Dynamic Label Injection for Imbalanced Industrial Defect Segmentation
- Authors: Emanuele Caruso, Francesco Pelosin, Alessandro Simoni, Marco Boschetti
- Abstract summary: We propose a Dynamic Label Injection (DLI) algorithm to impose a uniform distribution in the input batch.
Our algorithm computes the current batch defect distribution and re-balances it by transferring defects using a combination of Poisson-based seamless image cloning and cut-paste techniques.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we propose a simple yet effective method to tackle the problem of imbalanced multi-class semantic segmentation in deep learning systems. One of the key properties of a good training set is the balance among its classes. When the input distribution is heavily imbalanced in the number of instances, the learning process may be hindered or difficult to carry out. To this end, we propose a Dynamic Label Injection (DLI) algorithm to impose a uniform distribution in the input batch. Our algorithm computes the current batch defect distribution and re-balances it by transferring defects using a combination of Poisson-based seamless image cloning and cut-paste techniques. A thorough experimental section on the Magnetic Tiles dataset shows that DLI achieves better results than other balancing-loss approaches, also in the challenging weakly-supervised setup. The code is available at https://github.com/covisionlab/dynamic-label-injection.git
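The repository above holds the authors' implementation; as a rough illustration only, the sketch below re-balances a batch by transferring defects onto defect-free images, choosing per transfer between OpenCV's Poisson-based seamless cloning and plain cut-paste. The batch layout, helper names, and selection logic are assumptions, not the paper's code.

```python
import cv2
import numpy as np

def transfer_defect(src_img, src_mask, dst_img, use_poisson=True):
    """Copy the defect marked by the binary mask src_mask (HxW uint8, 0/1)
    from src_img into dst_img (both HxWx3 uint8)."""
    ys, xs = np.nonzero(src_mask)
    if len(xs) == 0:
        return dst_img, np.zeros_like(src_mask)  # nothing to transfer
    center = (int(xs.mean()), int(ys.mean()))    # (x, y) order for OpenCV
    if use_poisson:
        # Poisson-based seamless cloning blends the defect into the target.
        out = cv2.seamlessClone(src_img, dst_img, src_mask * 255, center,
                                cv2.NORMAL_CLONE)
    else:
        # Plain cut-paste: overwrite the destination pixels inside the mask.
        out = dst_img.copy()
        out[src_mask > 0] = src_img[src_mask > 0]
    return out, src_mask.copy()

def rebalance_batch(images, masks, classes, rng=np.random.default_rng(0)):
    """classes[i] is the defect class of image i, or None if defect-free.
    Moves defects from over-represented classes onto defect-free images so
    per-class defect counts in the batch approach a uniform distribution."""
    counts = {c: classes.count(c) for c in set(classes) if c is not None}
    if not counts:
        return images, masks, classes
    target = max(counts.values())
    free = [i for i, c in enumerate(classes) if c is None]
    for c in counts:
        donors = [i for i, cc in enumerate(classes) if cc == c]
        for _ in range(target - counts[c]):
            if not free:
                return images, masks, classes
            i, j = donors[rng.integers(len(donors))], free.pop()
            images[j], masks[j] = transfer_defect(
                images[i], masks[i], images[j],
                use_poisson=bool(rng.integers(2)))
            classes[j] = c
    return images, masks, classes
```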
Related papers
- Flexible Distribution Alignment: Towards Long-tailed Semi-supervised Learning with Proper Calibration [18.376601653387315]
Long-tailed semi-supervised learning (LTSSL) represents a practical scenario for semi-supervised applications.
This problem is often aggravated by discrepancies between labeled and unlabeled class distributions.
We introduce Flexible Distribution Alignment (FlexDA), a novel adaptive logit-adjusted loss framework.
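FlexDA's adaptive alignment is not spelled out above, so the sketch below only shows the logit-adjusted loss family it builds on; the static class prior and the `tau` knob are assumptions, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def logit_adjusted_ce(logits, targets, class_prior, tau=1.0):
    """Shift each class logit by tau * log(prior) so rare classes are not
    drowned out by head classes; class_prior is a (C,) probability vector."""
    adjusted = logits + tau * torch.log(class_prior)
    return F.cross_entropy(adjusted, targets)

# Example: 3 classes with a long-tailed prior.
logits = torch.randn(8, 3)
targets = torch.randint(0, 3, (8,))
prior = torch.tensor([0.7, 0.2, 0.1])
loss = logit_adjusted_ce(logits, targets, prior)
```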
arXiv Detail & Related papers (2023-06-07T17:50:59Z)
- All Points Matter: Entropy-Regularized Distribution Alignment for Weakly-supervised 3D Segmentation [67.30502812804271]
Pseudo-labels are widely employed in weakly-supervised 3D segmentation tasks, where only sparse ground-truth labels are available for learning.
We propose a novel learning strategy to regularize the generated pseudo-labels and effectively narrow the gaps between pseudo-labels and model predictions.
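One plausible reading of such a regularizer, sketched below under the assumption of soft pseudo-labels produced by a learnable branch and a KL alignment term; the paper's actual ERDA objective may differ in form.

```python
import torch
import torch.nn.functional as F

def erda_style_loss(pred_logits, pseudo_logits, lam=0.1):
    """Pull model predictions toward soft pseudo-labels (KL term) while
    penalizing high-entropy pseudo-labels (entropy term). Shapes are
    simplified to (N, C); segmentation would add spatial dimensions."""
    pred = F.log_softmax(pred_logits, dim=1)     # model predictions
    pseudo = F.softmax(pseudo_logits, dim=1)     # learnable soft pseudo-labels
    align = F.kl_div(pred, pseudo, reduction="batchmean")
    entropy = -(pseudo * torch.log(pseudo + 1e-8)).sum(1).mean()
    return align + lam * entropy
```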
arXiv Detail & Related papers (2023-05-25T08:19:31Z)
- An Embarrassingly Simple Baseline for Imbalanced Semi-Supervised Learning [103.65758569417702]
Semi-supervised learning (SSL) has shown great promise in leveraging unlabeled data to improve model performance.
We consider a more realistic and challenging setting called imbalanced SSL, where imbalanced class distributions occur in both labeled and unlabeled data.
We study a simple yet overlooked baseline -- SimiS -- which tackles data imbalance by simply supplementing labeled data with pseudo-labels.
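A minimal sketch of the supplementation step, assuming per-class labeled counts and pseudo-label confidences are available; the `threshold` value and helper names are illustrative, not SimiS's exact recipe.

```python
import numpy as np

def supplement_with_pseudo_labels(labeled_counts, pseudo_labels, confidences,
                                  threshold=0.95):
    """labeled_counts: dict class -> count in the labeled set.
    pseudo_labels/confidences: arrays over the unlabeled pool.
    Returns, per class, indices of unlabeled samples to add so every class
    approaches the count of the most frequent labeled class."""
    target = max(labeled_counts.values())
    selected = {}
    for c, n in labeled_counts.items():
        pool = np.where((pseudo_labels == c) & (confidences >= threshold))[0]
        order = pool[np.argsort(-confidences[pool])]  # most confident first
        selected[c] = order[: target - n]
    return selected
```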
arXiv Detail & Related papers (2022-11-20T21:18:41Z)
- PLM: Partial Label Masking for Imbalanced Multi-label Classification [59.68444804243782]
Neural networks trained on real-world datasets with long-tailed label distributions are biased towards frequent classes and perform poorly on infrequent classes.
We propose a method, Partial Label Masking (PLM), which utilizes the per-class ratio of positive to negative labels during training.
Our method achieves strong performance when compared to existing methods on both multi-label (MultiMNIST and MSCOCO) and single-label (imbalanced CIFAR-10 and CIFAR-100) image classification datasets.
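A minimal sketch of ratio-driven label masking; the masking probability below is an illustrative assumption, and PLM's exact rule differs in detail.

```python
import torch

def partial_label_mask(targets, pos_neg_ratio, ideal_ratio=1.0):
    """targets: (N, C) multi-hot labels; pos_neg_ratio: (C,) observed ratio
    of positives to negatives per class. Positives of over-represented
    classes are dropped from the loss with probability 1 - ideal/observed."""
    p_keep = torch.clamp(ideal_ratio / pos_neg_ratio, max=1.0)    # (C,)
    keep_pos = torch.rand_like(targets.float()) < p_keep          # sampled
    mask = torch.where(targets.bool(), keep_pos, torch.ones_like(keep_pos))
    return mask  # multiply element-wise into a per-label BCE loss
```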
arXiv Detail & Related papers (2021-05-22T18:07:56Z)
- Attentional-Biased Stochastic Gradient Descent [74.49926199036481]
We present a provable method (named ABSGD) for addressing the data imbalance or label noise problem in deep learning.
Our method is a simple modification to momentum SGD where we assign an individual importance weight to each sample in the mini-batch.
ABSGD is flexible enough to combine with other robust losses without any additional cost.
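A minimal sketch of such per-sample weighting, assuming a softmax over per-sample losses; the exact scaling and its analysis in ABSGD may differ.

```python
import torch
import torch.nn.functional as F

def absgd_style_loss(logits, targets, lam=1.0):
    """Weight each sample by a softmax over its own loss: lam > 0 emphasizes
    hard (often minority-class) samples. The weighted mean then feeds an
    ordinary momentum SGD step, so no optimizer change is needed."""
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    weights = torch.softmax(per_sample.detach() / lam, dim=0)  # batch weights
    return (weights * per_sample).sum()
```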
arXiv Detail & Related papers (2020-12-13T03:41:52Z)
- Posterior Re-calibration for Imbalanced Datasets [33.379680556475314]
Neural networks can perform poorly when the training label distribution is heavily imbalanced.
We derive a post-training prior rebalancing technique that can be solved through a KL-divergence based optimization.
Our results on six different datasets and five different architectures show state-of-the-art accuracy.
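A minimal sketch of a generic post-hoc prior shift, standing in for the paper's KL-derived correction; `tau`, `train_prior`, and `target_prior` are assumed inputs.

```python
import torch

def recalibrate_logits(logits, train_prior, target_prior, tau=1.0):
    """Shift logits from the (imbalanced) training prior toward the desired
    target prior, without retraining the network; take argmax/softmax over
    the re-balanced logits at test time."""
    shift = tau * (torch.log(target_prior) - torch.log(train_prior))
    return logits + shift
```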
arXiv Detail & Related papers (2020-10-22T15:57:14Z)
- Distribution Aligning Refinery of Pseudo-label for Imbalanced Semi-supervised Learning [126.31716228319902]
We develop the Distribution Aligning Refinery of Pseudo-label (DARP) algorithm.
We show that DARP is provably and efficiently compatible with state-of-the-art SSL schemes.
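A minimal sketch of distribution-aligned pseudo-label refinement via simple iterative re-scaling, standing in for DARP's actual convex-optimization solver.

```python
import torch

def align_pseudo_labels(probs, target_dist, n_iter=10):
    """probs: (N, C) soft pseudo-labels. Rescale class columns so the average
    pseudo-label matches target_dist, renormalizing rows in between so each
    refined pseudo-label stays a valid distribution."""
    q = probs.clone()
    for _ in range(n_iter):
        q = q * (target_dist / q.mean(dim=0))  # match column marginals
        q = q / q.sum(dim=1, keepdim=True)     # keep rows normalized
    return q
```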
arXiv Detail & Related papers (2020-07-17T09:16:05Z)
- Deep Active Learning for Biased Datasets via Fisher Kernel Self-Supervision [5.352699766206807]
Active learning (AL) aims to minimize labeling efforts for data-demanding deep neural networks (DNNs).
We propose a low-complexity method for feature density matching using a self-supervised Fisher kernel (FK).
Our method outperforms state-of-the-art methods on MNIST, SVHN, and ImageNet classification while requiring only 1/10th of the processing.
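A loose sketch of density matching by greedy mean-embedding tracking (a kernel-herding-style stand-in); the paper's Fisher kernel construction and selection rule are not reproduced here.

```python
import numpy as np

def select_by_density_matching(embeddings, budget):
    """Greedily pick samples whose running mean embedding tracks the pool
    mean, a kernel-herding-style approximation to matching the data density.
    embeddings: (N, D) array of per-sample feature vectors."""
    pool_mean = embeddings.mean(axis=0)
    picked, running = [], np.zeros_like(pool_mean)
    for t in range(1, budget + 1):
        scores = embeddings @ (pool_mean - running / t)
        scores[picked] = -np.inf  # no repeats
        i = int(np.argmax(scores))
        picked.append(i)
        running += embeddings[i]
    return picked
```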
arXiv Detail & Related papers (2020-03-01T03:56:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.