Fine-grained Classification of Solder Joints with α-skew Jensen-Shannon Divergence
- URL: http://arxiv.org/abs/2209.09857v1
- Date: Tue, 20 Sep 2022 17:06:51 GMT
- Title: Fine-grained Classification of Solder Joints with α-skew Jensen-Shannon Divergence
- Authors: Furkan Ulger, Seniha Esen Yuksel, Atila Yilmaz, and Dincer Gokcen
- Abstract summary: We show that solders have low feature diversity, and that the solder joint inspection can be carried out as a fine-grained image classification task.
To improve the fine-grained classification accuracy, penalizing confident model predictions by maximizing entropy was found useful in the literature.
We show that the proposed approach achieves the highest F1-score and competitive accuracy for different models in the fine-grained solder joint classification task.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Solder joint inspection (SJI) is a critical process in the production of
printed circuit boards (PCBs). Detection of solder errors during SJI is quite
challenging as the solder joints have very small sizes and can take various
shapes. In this study, we first show that solders have low feature diversity,
and that the SJI can be carried out as a fine-grained image classification task
which focuses on hard-to-distinguish object classes. To improve the
fine-grained classification accuracy, penalizing confident model predictions by
maximizing entropy has been found useful in the literature. In line with this,
we propose using the α-skew Jensen-Shannon divergence (α-JS) to penalize
confidence in model predictions. We compare the α-JS regularization with both
existing entropy-regularization-based methods and methods based on attention
mechanisms, segmentation techniques, transformer models, and specific loss
functions for fine-grained image
classification tasks. We show that the proposed approach achieves the highest
F1-score and competitive accuracy for different models in the fine-grained
solder joint classification task. Finally, we visualize the activation maps and
show that with entropy regularization, more precise class-discriminative
regions are localized, which are also more resilient to noise. Code will be
made available here upon acceptance.
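The paper's code was not released at the time of this listing, but the core idea can be sketched. Below is a minimal PyTorch sketch, assuming one common definition of the α-skew Jensen-Shannon divergence, JS_α(p, q) = (1−α)·KL(p‖m) + α·KL(q‖m) with m = (1−α)p + αq, applied between the predicted class distribution and the uniform distribution. The function names and the weight `beta` are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def alpha_js_penalty(logits: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """alpha-skew Jensen-Shannon divergence between the predicted class
    distribution p and the uniform distribution u. The divergence is large
    when p is confident, so adding it to the loss penalizes confidence,
    analogous to maximum-entropy regularization."""
    p = F.softmax(logits, dim=-1)                    # predicted distribution
    u = torch.full_like(p, 1.0 / p.size(-1))         # uniform distribution
    m = (1.0 - alpha) * p + alpha * u                # skewed mixture
    log_m = m.clamp_min(1e-12).log()
    kl_pm = (p * (p.clamp_min(1e-12).log() - log_m)).sum(dim=-1)
    kl_um = (u * (u.log() - log_m)).sum(dim=-1)
    return ((1.0 - alpha) * kl_pm + alpha * kl_um).mean()

def total_loss(logits, targets, alpha=0.5, beta=0.1):
    # Cross-entropy plus the confidence penalty; beta is an illustrative
    # weighting hyperparameter, not a value reported in the paper.
    return F.cross_entropy(logits, targets) + beta * alpha_js_penalty(logits, alpha)
```

At α = 0.5 this reduces to the standard symmetric JS divergence; skewing α shifts weight between the two KL terms, which is the degree of freedom the α parameter exposes.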
Related papers
- Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious to collect in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z)
- Rank-DETR for High Quality Object Detection [52.82810762221516]
A highly performant object detector requires accurate ranking for the bounding box predictions.
In this work, we introduce a simple and highly performant DETR-based object detector by proposing a series of rank-oriented designs.
arXiv Detail & Related papers (2023-10-13T04:48:32Z)
- Model Calibration in Dense Classification with Adaptive Label Perturbation [44.62722402349157]
Existing dense binary classification models are prone to being over-confident.
We propose Adaptive Stochastic Label Perturbation (ASLP), which learns a unique label perturbation level for each training image.
ASLP can significantly improve calibration degrees of dense binary classification models on both in-distribution and out-of-distribution data.
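From the summary alone, one plausible reading is a per-image, learnable label-smoothing level for binary targets; the PyTorch sketch below illustrates that reading only and is not the authors' exact algorithm (the class name and the squashing to (0, 0.5) are assumptions).

```python
import torch
import torch.nn.functional as F

class PerImageLabelSmoothing(torch.nn.Module):
    """Hypothetical sketch: one learnable perturbation level per training
    image (indexed by a dataset-wide image id), squashed to (0, 0.5) and
    used to soften 0/1 targets before binary cross-entropy."""

    def __init__(self, num_images: int):
        super().__init__()
        # initialized strongly negative so perturbation starts near zero
        self.raw = torch.nn.Parameter(torch.full((num_images,), -4.0))

    def forward(self, logits, targets, image_ids):
        # targets: float tensor of 0/1 with shape (B, H, W); logits same shape
        eps = 0.5 * torch.sigmoid(self.raw[image_ids])        # per-image level
        eps = eps.view(-1, *([1] * (targets.dim() - 1)))      # broadcast over maps
        soft = targets * (1.0 - eps) + (1.0 - targets) * eps  # perturbed labels
        return F.binary_cross_entropy_with_logits(logits, soft)
```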
arXiv Detail & Related papers (2023-07-25T14:40:11Z)
- Towards Better Certified Segmentation via Diffusion Models [62.21617614504225]
Segmentation models can be vulnerable to adversarial perturbations, which hinders their use in critical decision systems like healthcare or autonomous driving.
Recently, randomized smoothing has been proposed to certify segmentation predictions by adding Gaussian noise to the input to obtain theoretical guarantees.
In this paper, we address the problem of certifying segmentation prediction using a combination of randomized smoothing and diffusion models.
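As a rough illustration of the randomized-smoothing side of this idea (not the paper's certification math), the sketch below perturbs the input with Gaussian noise, denoises each sample with a generic `denoiser` standing in for the diffusion model, and majority-votes the per-pixel predictions; `num_classes` is passed explicitly since the model interface is an assumption.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def smoothed_segmentation(model, denoiser, image, num_classes,
                          sigma=0.25, n_samples=100):
    """Monte-Carlo estimate of a noise-smoothed segmentation. Certification
    (a provable radius per pixel) requires additional statistics on the
    vote counts and is omitted here."""
    votes = torch.zeros(image.size(0), *image.shape[2:], num_classes,
                        device=image.device)
    for _ in range(n_samples):
        noisy = image + sigma * torch.randn_like(image)   # Gaussian perturbation
        labels = model(denoiser(noisy)).argmax(dim=1)     # (B, H, W) hard labels
        votes += F.one_hot(labels, num_classes).float()   # accumulate votes
    return votes.argmax(dim=-1)                           # per-pixel majority
```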
arXiv Detail & Related papers (2023-06-16T16:30:39Z)
- Compound Batch Normalization for Long-tailed Image Classification [77.42829178064807]
We propose a compound batch normalization method based on a Gaussian mixture.
It can model the feature space more comprehensively and reduce the dominance of head classes.
The proposed method outperforms existing methods on long-tailed image classification.
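The summary suggests replacing a single batch-norm statistic with a mixture of Gaussian components. One generic way to realize a mixture of normalization branches is sketched below; the gating network and the fixed K = 3 are assumptions, and the authors' actual mixture estimation may differ.

```python
import torch
import torch.nn.functional as F

class MixtureBatchNorm2d(torch.nn.Module):
    """Generic sketch of a compound/mixture normalization layer: K parallel
    BatchNorm2d branches whose outputs are blended by input-dependent
    mixture weights, so features are not forced through a single Gaussian."""

    def __init__(self, channels: int, k: int = 3):
        super().__init__()
        self.bns = torch.nn.ModuleList(
            [torch.nn.BatchNorm2d(channels) for _ in range(k)])
        self.gate = torch.nn.Linear(channels, k)  # weights from pooled features

    def forward(self, x):
        pooled = x.mean(dim=(2, 3))                          # (B, C) global pool
        w = F.softmax(self.gate(pooled), dim=-1)             # (B, K) mixture weights
        ys = torch.stack([bn(x) for bn in self.bns], dim=1)  # (B, K, C, H, W)
        return (w[:, :, None, None, None] * ys).sum(dim=1)   # blended output
```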
arXiv Detail & Related papers (2022-12-02T07:31:39Z)
- GSC Loss: A Gaussian Score Calibrating Loss for Deep Learning [16.260520216972854]
We propose a general Gaussian Score Calibrating (GSC) loss to calibrate the predicted scores produced by deep neural networks (DNNs).
Extensive experiments on over 10 benchmark datasets demonstrate that the proposed GSC loss can yield consistent and significant performance boosts in a variety of visual tasks.
arXiv Detail & Related papers (2022-03-02T02:52:23Z)
- Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z)
- Shaping Deep Feature Space towards Gaussian Mixture for Visual Classification [74.48695037007306]
We propose a Gaussian mixture (GM) loss function for deep neural networks performing visual classification.
With a classification margin and a likelihood regularization, the GM loss facilitates both high classification performance and accurate modeling of the feature distribution.
The proposed model can be implemented easily and efficiently without using extra trainable parameters.
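A simplified, identity-covariance sketch of a Gaussian-mixture loss follows: the class means act as the classifier (no parameters beyond what a linear head would use), a margin `m` inflates the true-class distance inside the softmax, and `lam` weights the likelihood pull toward the class mean. The exact margin form and covariance handling in the paper may differ.

```python
import torch
import torch.nn.functional as F

class GMLoss(torch.nn.Module):
    """Simplified Gaussian-mixture loss with identity covariances. Class
    means replace a linear classification head; logits are negative halved
    squared distances to each mean."""

    def __init__(self, feat_dim: int, num_classes: int, m: float = 0.1,
                 lam: float = 0.1):
        super().__init__()
        self.means = torch.nn.Parameter(torch.randn(num_classes, feat_dim))
        self.m, self.lam = m, lam

    def forward(self, feats, targets):
        d = torch.cdist(feats, self.means).pow(2)        # (B, K) sq. distances
        onehot = F.one_hot(targets, d.size(1)).float()
        d_margin = d * (1.0 + self.m * onehot)           # margin on true class
        cls = F.cross_entropy(-0.5 * d_margin, targets)  # posterior term
        lkd = 0.5 * (d * onehot).sum(dim=1).mean()       # likelihood term
        return cls + self.lam * lkd
```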
arXiv Detail & Related papers (2020-11-18T03:32:27Z)
- Bayesian Few-Shot Classification with One-vs-Each Pólya-Gamma Augmented Gaussian Processes [7.6146285961466]
Few-shot classification (FSC) is an important step on the path toward human-like machine learning.
We propose a novel combination of Pólya-Gamma augmentation and the one-vs-each softmax approximation that allows us to efficiently marginalize over functions rather than model parameters.
We demonstrate improved accuracy and uncertainty quantification on both standard few-shot classification benchmarks and few-shot domain transfer tasks.
arXiv Detail & Related papers (2020-07-20T19:10:41Z)
- Self-Knowledge Distillation with Progressive Refinement of Targets [1.1470070927586016]
We propose a simple yet effective regularization method named Progressive Self-Knowledge Distillation (PS-KD).
PS-KD progressively distills a model's own knowledge to soften hard targets during training.
We show that PS-KD provides an effect of hard example mining by rescaling gradients according to difficulty in classifying examples.
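The mechanism described here lends itself to a short sketch: soft targets are a convex combination of the one-hot labels and the model's own predictions cached from the previous epoch, with the mixing weight growing over training. The linear schedule and the `alpha_end` value below are illustrative.

```python
import torch
import torch.nn.functional as F

def ps_kd_targets(hard_targets, past_probs, epoch, total_epochs, alpha_end=0.8):
    """Soften one-hot labels with the model's own past predictions; the
    mixing weight alpha_t grows linearly from ~0 to alpha_end."""
    alpha_t = alpha_end * (epoch + 1) / total_epochs
    onehot = F.one_hot(hard_targets, past_probs.size(-1)).float()
    return (1.0 - alpha_t) * onehot + alpha_t * past_probs

def soft_cross_entropy(logits, soft_targets):
    # Standard soft-label cross-entropy used with the targets above.
    return -(soft_targets * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
```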
arXiv Detail & Related papers (2020-06-22T04:06:36Z)