Deep Learning Model Explainability for Inspection Accuracy Improvement
in the Automotive Industry
- URL: http://arxiv.org/abs/2110.03384v1
- Date: Thu, 7 Oct 2021 12:23:00 GMT
- Title: Deep Learning Model Explainability for Inspection Accuracy Improvement
in the Automotive Industry
- Authors: Anass El Houd, Charbel El Hachem, Loic Painvin
- Abstract summary: This work aims to assess and highlight the contribution of deep learning model explainability to improving the accuracy and reliability of welding seam classification.
We implement a novel hybrid method that combines the model's prediction scores with its visual explanation heatmap.
The results show that the hybrid model exceeds our target performance and increases accuracy by at least 18%.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Visual inspection of welding seams is still performed manually in
many companies, so the outcome of the test remains highly subjective and
expensive. At present, the integration of deep learning methods for weld
classification is a research focus in engineering applications. This work
aims to assess and highlight the contribution of deep learning model
explainability to improving the accuracy and reliability of welding seam
classification, two of the metrics that affect production lines and cost in
the automotive industry. For this purpose, we implement a novel hybrid method
that combines the model's prediction scores with its visual explanation
heatmap in order to classify welding seam defects more accurately and improve
both the performance and the reliability of the classifier. The results show
that the hybrid model exceeds our target performance and increases accuracy
by at least 18%, which opens new perspectives for the development of deep
learning explainability and interpretability.
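
Below is a minimal sketch of how a prediction score and an explanation heatmap can be fused into a single defect score, in the spirit of the hybrid method described above. It is not the authors' exact pipeline: the backbone (ResNet-18), the use of Grad-CAM as the explanation method, the choice of target layer, and the weighted-average fusion rule with `alpha` are all illustrative assumptions.

```python
# Hybrid score sketch: blend a CNN's softmax confidence with a statistic
# computed from a Grad-CAM heatmap. All modeling choices here are assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None, num_classes=2)  # OK / defect (hypothetical classes)
model.eval()

feats, grads = {}, {}
layer = model.layer4  # last conv block; assumed Grad-CAM target layer

def fwd_hook(module, inputs, output):
    feats["a"] = output.detach()          # activations of the target layer

def bwd_hook(module, grad_input, grad_output):
    grads["g"] = grad_output[0].detach()  # gradients flowing into the layer

layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

def hybrid_score(image, defect_class=1, alpha=0.5):
    """Fuse the softmax confidence with a heatmap statistic.

    alpha and the heatmap statistic (mean of the normalized CAM) are
    illustrative choices, not the paper's reported formula.
    """
    logits = model(image)
    prob = F.softmax(logits, dim=1)[0, defect_class]

    model.zero_grad()
    logits[0, defect_class].backward()

    # Grad-CAM: weight each channel by its spatially pooled gradient, ReLU,
    # upsample to the input size, then normalize to [0, 1].
    weights = grads["g"].mean(dim=(2, 3), keepdim=True)          # (1, C, 1, 1)
    cam = F.relu((weights * feats["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                        align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

    # Hybrid decision score: prediction confidence blended with how much of
    # the image the explanation activates on.
    return alpha * prob.item() + (1 - alpha) * cam.mean().item()

if __name__ == "__main__":
    dummy = torch.randn(1, 3, 224, 224)   # stand-in for a welding-seam image
    print(f"hybrid defect score: {hybrid_score(dummy):.3f}")
```

The heatmap statistic used here (mean of the normalized CAM) is only a placeholder for whatever evidence measure is extracted from the explanation; the key idea is that the final decision depends on both the score and the explanation.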
Related papers
- Explainability-Driven Leaf Disease Classification Using Adversarial Training and Knowledge Distillation [2.2823100315094624]
This work focuses on plant leaf disease classification and explores three crucial aspects: adversarial training, model explainability, and model compression.
Robustness can come at the price of classification accuracy, with performance reductions of 3%-20% on regular tests and gains of 50%-70% on adversarial attack tests.
arXiv Detail & Related papers (2023-12-30T21:48:20Z)
- On the Calibration of Large Language Models and Alignment [63.605099174744865]
Confidence calibration serves as a crucial tool for gauging the reliability of deep models.
We conduct a systematic examination of the calibration of aligned language models throughout the entire construction process.
Our work sheds light on whether popular LLMs are well-calibrated and how the training process influences model calibration.
arXiv Detail & Related papers (2023-11-22T08:57:55Z)
- QualEval: Qualitative Evaluation for Model Improvement [82.73561470966658]
We propose QualEval, which augments quantitative scalar metrics with automated qualitative evaluation as a vehicle for model improvement.
QualEval uses a powerful LLM reasoner and our novel flexible linear programming solver to generate human-readable insights.
We demonstrate that leveraging its insights improves the relative performance of the Llama 2 model by up to 15% points.
arXiv Detail & Related papers (2023-11-06T00:21:44Z)
- Wafer Map Defect Patterns Semi-Supervised Classification Using Latent Vector Representation [8.400553138721044]
The demand for defect detection during integrated circuit fabrication stages is becoming increasingly critical.
Traditional wafer map defect pattern detection methods involve manual inspection using electron microscopes.
We propose a model capable of automatically detecting defects as an alternative to manual operations.
arXiv Detail & Related papers (2023-10-06T08:23:36Z)
- On the Importance of Calibration in Semi-supervised Learning [13.859032326378188]
State-of-the-art (SOTA) semi-supervised learning (SSL) methods have been highly successful in leveraging a mix of labeled and unlabeled data.
We introduce a family of new SSL models that optimize for calibration and demonstrate their effectiveness across standard vision benchmarks.
arXiv Detail & Related papers (2022-10-10T15:41:44Z)
- Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations [97.91630330328815]
We conduct a crowdsourcing study in which participants interact with deception detection models trained to distinguish between genuine and fake hotel reviews.
We observe that, for a linear bag-of-words model, participants with access to the feature coefficients during training are able to cause a larger reduction in model confidence in the testing phase than the no-explanation control.
arXiv Detail & Related papers (2021-12-17T18:29:56Z)
- Knowledge Distillation as Semiparametric Inference [44.572422527672416]
A popular approach to model compression is to train an inexpensive student model to mimic the class probabilities of a highly accurate but cumbersome teacher model; a minimal sketch of this distillation objective follows this list.
This two-step knowledge distillation process often leads to higher accuracy than training the student directly on labeled data.
We cast knowledge distillation as a semiparametric inference problem with the optimal student model as the target, the unknown Bayes class probabilities as a nuisance, and the teacher probabilities as a plug-in nuisance estimate.
arXiv Detail & Related papers (2021-04-20T03:00:45Z)
- Classification of Spot-welded Joints in Laser Thermography Data using Convolutional Neural Networks [52.661521064098416]
We propose an approach for quality inspection of spot weldings using images from laser thermography data.
We use convolutional neural networks to classify weld quality and compare the performance of different models against each other.
arXiv Detail & Related papers (2020-10-24T20:38:12Z)
- Hybrid Discriminative-Generative Training via Contrastive Learning [96.56164427726203]
We show that, through the perspective of hybrid discriminative-generative training of energy-based models, we can make a direct connection between contrastive learning and supervised learning.
We show that our specific choice of approximation of the energy-based loss outperforms the existing practice in terms of classification accuracy of WideResNet on CIFAR-10 and CIFAR-100.
arXiv Detail & Related papers (2020-07-17T15:50:34Z)
- Circumventing Outliers of AutoAugment with Knowledge Distillation [102.25991455094832]
AutoAugment has been a powerful algorithm that improves the accuracy of many vision tasks.
This paper delves into its working mechanism and reveals that AutoAugment may remove part of the discriminative information from the training image.
To relieve the inaccuracy of supervision, we make use of knowledge distillation, which refers to the output of a teacher model to guide network training.
arXiv Detail & Related papers (2020-03-25T11:51:41Z)
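
As referenced in the "Knowledge Distillation as Semiparametric Inference" entry above, here is a minimal sketch of the standard two-step distillation objective: the student is trained to match the teacher's softened class probabilities in addition to the hard labels. The temperature `T`, weighting `alpha`, and temperature-squared scaling follow common practice and are assumptions, not that paper's estimator.

```python
# Standard distillation loss sketch: hard-label cross-entropy blended with a
# soft-target KL term against the teacher's tempered probabilities.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend hard-label cross-entropy with a soft-target KL term (illustrative)."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # common temperature-squared scaling of the soft term
    return alpha * hard + (1 - alpha) * soft

# Usage with random tensors standing in for a batch of logits and labels.
s = torch.randn(8, 2)
t = torch.randn(8, 2)
y = torch.randint(0, 2, (8,))
print(distillation_loss(s, t, y))
```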
This list is automatically generated from the titles and abstracts of the papers on this site.