An Uncertainty-Aware Deep Learning Framework for Defect Detection in
Casting Products
- URL: http://arxiv.org/abs/2107.11643v1
- Date: Sat, 24 Jul 2021 16:17:20 GMT
- Title: An Uncertainty-Aware Deep Learning Framework for Defect Detection in
Casting Products
- Authors: Maryam Habibpour, Hassan Gharoun, AmirReza Tajally, Afshar Shamsi,
Hamzeh Asgharnezhad, Abbas Khosravi, and Saeid Nahavandi
- Abstract summary: Defects are unavoidable in casting production owing to the complexity of the casting process.
CNNs have been widely applied in both image classification and defect detection tasks.
CNNs trained with frequentist inference require massive amounts of data and still fall short of providing useful estimates of their predictive uncertainty.
- Score: 11.792984988875157
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Defects are unavoidable in casting production owing to the complexity of the
casting process. Conventional human visual inspection of casting products is slow and unproductive in mass production, whereas automatic and reliable defect detection not only enhances the quality control process but also improves productivity. However, casting defect detection remains a challenging task due to the
diversity and variation in defects' appearance. Convolutional neural networks
(CNNs) have been widely applied in both image classification and defect
detection tasks. Nevertheless, CNNs trained with frequentist inference require massive amounts of data and still fall short of providing useful estimates of their predictive uncertainty. Accordingly, leveraging the transfer
learning paradigm, we first apply four powerful CNN-based models (VGG16,
ResNet50, DenseNet121, and InceptionResNetV2) on a small dataset to extract
meaningful features. Extracted features are then processed by various machine
learning algorithms to perform the classification task. Simulation results
demonstrate that linear support vector machine (SVM) and multi-layer perceptron
(MLP) achieve the best performance in defect detection on casting images.
Second, to achieve reliable classification and to measure epistemic
uncertainty, we employ an uncertainty quantification (UQ) technique (ensemble
of MLP models) using features extracted from four pre-trained CNNs. UQ
confusion matrix and uncertainty accuracy metric are also utilized to evaluate
the predictive uncertainty estimates. Comprehensive comparisons reveal that the UQ method based on VGG16 features outperforms the others in capturing uncertainty. We believe an uncertainty-aware automatic defect detection solution will strengthen quality assurance in casting production.
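The abstract spells out a two-stage recipe: frozen pre-trained CNN features feeding a classical classifier, then an ensemble of MLPs whose predictive entropy is scored with a UQ confusion matrix and an uncertainty accuracy metric. The code below is a minimal sketch of that recipe, not the authors' released implementation: the dataset paths, image size, ensemble size, and entropy threshold are illustrative assumptions, and the uncertainty accuracy follows the common correct/certain cross-tabulation rather than necessarily the paper's exact definition.
```python
# Sketch: frozen ImageNet VGG16 extracts features from casting images, a linear
# SVM classifies them, and an ensemble of MLPs provides an epistemic-uncertainty
# estimate. Paths, resolution, and hyperparameters are illustrative assumptions.
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from sklearn.svm import LinearSVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

IMG_SIZE = (224, 224)  # assumed input resolution for VGG16

def extract_features(directory, batch_size=32):
    """Run a frozen, global-average-pooled VGG16 over a folder-per-class dataset."""
    extractor = VGG16(weights="imagenet", include_top=False, pooling="avg")
    gen = ImageDataGenerator(preprocessing_function=preprocess_input).flow_from_directory(
        directory, target_size=IMG_SIZE, batch_size=batch_size,
        class_mode="binary", shuffle=False)
    return extractor.predict(gen), gen.classes

X_train, y_train = extract_features("casting_data/train")  # hypothetical paths
X_test, y_test = extract_features("casting_data/test")

# Stage 1: classical classifier on the extracted features.
svm = LinearSVC(C=1.0).fit(X_train, y_train)
print("SVM accuracy:", accuracy_score(y_test, svm.predict(X_test)))

# Stage 2: ensemble of MLPs; epistemic uncertainty is approximated by the
# predictive entropy of the averaged class probabilities.
ensemble = [MLPClassifier(hidden_layer_sizes=(128,), max_iter=500,
                          random_state=seed).fit(X_train, y_train)
            for seed in range(5)]
probs = np.mean([m.predict_proba(X_test) for m in ensemble], axis=0)
preds = probs.argmax(axis=1)
entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
uncertain = entropy > 0.5 * np.log(2)  # assumed threshold: half of max entropy

# UQ "confusion matrix": correctness crossed with certainty.
correct = preds == y_test
tc = np.sum(correct & ~uncertain)   # correct and certain
tu = np.sum(~correct & uncertain)   # incorrect and uncertain
print("Uncertainty accuracy:", (tc + tu) / len(y_test))
```
Swapping VGG16 for ResNet50, DenseNet121, or InceptionResNetV2 only changes the extractor (and its matching preprocess_input); the classifier and ensemble stages stay the same.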
Related papers
- Error-Driven Uncertainty Aware Training [7.702016079410588]
Error-Driven Uncertainty Aware Training aims to enhance the ability of neural classifiers to estimate their uncertainty correctly.
The EUAT approach operates during the model's training phase by selectively employing two loss functions depending on whether the training examples are correctly or incorrectly predicted.
We evaluate EUAT using diverse neural models and datasets in the image recognition domain, considering both non-adversarial and adversarial settings.
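The summary above only says that two losses are applied selectively; the sketch below illustrates that selection mechanism with one plausible pair of terms (cross-entropy plus an entropy penalty or reward), not the EUAT paper's actual objectives.
```python
# Generic illustration of an error-driven selective loss (PyTorch): examples the
# model currently classifies correctly are pushed toward low predictive entropy,
# misclassified examples toward high entropy. The specific terms are assumptions.
import torch
import torch.nn.functional as F

def error_driven_loss(logits, targets, entropy_weight=0.1):
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=1)
    ce = F.cross_entropy(logits, targets, reduction="none")
    correct = logits.argmax(dim=1) == targets
    # Correct predictions: penalize residual uncertainty.
    # Incorrect predictions: reward admitting uncertainty.
    loss = torch.where(correct, ce + entropy_weight * entropy,
                                ce - entropy_weight * entropy)
    return loss.mean()
```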
arXiv Detail & Related papers (2024-05-02T11:48:14Z)
- Cal-DETR: Calibrated Detection Transformer [67.75361289429013]
We propose a mechanism for calibrated detection transformers (Cal-DETR), particularly for Deformable-DETR, UP-DETR and DINO.
We develop an uncertainty-guided logit modulation mechanism that leverages the uncertainty to modulate the class logits.
Results corroborate the effectiveness of Cal-DETR against the competing train-time methods in calibrating both in-domain and out-domain detections.
arXiv Detail & Related papers (2023-11-06T22:13:10Z)
- CINFormer: Transformer network with multi-stage CNN feature injection for surface defect segmentation [73.02218479926469]
We propose a transformer network with multi-stage CNN feature injection for surface defect segmentation.
CINFormer presents a simple yet effective feature integration mechanism that injects the multi-level CNN features of the input image into different stages of the transformer network in the encoder.
In addition, CINFormer presents a Top-K self-attention module to focus on tokens with more important information about the defects.
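As a rough illustration of what a Top-K self-attention module does (keep only the k strongest attention scores per query and suppress the rest), here is a generic single-head sketch; it is not CINFormer's exact formulation, and the projection matrices and value of k are placeholders.
```python
# Generic single-head top-k self-attention (PyTorch): only the k largest
# attention scores per query survive the softmax, so each token attends to a
# sparse set of "important" tokens.
import math
import torch

def topk_self_attention(x, w_q, w_k, w_v, k=8):
    # x: (batch, tokens, dim); w_q/w_k/w_v: (dim, dim) projection matrices
    q, key, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ key.transpose(-2, -1) / math.sqrt(x.size(-1))
    topk_vals, _ = scores.topk(k, dim=-1)
    threshold = topk_vals[..., -1:].expand_as(scores)
    scores = scores.masked_fill(scores < threshold, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v
```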
arXiv Detail & Related papers (2023-09-22T06:12:02Z)
- ScatterUQ: Interactive Uncertainty Visualizations for Multiclass Deep Learning Problems [0.0]
ScatterUQ is an interactive system that provides targeted visualizations to allow users to better understand model performance in context-driven uncertainty settings.
We demonstrate the effectiveness of ScatterUQ in explaining model uncertainty for multiclass image classification with a distance-aware neural network trained on Fashion-MNIST.
Our results indicate that the ScatterUQ system should scale to arbitrary, multiclass datasets.
arXiv Detail & Related papers (2023-08-08T21:17:03Z)
- Bridging Precision and Confidence: A Train-Time Loss for Calibrating Object Detection [58.789823426981044]
We propose a novel auxiliary loss formulation that aims to align the class confidence of bounding boxes with the accuracy of predictions.
Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error for both in and out-domain scenarios.
arXiv Detail & Related papers (2023-03-25T08:56:21Z)
- Uncertainty-Aware AB3DMOT by Variational 3D Object Detection [74.8441634948334]
Uncertainty estimation is an effective tool to provide statistically accurate predictions.
In this paper, we propose a Variational Neural Network-based TANet 3D object detector to generate 3D object detections with uncertainty.
arXiv Detail & Related papers (2023-02-12T14:30:03Z)
- Taguchi based Design of Sequential Convolution Neural Network for Classification of Defective Fasteners [0.08795040582681389]
This study uses Taguchi-based design of experiments and analysis to develop a robust automatic system.
The proposed sequential CNN achieves 96.3% validation accuracy and a 0.277 validation loss at a learning rate of 0.001.
arXiv Detail & Related papers (2022-07-22T10:26:07Z)
- SLA$^2$P: Self-supervised Anomaly Detection with Adversarial Perturbation [77.71161225100927]
Anomaly detection is a fundamental yet challenging problem in machine learning.
We propose a novel and powerful framework, dubbed SLA$^2$P, for unsupervised anomaly detection.
arXiv Detail & Related papers (2021-11-25T03:53:43Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
However, they are often overconfident, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
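How those overconfident regions of feature space are found is the paper's contribution and is not reproduced here; the sketch below only illustrates the "raise entropy toward the label prior" part as a KL penalty on flagged examples, with all names and shapes being hypothetical.
```python
# Illustrative PyTorch regularizer: on examples flagged as unjustifiably
# overconfident, pull the predictive distribution toward the prior label
# distribution, which raises its entropy toward that of the prior.
import torch
import torch.nn.functional as F

def entropy_raising_penalty(logits, label_prior, flagged):
    # logits: (batch, classes); label_prior: (classes,) probabilities;
    # flagged: (batch,) boolean mask of overconfident examples
    log_probs = F.log_softmax(logits, dim=1)
    probs = log_probs.exp()
    # KL(model || prior), summed over classes for each example
    kl = (probs * (log_probs - label_prior.clamp_min(1e-12).log())).sum(dim=1)
    return (kl * flagged.float()).mean()
```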
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
- A Simple Framework to Quantify Different Types of Uncertainty in Deep Neural Networks for Image Classification [0.0]
Quantifying uncertainty in a model's predictions is important as it enables the safety of an AI system to be increased.
This is crucial for applications where the cost of an error is high, such as in autonomous vehicle control, medical image analysis, financial estimations or legal fields.
We propose a complete framework to capture and quantify three known types of uncertainty in Deep Neural Networks for the task of image classification.
arXiv Detail & Related papers (2020-11-17T15:36:42Z)
- Revisiting One-vs-All Classifiers for Predictive Uncertainty and Out-of-Distribution Detection in Neural Networks [22.34227625637843]
We investigate how the parametrization of the probabilities in discriminative classifiers affects the uncertainty estimates.
We show that one-vs-all formulations can improve calibration on image classification tasks.
arXiv Detail & Related papers (2020-07-10T01:55:02Z)
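A one-vs-all parametrization is easy to state concretely: each class gets an independent sigmoid "this class vs. the rest" probability trained with binary cross-entropy instead of a single softmax cross-entropy. The sketch below shows only that output head; everything else about the cited models is assumed unchanged.
```python
# Minimal sketch of a one-vs-all output head (PyTorch).
import torch
import torch.nn.functional as F

def one_vs_all_loss(logits, targets, num_classes):
    # logits: (batch, classes) raw scores; targets: (batch,) integer labels
    one_hot = F.one_hot(targets, num_classes).float()
    return F.binary_cross_entropy_with_logits(logits, one_hot)

def one_vs_all_probs(logits):
    # Per-class membership probabilities; they need not sum to one, and a low
    # maximum can be read as "none of the above", useful for OOD detection.
    return torch.sigmoid(logits)
```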