Active Learning and Approximate Model Calibration for Automated Visual
Inspection in Manufacturing
- URL: http://arxiv.org/abs/2209.05486v1
- Date: Mon, 12 Sep 2022 15:00:29 GMT
- Title: Active Learning and Approximate Model Calibration for Automated Visual
Inspection in Manufacturing
- Authors: Jože M. Rožanec, Luka Bizjak, Elena Trajkova, Patrik Zajec,
Jelle Keizer, Blaž Fortuna, Dunja Mladenić
- Abstract summary: This research compares three active learning approaches (with single and multiple oracles) to visual inspection.
We propose a novel approach to probability calibration of classification models and two new metrics to assess the performance of the calibration without the need for ground truth.
- Score: 0.415623340386296
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quality control is a crucial activity performed by manufacturing enterprises
to ensure that their products meet quality standards and avoid potential damage
to the brand's reputation. The decreased cost of sensors and connectivity has
enabled the increasing digitalization of manufacturing. In addition, artificial
intelligence enables higher degrees of automation, reducing overall costs and
time required for defect inspection. This research compares three active
learning approaches (with single and multiple oracles) to visual inspection. We
propose a novel approach to probability calibration of classification models
and two new metrics to assess the performance of the calibration without the
need for ground truth. We performed experiments on real-world data provided by
Philips Consumer Lifestyle BV. Our results show that explored active learning
settings can reduce the data labeling effort by between three and four percent
without detriment to the overall quality goals, considering a threshold of
p=0.95. Furthermore, we show that the proposed metrics successfully capture
relevant information that existing metrics can obtain only through ground
truth data. Therefore, the proposed metrics can be used to
estimate the quality of models' probability calibration without committing to a
labeling effort to obtain ground truth data.
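For context, the standard way to measure calibration quality when ground truth *is* available is the expected calibration error (ECE). Below is a minimal sketch of binned ECE; note this is the conventional label-dependent metric, not the ground-truth-free metrics proposed in the paper:

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=10):
    """Binned ECE: weighted average gap between mean confidence and accuracy."""
    confidences = np.asarray(confidences, dtype=float)
    correct = (np.asarray(predictions) == np.asarray(labels)).astype(float)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            avg_conf = confidences[in_bin].mean()
            avg_acc = correct[in_bin].mean()
            ece += in_bin.mean() * abs(avg_conf - avg_acc)
    return ece

# Toy case: two high-confidence predictions, both correct.
print(expected_calibration_error([0.9, 0.95], [1, 0], [1, 0]))  # ≈ 0.075
```

A well-calibrated model drives this gap toward zero; the appeal of ground-truth-free metrics is that they estimate this kind of miscalibration before any labeling budget is spent.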
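The active learning loop compared in the paper can be illustrated with one common query strategy, least-confidence sampling: the model scores the unlabeled pool, and only the samples it is least sure about are sent to an oracle for labeling. This is a generic sketch with placeholder data, not necessarily the exact strategy evaluated in the paper:

```python
import numpy as np

def least_confidence_query(probabilities, budget):
    """Select the `budget` pool samples whose top-class probability is lowest,
    i.e. where the model is least confident and an oracle label helps most."""
    probs = np.asarray(probabilities, dtype=float)
    top_class_conf = probs.max(axis=1)        # confidence = max class probability
    return np.argsort(top_class_conf)[:budget]

# Toy pool of predicted class probabilities for 4 unlabeled images.
pool = [[0.55, 0.45],   # very uncertain
        [0.99, 0.01],   # confident
        [0.60, 0.40],   # uncertain
        [0.95, 0.05]]   # confident
print(least_confidence_query(pool, budget=2))  # indices 0 and 2
```

Labeling only the queried samples, retraining, and repeating is what yields the labeling-effort reduction the abstracts above report.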
Related papers
- Learning to Solve and Verify: A Self-Play Framework for Code and Test Generation [69.62857948698436]
Recent advances in large language models (LLMs) have improved their performance on coding benchmarks.
However, improvement is plateauing due to the exhaustion of readily available high-quality data.
We propose Sol-Ver, a self-play solver-verifier framework that jointly improves a single model's code and test generation capacity.
arXiv Detail & Related papers (2025-02-20T18:32:19Z)
- What Really Matters for Learning-based LiDAR-Camera Calibration [50.2608502974106]
This paper revisits the development of learning-based LiDAR-Camera calibration.
We identify the critical limitations of regression-based methods with the widely used data generation pipeline.
We also investigate how the input data format and preprocessing operations impact network performance.
arXiv Detail & Related papers (2025-01-28T14:12:32Z)
- Balancing Label Quantity and Quality for Scalable Elicitation [2.2143065226946423]
We study the microeconomics of the quantity-quality tradeoff on binary NLP classification tasks.
We observe three regimes of eliciting classification knowledge from pretrained models using supervised finetuning.
We find that the accuracy of supervised fine-tuning can be improved by up to 5 percentage points at a fixed labeling budget.
arXiv Detail & Related papers (2024-10-17T04:39:58Z)
- Fill In The Gaps: Model Calibration and Generalization with Synthetic Data [2.89287673224661]
We propose a calibration method that incorporates synthetic data without compromising accuracy.
We derive the expected calibration error (ECE) bound using the Probably Approximately Correct (PAC) learning framework.
We observed up to a 34% average increase in accuracy and a 33% decrease in ECE.
arXiv Detail & Related papers (2024-10-07T23:06:42Z)
- QualEval: Qualitative Evaluation for Model Improvement [82.73561470966658]
We propose QualEval, which augments quantitative scalar metrics with automated qualitative evaluation as a vehicle for model improvement.
QualEval uses a powerful LLM reasoner and our novel flexible linear programming solver to generate human-readable insights.
We demonstrate that leveraging its insights improves the absolute performance of the Llama 2 model by up to 15 percentage points.
arXiv Detail & Related papers (2023-11-06T00:21:44Z)
- Learning for Transductive Threshold Calibration in Open-World Recognition [83.35320675679122]
We introduce OpenGCN, a Graph Neural Network-based transductive threshold calibration method with enhanced robustness and adaptability.
Experiments across open-world visual recognition benchmarks validate OpenGCN's superiority over existing posthoc calibration methods for open-world threshold calibration.
arXiv Detail & Related papers (2023-05-19T23:52:48Z)
- Synthetic Data Augmentation Using GAN For Improved Automated Visual Inspection [0.440401067183266]
State-of-the-art unsupervised defect detection does not match the performance of supervised models.
The best classification performance was achieved with GAN-based data generation, with AUC ROC scores of 0.9898 or higher.
arXiv Detail & Related papers (2022-12-19T09:31:15Z)
- ZeroGen+: Self-Guided High-Quality Data Generation in Efficient Zero-Shot Learning [97.2907428983142]
ZeroGen attempts to purely use PLM to generate data and train a tiny model without relying on task-specific annotation.
We propose a noise-robust bi-level re-weighting framework which is able to learn the per-sample weights measuring the data quality without requiring any gold data.
arXiv Detail & Related papers (2022-05-25T11:38:48Z)
- Streaming Machine Learning and Online Active Learning for Automated Visual Inspection [0.6299766708197884]
We compare five streaming machine learning algorithms applied to visual defect inspection with real-world data provided by Philips Consumer Lifestyle BV.
Our results show that active learning reduces the data labeling effort by almost 15% on average for the worst case.
The use of machine learning models for automated visual inspection is expected to speed up quality inspection by up to 40%.
arXiv Detail & Related papers (2021-10-15T09:39:04Z)
- Active Learning for Automated Visual Inspection of Manufactured Products [0.6299766708197884]
We compare three active learning approaches and five machine learning algorithms applied to visual defect inspection with real-world data.
Our results show that active learning reduces the data labeling effort without detriment to the models' performance.
arXiv Detail & Related papers (2021-09-06T13:44:25Z)
- Towards Reducing Labeling Cost in Deep Object Detection [61.010693873330446]
We propose a unified framework for active learning that considers both the uncertainty and the robustness of the detector.
Our method is able to pseudo-label the very confident predictions, suppressing a potential distribution drift.
arXiv Detail & Related papers (2021-06-22T16:53:09Z)
- ALT-MAS: A Data-Efficient Framework for Active Testing of Machine Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model-under-test using a Bayesian neural network (BNN).
arXiv Detail & Related papers (2021-04-11T12:14:04Z)
- How Faithful is your Synthetic Data? Sample-level Metrics for Evaluating and Auditing Generative Models [95.8037674226622]
We introduce a 3-dimensional evaluation metric that characterizes the fidelity, diversity and generalization performance of any generative model in a domain-agnostic fashion.
Our metric unifies statistical divergence measures with precision-recall analysis, enabling sample- and distribution-level diagnoses of model fidelity and diversity.
arXiv Detail & Related papers (2021-02-17T18:25:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.