Optimizing YOLOv7 for Semiconductor Defect Detection
- URL: http://arxiv.org/abs/2302.09565v1
- Date: Sun, 19 Feb 2023 12:51:07 GMT
- Title: Optimizing YOLOv7 for Semiconductor Defect Detection
- Authors: Enrique Dehaerne, Bappaditya Dey, Sandip Halder, Stefan De Gendt
- Abstract summary: YOLOv7 is a state-of-the-art object detector based on the YOLO family of models which have become popular for industrial applications.
This research investigates which hyperparameters and ensembling strategies improve performance in terms of detection precision for semiconductor line space pattern defects.
- Score: 0.33598755777055367
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The field of object detection using Deep Learning (DL) is constantly evolving
with many new techniques and models being proposed. YOLOv7 is a
state-of-the-art object detector based on the YOLO family of models which have
become popular for industrial applications. One such possible application
domain can be semiconductor defect inspection. The performance of any machine
learning model depends on its hyperparameters. Furthermore, combining
predictions of one or more models in different ways can also affect
performance. In this research, we experiment with YOLOv7, a recently proposed,
state-of-the-art object detector, by training and evaluating models with
different hyperparameters to investigate which ones improve performance in
terms of detection precision for semiconductor line space pattern defects. The
base YOLOv7 model with default hyperparameters and Non Maximum Suppression
(NMS) prediction combining outperforms all RetinaNet models from previous work
in terms of mean Average Precision (mAP). We find that vertically flipping
images randomly during training yields a 3% improvement in the mean AP of all
defect classes. Other hyperparameter values improved AP only for certain
classes compared to the default model. Combining models that achieve the best
AP for different defect classes was found to be an effective ensembling
strategy. Combining predictions from ensembles using Weighted Box Fusion (WBF)
prediction gave the best performance. The best ensemble with WBF improved on
the mAP of the default model by 10%.
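The abstract's headline result is that combining ensemble predictions with Weighted Box Fusion (WBF) outperforms plain NMS. Below is a minimal single-class sketch of the WBF idea: cluster overlapping boxes by IoU, then average the coordinates of each cluster weighted by confidence. The (x1, y1, x2, y2) box format, the 0.55 threshold, and the function names are illustrative assumptions, not the paper's implementation:

```python
def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def weighted_box_fusion(boxes, scores, iou_thr=0.55):
    """Fuse overlapping boxes by confidence-weighted averaging (single class).

    Unlike NMS, which keeps only the highest-scoring box in each cluster and
    discards the rest, WBF averages all clustered boxes, weighted by score.
    """
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    clusters = []  # each cluster is a list of (box, score) pairs
    for i in order:
        for cluster in clusters:
            # Compare against the cluster's highest-scoring (first) box.
            if iou(cluster[0][0], boxes[i]) >= iou_thr:
                cluster.append((boxes[i], scores[i]))
                break
        else:
            clusters.append([(boxes[i], scores[i])])
    fused = []
    for cluster in clusters:
        total = sum(s for _, s in cluster)
        fused_box = [sum(b[k] * s for b, s in cluster) / total for k in range(4)]
        fused_score = total / len(cluster)  # mean confidence of the cluster
        fused.append((fused_box, fused_score))
    return fused
```

In an ensembling setting, `boxes` and `scores` would be the pooled predictions of several models on one image; averaging rather than suppressing is what lets WBF exploit agreement between ensemble members.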
Related papers
- Model Inversion Attacks Through Target-Specific Conditional Diffusion Models [54.69008212790426]
Model inversion attacks (MIAs) aim to reconstruct private images from a target classifier's training set, thereby raising privacy concerns in AI applications.
Previous GAN-based MIAs tend to suffer from inferior generative fidelity due to GAN's inherent flaws and biased optimization within latent space.
We propose Diffusion-based Model Inversion (Diff-MI) attacks to alleviate these issues.
arXiv Detail & Related papers (2024-07-16T06:38:49Z)
- Quantizing YOLOv7: A Comprehensive Study [0.0]
This paper studies the effectiveness of a variety of quantization schemes on the pre-trained weights of the state-of-the-art YOLOv7 model.
Results show that using 4-bit quantization coupled with the combination of different granularities results in 3.92x and 3.86x memory-saving for uniform and non-uniform quantization.
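The quantization study above applies 4-bit schemes to pre-trained weights. A generic sketch of symmetric uniform quantization to `bits` bits follows; the clipping and rounding choices are common defaults, not necessarily the paper's exact scheme:

```python
def quantize_uniform(weights, bits=4):
    """Symmetric uniform quantization: map floats to signed integer codes in
    [-(2**(bits-1) - 1), 2**(bits-1) - 1], returning (codes, scale)."""
    qmax = 2 ** (bits - 1) - 1          # e.g. 7 for 4-bit
    scale = max(abs(w) for w in weights) / qmax
    codes = [round(w / scale) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float weights from the integer codes."""
    return [c * scale for c in codes]
```

Storing 4-bit codes in place of 32-bit floats gives an 8x ceiling on raw weight compression; the paper's reported 3.92x and 3.86x figures are its measured whole-model savings.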
arXiv Detail & Related papers (2024-07-06T03:23:04Z)
- Selective Mixup Fine-Tuning for Optimizing Non-Decomposable Objectives [17.10165955576643]
Current state-of-the-art empirical techniques offer sub-optimal performance on practical, non-decomposable performance objectives.
We propose SelMix, a selective mixup-based inexpensive fine-tuning technique for pre-trained models.
We find that proposed SelMix fine-tuning significantly improves the performance for various practical non-decomposable objectives across benchmarks.
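SelMix builds on mixup, which convex-combines pairs of samples and their labels. A sketch of the vanilla mixup operation on one pair is below; SelMix's contribution is *selecting* which class pairs to mix for a target metric, which this illustration omits:

```python
import random

def mixup_pair(x1, y1, x2, y2, alpha=0.2, rng=random):
    """Vanilla mixup: convex-combine two feature vectors and their one-hot
    labels with a Beta(alpha, alpha)-distributed coefficient."""
    lam = rng.betavariate(alpha, alpha)  # mixing coefficient in (0, 1)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y, lam
```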
arXiv Detail & Related papers (2024-03-27T06:55:23Z)
- Supervised Contrastive Learning based Dual-Mixer Model for Remaining Useful Life Prediction [3.081898819471624]
The Remaining Useful Life (RUL) prediction aims at providing an accurate estimate of the remaining time from the current predicting moment to the complete failure of the device.
To overcome the shortcomings of rigid combination for temporal and spatial features in most existing RUL prediction approaches, a spatial-temporal homogeneous feature extractor, named Dual-Mixer model, is proposed.
The effectiveness of the proposed method is validated through comparisons with other latest research works on the C-MAPSS dataset.
arXiv Detail & Related papers (2024-01-29T14:38:44Z)
- Anomaly Detection via Multi-Scale Contrasted Memory [3.0170109896527086]
We introduce a new two-stage anomaly detector which memorizes during training multi-scale normal prototypes to compute an anomaly deviation score.
Our model highly improves the state-of-the-art performance on a wide range of object, style and local anomalies with up to 35% error relative improvement on CIFAR-10.
arXiv Detail & Related papers (2022-11-16T16:58:04Z)
- Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time [69.7693300927423]
We show that averaging the weights of multiple models fine-tuned with different hyperparameter configurations improves accuracy and robustness.
We show that the model soup approach extends to multiple image classification and natural language processing tasks.
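The "soup" operation is just a per-parameter average over fine-tuned checkpoints. A minimal sketch, with state dicts represented as plain dicts of float lists rather than real tensors, assuming all models share one architecture:

```python
def model_soup(state_dicts):
    """Uniform model soup: average each parameter across fine-tuned models.
    All state dicts must have identical keys and parameter shapes."""
    n = len(state_dicts)
    return {
        key: [sum(sd[key][i] for sd in state_dicts) / n
              for i in range(len(state_dicts[0][key]))]
        for key in state_dicts[0]
    }
```

Because averaging happens once, offline, the resulting single model has the same inference cost as any one ingredient, unlike a prediction-level ensemble.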
arXiv Detail & Related papers (2022-03-10T17:03:49Z)
- MEMO: Test Time Robustness via Adaptation and Augmentation [131.28104376280197]
We study the problem of test time robustification, i.e., using the test input to improve model robustness.
Recent prior works have proposed methods for test time adaptation, however, they each introduce additional assumptions.
We propose a simple approach that can be used in any test setting where the model is probabilistic and adaptable.
arXiv Detail & Related papers (2021-10-18T17:55:11Z)
- Mismatched No More: Joint Model-Policy Optimization for Model-Based RL [172.37829823752364]
We propose a single objective for jointly training the model and the policy, such that updates to either component increases a lower bound on expected return.
Our objective is a global lower bound on expected return, and this bound becomes tight under certain assumptions.
The resulting algorithm (MnM) is conceptually similar to a GAN.
arXiv Detail & Related papers (2021-10-06T13:43:27Z)
- Cauchy-Schwarz Regularized Autoencoder [68.80569889599434]
Variational autoencoders (VAE) are a powerful and widely-used class of generative models.
We introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for GMMs.
Our objective improves upon variational auto-encoding models in density estimation, unsupervised clustering, semi-supervised learning, and face analysis.
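For reference, the Cauchy-Schwarz divergence between densities has the standard closed form below (the textbook definition, not a formula taken from the paper); for Gaussian mixtures each integral reduces to a sum of Gaussian overlap integrals, which is why it is analytic for GMMs:

```latex
D_{CS}(p, q) = -\log \frac{\int p(x)\, q(x)\, dx}
  {\sqrt{\int p(x)^2\, dx \int q(x)^2\, dx}}
```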
arXiv Detail & Related papers (2021-01-06T17:36:26Z)
- Advanced Dropout: A Model-free Methodology for Bayesian Dropout Optimization [62.8384110757689]
Overfitting is ubiquitous in real-world applications of deep neural networks (DNNs).
The advanced dropout technique applies a model-free and easily implemented distribution with parametric prior, and adaptively adjusts dropout rate.
We evaluate the effectiveness of the advanced dropout against nine dropout techniques on seven computer vision datasets.
arXiv Detail & Related papers (2020-10-11T13:19:58Z)
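For contrast with the learned-rate scheme above, standard inverted dropout with a fixed rate looks like the sketch below; the advanced-dropout paper's contribution is adaptively adjusting `p_drop` via a parametric prior, which is omitted here:

```python
import random

def inverted_dropout(activations, p_drop, rng=random):
    """Inverted dropout: zero each unit with probability p_drop and rescale
    survivors by 1/(1 - p_drop) so the expected activation is unchanged."""
    keep = 1.0 - p_drop
    return [a / keep if rng.random() < keep else 0.0 for a in activations]
```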
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.