Improving generalization with synthetic training data for deep learning
based quality inspection
- URL: http://arxiv.org/abs/2202.12818v1
- Date: Fri, 25 Feb 2022 16:51:01 GMT
- Title: Improving generalization with synthetic training data for deep learning
based quality inspection
- Authors: Antoine Cordier, Pierre Gutierrez, and Victoire Plessis
- Abstract summary: Supervised deep learning requires a large number of annotated images for training.
In practice, collecting and annotating such data is costly and laborious.
We show that randomly generated synthetic training images can help tackle domain instability.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automating quality inspection with computer vision techniques is often a very
data-demanding task. Specifically, supervised deep learning requires a large
number of annotated images for training. In practice, collecting and annotating
such data is not only costly and laborious, but also inefficient, given that
only a few instances may be available for certain defect classes. While
working with video frames can increase the number of these instances, it has a
major disadvantage: the resulting images are highly correlated with one
another. As a consequence, models trained under such constraints are expected
to be very sensitive to input distribution changes, which may be caused in
practice by changes in the acquisition system (cameras, lights), in the parts,
or in the appearance of the defects. In this work, we demonstrate that
randomly generated synthetic training images can help tackle domain instability
issues, making the trained models more robust to contextual changes. We detail
both our synthetic data generation pipeline and our deep learning methodology
for addressing these issues.
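The abstract does not include code; as a minimal sketch of the idea it describes, the snippet below mixes a few real frames with synthetic renders whose acquisition conditions (gain, lighting, noise) are randomly perturbed. All function names, perturbation ranges, and ratios here are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def randomize_domain(image, rng):
    """Apply random photometric perturbations that mimic acquisition
    changes (camera gain, lighting, sensor noise). The ranges are
    illustrative, not taken from the paper."""
    gain = rng.uniform(0.7, 1.3)                 # camera sensitivity shift
    offset = rng.uniform(-0.1, 0.1)              # global lighting shift
    noise = rng.normal(0.0, 0.02, image.shape)   # sensor noise
    return np.clip(gain * image + offset + noise, 0.0, 1.0)

def mix_training_set(real_images, synthetic_images, synth_ratio, rng):
    """Combine real frames with randomly perturbed synthetic renders;
    synth_ratio sets how many synthetic samples are added per real one."""
    n_synth = int(round(synth_ratio * len(real_images)))
    picks = rng.choice(len(synthetic_images), size=n_synth, replace=True)
    synth = [randomize_domain(synthetic_images[i], rng) for i in picks]
    batch = np.stack(list(real_images) + synth)
    rng.shuffle(batch, axis=0)                   # interleave real and synthetic
    return batch

rng = np.random.default_rng(0)
real = [np.full((8, 8), 0.5) for _ in range(6)]       # stand-ins for video frames
synthetic = [np.full((8, 8), 0.2) for _ in range(4)]  # stand-ins for rendered parts
train = mix_training_set(real, synthetic, synth_ratio=0.5, rng=rng)
print(train.shape)  # (9, 8, 8): 6 real + 3 randomized synthetic
```

The photometric randomization is what decorrelates the synthetic samples from any one acquisition setup, which is the mechanism the paper credits for improved robustness to contextual changes.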
Related papers
- EfficientTrain++: Generalized Curriculum Learning for Efficient Visual Backbone Training [79.96741042766524]
We reformulate the training curriculum as a soft-selection function.
We show that gradually exposing the contents of natural images can be readily achieved by adjusting the intensity of data augmentation.
The resulting method, EfficientTrain++, is simple, general, yet surprisingly effective.
arXiv Detail & Related papers (2024-05-14T17:00:43Z)
- One-Shot Image Restoration [0.0]
Experimental results demonstrate the applicability, robustness and computational efficiency of the proposed approach for supervised image deblurring and super-resolution.
Our results showcase significant improvement of learning models' sample efficiency, generalization and time complexity.
arXiv Detail & Related papers (2024-04-26T14:03:23Z)
- Premonition: Using Generative Models to Preempt Future Data Changes in Continual Learning [63.850451635362425]
Continual learning requires a model to adapt to ongoing changes in the data distribution.
We show that the combination of a large language model and an image generation model can similarly provide useful premonitions.
We find that the backbone of our pre-trained networks can learn representations useful for the downstream continual learning problem.
arXiv Detail & Related papers (2024-03-12T06:29:54Z)
- Scaling Laws of Synthetic Images for Model Training ... for Now [54.43596959598466]
We study the scaling laws of synthetic images generated by state-of-the-art text-to-image models.
We observe that synthetic images demonstrate a scaling trend similar to, but slightly less effective than, real images in CLIP training.
arXiv Detail & Related papers (2023-12-07T18:59:59Z)
- Deep Learning of Crystalline Defects from TEM images: A Solution for the Problem of "Never Enough Training Data" [0.0]
In-situ TEM experiments can provide important insights into how dislocations behave and move.
The analysis of individual video frames can provide useful insights but is limited by the capabilities of automated identification.
In this work, a parametric model for generating synthetic training data for segmentation of dislocations is developed.
arXiv Detail & Related papers (2023-07-12T17:37:46Z)
- Evaluating Data Attribution for Text-to-Image Models [62.844382063780365]
We evaluate attribution through "customization" methods, which tune an existing large-scale model toward a given exemplar object or style.
Our key insight is that this allows us to efficiently create synthetic images that are computationally influenced by the exemplar by construction.
By taking into account the inherent uncertainty of the problem, we can assign soft attribution scores over a set of training images.
arXiv Detail & Related papers (2023-06-15T17:59:51Z)
- A Novel Strategy for Improving Robustness in Computer Vision Manufacturing Defect Detection [1.3198689566654107]
Visual quality inspection in high performance manufacturing can benefit from automation, due to cost savings and improved rigor.
Deep learning techniques are the current state of the art for generic computer vision tasks like classification and object detection.
Manufacturing data can pose a challenge for deep learning because data is highly repetitive and there are few images of defects or deviations to learn from.
arXiv Detail & Related papers (2023-05-16T12:51:51Z)
- Continual Learning with Transformers for Image Classification [12.028617058465333]
In computer vision, neural network models struggle to continually learn new concepts without forgetting what has been learnt in the past.
We propose Adaptive Distillation of Adapters (ADA), a method designed to perform continual learning.
We empirically demonstrate on different classification tasks that this method maintains a good predictive performance without retraining the model.
arXiv Detail & Related papers (2022-06-28T15:30:10Z)
- Synthetic training data generation for deep learning based quality inspection [0.0]
We present a generic simulation pipeline to render images of defective or healthy (non-defective) parts.
We assess the quality of the generated images by training deep learning networks and by testing them on real data from a manufacturer.
arXiv Detail & Related papers (2021-04-07T08:07:57Z)
- Adversarially-Trained Deep Nets Transfer Better: Illustration on Image Classification [53.735029033681435]
Transfer learning is a powerful methodology for adapting pre-trained deep neural networks on image recognition tasks to new domains.
In this work, we demonstrate that adversarially-trained models transfer better than non-adversarially-trained models.
arXiv Detail & Related papers (2020-07-11T22:48:42Z)
- Laplacian Denoising Autoencoder [114.21219514831343]
We propose to learn data representations with a novel type of denoising autoencoder.
The noisy input data is generated by corrupting latent clean data in the gradient domain.
Experiments on several visual benchmarks demonstrate that better representations can be learned with the proposed approach.
arXiv Detail & Related papers (2020-03-30T16:52:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.