Active learning using weakly supervised signals for quality inspection
- URL: http://arxiv.org/abs/2104.02973v1
- Date: Wed, 7 Apr 2021 07:49:07 GMT
- Authors: Antoine Cordier, Deepan Das, and Pierre Gutierrez
- Abstract summary: We develop a methodology for learning actively, from rapidly mined, weakly annotated data.
We tackle a big machine vision weakness: false positives.
In that regard, we show domain-adversarial training to be an efficient way to address this issue.
- Score: 0.16683739531034203
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Because manufacturing processes evolve fast, and since the visual
aspect of production can vary significantly on a daily basis, the ability to
rapidly update machine vision based inspection systems is paramount.
Unfortunately, supervised learning of convolutional neural networks requires a
significant number of annotated images to learn effectively from new data.
Acknowledging the abundance of continuously generated images coming from the
production line and the cost of their annotation, we demonstrate it is possible
to prioritize and accelerate the annotation process. In this work, we develop a
methodology for learning actively, from rapidly mined, weakly (i.e. partially)
annotated data, enabling a fast, direct feedback from the operators on the
production line and tackling a big machine vision weakness: false positives. We
also consider the problem of covariate shift, which arises inevitably due to
changing conditions during data acquisition. In that regard, we show
domain-adversarial training to be an efficient way to address this issue.
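The abstract's domain-adversarial remedy for covariate shift centers on a gradient reversal layer: the domain classifier is trained normally, but the gradient flowing back into the feature extractor is negated, pushing features toward domain invariance. The following is a minimal numpy sketch of that mechanism only; all shapes, names, and the linear/logistic heads are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def grl_forward(x):
    # Gradient reversal layer: identity in the forward pass.
    return x

def grl_backward(grad, lam=1.0):
    # Negate (and scale) the gradient in the backward pass, so the
    # feature extractor learns to *confuse* the domain classifier.
    return -lam * grad

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))            # a batch of inputs
d = rng.integers(0, 2, size=8)         # domain labels (0 = source, 1 = target)

W_f = rng.normal(size=(4, 3)) * 0.1    # "feature extractor" (linear, for brevity)
w_d = rng.normal(size=3) * 0.1         # domain classifier (logistic)

# Forward: features -> GRL -> domain probability.
F = X @ W_f
p = sigmoid(grl_forward(F) @ w_d)

# Backward: binary cross-entropy gradient w.r.t. the domain logit.
g_logit = (p - d) / len(d)
g_wd = F.T @ g_logit                               # true gradient for the domain head
g_F = grl_backward(np.outer(g_logit, w_d), lam=0.5)  # reversed gradient into features
g_Wf = X.T @ g_F

lr = 0.1
w_d -= lr * g_wd    # domain head: minimize the domain loss
W_f -= lr * g_Wf    # features: maximize it, via the sign flip
```

In a full setup the label head's (unreversed) gradient would be added to `g_Wf`, so the features stay predictive for defects while becoming uninformative about the acquisition domain.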
Related papers
- Efficient entity-based reinforcement learning [3.867363075280544]
We propose to combine recent advances in set representations with slot attention and graph neural networks to process structured data.
We show that this combination can improve training time and robustness significantly, and demonstrate its potential to handle structured as well as purely visual domains.
arXiv Detail & Related papers (2022-06-06T19:02:39Z) - Improving generalization with synthetic training data for deep learning based quality inspection [0.0]
Supervised deep learning requires a large amount of annotated images for training.
In practice, collecting and annotating such data is costly and laborious.
We show that the use of randomly generated synthetic training images can help tackle domain instability.
arXiv Detail & Related papers (2022-02-25T16:51:01Z) - Visual Adversarial Imitation Learning using Variational Models [60.69745540036375]
Reward function specification remains a major impediment for learning behaviors through deep reinforcement learning.
Visual demonstrations of desired behaviors often present an easier and more natural way to teach agents.
We develop a variational model-based adversarial imitation learning algorithm.
arXiv Detail & Related papers (2021-07-16T00:15:18Z) - Crop-Transform-Paste: Self-Supervised Learning for Visual Tracking [137.26381337333552]
In this work, we develop the Crop-Transform-Paste operation, which is able to synthesize sufficient training data.
Since the object state is known in all synthesized data, existing deep trackers can be trained in routine ways without human annotation.
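The Crop-Transform-Paste idea above can be sketched in a few lines: crop a target, apply a transform, paste it onto a background, and the annotation (here a bounding box) is known by construction. This is a hedged illustration with made-up names and a single flip transform, not the paper's actual pipeline.

```python
import numpy as np

def crop_transform_paste(background, patch, top, left, flip=False):
    # Paste a (possibly flipped) object patch onto a background frame.
    obj = patch[:, ::-1] if flip else patch
    frame = background.copy()
    h, w = obj.shape
    frame[top:top + h, left:left + w] = obj
    # The object's state is known by construction, so the synthesized
    # sample comes annotated for free, with no human labeling.
    bbox = (top, left, h, w)
    return frame, bbox

rng = np.random.default_rng(0)
background = rng.uniform(size=(32, 32))   # stand-in for a video frame
patch = rng.uniform(size=(8, 8))          # stand-in for a cropped target

frame, bbox = crop_transform_paste(background, patch, top=5, left=10, flip=True)
```

A real pipeline would draw positions, scales, and photometric transforms at random each step; the key property is only that `bbox` is exact for every synthesized frame.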
arXiv Detail & Related papers (2021-06-21T07:40:34Z) - Data-efficient Weakly-supervised Learning for On-line Object Detection under Domain Shift in Robotics [24.878465999976594]
Several object detection methods have been proposed in the literature, the vast majority based on Deep Convolutional Neural Networks (DCNNs).
These methods have important limitations for robotics: learning solely on off-line data may introduce biases and prevent adaptation to novel tasks.
In this work, we investigate how weakly-supervised learning can cope with these problems.
arXiv Detail & Related papers (2020-12-28T16:36:11Z) - Parrot: Data-Driven Behavioral Priors for Reinforcement Learning [79.32403825036792]
We propose a method for pre-training behavioral priors that can capture complex input-output relationships observed in successful trials.
We show how this learned prior can be used for rapidly learning new tasks without impeding the RL agent's ability to try out novel behaviors.
arXiv Detail & Related papers (2020-11-19T18:47:40Z) - Adversarially-Trained Deep Nets Transfer Better: Illustration on Image
Classification [53.735029033681435]
Transfer learning is a powerful methodology for adapting pre-trained deep neural networks on image recognition tasks to new domains.
In this work, we demonstrate that adversarially-trained models transfer better than non-adversarially-trained models.
arXiv Detail & Related papers (2020-07-11T22:48:42Z) - Auto-Rectify Network for Unsupervised Indoor Depth Estimation [119.82412041164372]
We establish that the complex ego-motions exhibited in handheld settings are a critical obstacle for learning depth.
We propose a data pre-processing method that rectifies training images by removing their relative rotations for effective learning.
Our results outperform the previous unsupervised SOTA method by a large margin on the challenging NYUv2 dataset.
arXiv Detail & Related papers (2020-06-04T08:59:17Z) - Any-Shot Sequential Anomaly Detection in Surveillance Videos [36.24563211765782]
We propose an online anomaly detection method for surveillance videos using transfer learning and any-shot learning.
Our proposed algorithm leverages the feature extraction power of neural network-based models for transfer learning and the any-shot learning capability of statistical detection methods.
arXiv Detail & Related papers (2020-04-05T02:15:45Z) - Curriculum By Smoothing [52.08553521577014]
Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation.
We propose an elegant curriculum-based scheme that smooths the feature embeddings of a CNN using anti-aliasing or low-pass filters.
As the amount of information in the feature maps increases during training, the network is able to progressively learn better representations of the data.
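The smoothing curriculum described above can be illustrated with a Gaussian low-pass filter whose standard deviation is annealed toward zero, so high-frequency detail in the feature maps is admitted progressively. This is a minimal numpy sketch under assumed names and an assumed annealing schedule, not the paper's implementation.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius=3):
    # Normalized 1-D Gaussian kernel of width 2*radius + 1.
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def smooth_feature_map(fmap, sigma):
    # Separable Gaussian blur: filter rows, then columns.
    k = gaussian_kernel1d(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, fmap)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
    return out

rng = np.random.default_rng(0)
fmap = rng.normal(size=(16, 16))   # one channel of a CNN feature map

# Anneal sigma across training: strong smoothing early, nearly none late,
# so the network sees progressively more high-frequency information.
for epoch, sigma in enumerate([2.0, 1.0, 0.5, 0.1]):
    out = smooth_feature_map(fmap, sigma)
```

With i.i.d. noise as input, the heavily smoothed map has lower variance than the lightly smoothed one, which is the "progressively more information" behavior the summary describes.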
arXiv Detail & Related papers (2020-03-03T07:27:44Z) - Self-supervised visual feature learning with curriculum [0.24366811507669126]
This paper takes inspiration from curriculum learning to progressively remove low-level signals.
It shows that this significantly increases the speed of convergence of the downstream task.
arXiv Detail & Related papers (2020-01-16T03:28:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.