Pseudo Replay-based Class Continual Learning for Online New Category
Anomaly Detection in Additive Manufacturing
- URL: http://arxiv.org/abs/2312.02491v1
- Date: Tue, 5 Dec 2023 04:43:23 GMT
- Title: Pseudo Replay-based Class Continual Learning for Online New Category
Anomaly Detection in Additive Manufacturing
- Authors: Zhangyue Shi, Tianxin Xie, Chenang Liu, Yuxuan Li
- Abstract summary: This paper develops a novel pseudo replay-based continual learning by integrating class incremental learning and oversampling-based data generation.
The effectiveness of the proposed framework is validated in an additive manufacturing process.
- Score: 5.4754728413969405
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The incorporation of advanced sensors and machine learning techniques has
enabled modern manufacturing enterprises to perform data-driven in-situ quality
monitoring based on the sensor data collected in manufacturing processes.
However, one critical challenge is that newly presented defect categories may
manifest as the manufacturing process continues, resulting in monitoring
performance deterioration of previously trained machine learning models. Hence,
there is an increasing need to empower machine learning models to learn
continually. Among all continual learning methods, memory-based continual
learning has the best performance but faces the constraint of data storage
capacity. To address this issue, this paper develops a novel pseudo
replay-based continual learning framework by integrating class incremental
learning and oversampling-based data generation. Without storing all the data,
the developed framework can generate high-quality data representing previous
classes to train the machine learning model incrementally when a new anomaly
category occurs. In addition, it can even enhance monitoring performance, since
it also effectively improves the data quality. The effectiveness of the
proposed framework is validated in an additive manufacturing process, which
formulates anomaly detection as a supervised classification problem. The
experimental results show that the developed method is very promising in
detecting novel anomalies while maintaining good performance on previous tasks,
and it offers more flexibility in the model architecture.
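As a rough illustration of the idea (not the authors' implementation), the sketch below replays previous classes from compact per-class Gaussian summaries instead of stored raw data. The Gaussian oversampler and the nearest-centroid classifier are simplifying assumptions standing in for the paper's oversampling-based generator and incrementally trained model:

```python
# Minimal sketch of pseudo replay-based class-incremental learning.
# Assumptions (not from the paper): a per-class Gaussian stands in for the
# oversampling-based data generator, and a nearest-centroid classifier
# stands in for the monitoring model.
import numpy as np

rng = np.random.default_rng(0)

class PseudoReplayLearner:
    def __init__(self):
        self.stats = {}  # per-class (mean, cov): compact summary, not raw data

    def fit_increment(self, X_new, y_new, n_pseudo=200):
        # 1) Generate pseudo samples representing every previously seen class.
        Xs, ys = [X_new], [y_new]
        for label, (mu, cov) in self.stats.items():
            Xp = rng.multivariate_normal(mu, cov, size=n_pseudo)
            Xs.append(Xp)
            ys.append(np.full(n_pseudo, label))
        X, y = np.vstack(Xs), np.concatenate(ys)
        # 2) Summarize the new class(es) so they can be replayed later.
        for label in np.unique(y_new):
            Xc = X_new[y_new == label]
            self.stats[label] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False))
        # 3) Retrain on real new data plus pseudo replay of old classes.
        self.centroids = {l: X[y == l].mean(axis=0) for l in np.unique(y)}

    def predict(self, X):
        labels = list(self.centroids)
        d = np.stack([np.linalg.norm(X - self.centroids[l], axis=1) for l in labels])
        return np.array([labels[i] for i in d.argmin(axis=0)])

# Task 1: normal (0) vs. a known anomaly (1); Task 2: a new anomaly category (2).
X0 = rng.normal([0, 0], 0.3, size=(100, 2))
X1 = rng.normal([3, 0], 0.3, size=(100, 2))
X2 = rng.normal([0, 3], 0.3, size=(100, 2))

learner = PseudoReplayLearner()
learner.fit_increment(np.vstack([X0, X1]), np.array([0] * 100 + [1] * 100))
learner.fit_increment(X2, np.full(100, 2))  # old classes come from pseudo replay only
print(learner.predict(np.array([[0, 0], [3, 0], [0, 3]])))  # [0 1 2]
```

After the second increment, the model still recognizes classes 0 and 1 even though their raw data was never stored, only their distributional summaries.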
Related papers
- Controlling Forgetting with Test-Time Data in Continual Learning [15.455400390299593]
Ongoing Continual Learning research provides techniques to overcome catastrophic forgetting of previous information when new knowledge is acquired.
We argue that test-time data hold valuable information that can be leveraged in a self-supervised manner to refresh the model's memory of previously learned tasks.
arXiv Detail & Related papers (2024-06-19T15:56:21Z)
- Dataset Condensation Driven Machine Unlearning [0.0]
Current trend in data regulation requirements and privacy-preserving machine learning has emphasized the importance of machine unlearning.
We propose new dataset condensation techniques and an innovative unlearning scheme that strikes a balance among privacy, utility, and efficiency in machine unlearning.
We present a novel and effective approach to instrumenting machine unlearning and propose its application in defending against membership inference and model inversion attacks.
arXiv Detail & Related papers (2024-01-31T21:48:25Z)
- Continual Learning of Diffusion Models with Generative Distillation [34.52513912701778]
Diffusion models are powerful generative models that achieve state-of-the-art performance in image synthesis.
In this paper, we propose generative distillation, an approach that distils the entire reverse process of a diffusion model.
arXiv Detail & Related papers (2023-11-23T14:33:03Z)
- Generative Adversarial Networks Unlearning [13.342749941357152]
Machine unlearning has emerged as a solution to erase training data from trained machine learning models.
Research on unlearning for Generative Adversarial Networks (GANs) is limited due to their unique architecture, which includes a generator and a discriminator.
We propose a cascaded unlearning approach for both item and class unlearning within GAN models, in which the unlearning and learning processes run in a cascaded manner.
arXiv Detail & Related papers (2023-08-19T02:21:21Z)
- Defect Classification in Additive Manufacturing Using CNN-Based Vision Processing [76.72662577101988]
This paper examines two scenarios: first, using convolutional neural networks (CNNs) to accurately classify defects in an image dataset from AM and second, applying active learning techniques to the developed classification model.
This allows the construction of a human-in-the-loop mechanism to reduce the amount of data required for training and to generate training data.
arXiv Detail & Related papers (2023-07-14T14:36:58Z)
- A Novel Strategy for Improving Robustness in Computer Vision Manufacturing Defect Detection [1.3198689566654107]
Visual quality inspection in high performance manufacturing can benefit from automation, due to cost savings and improved rigor.
Deep learning techniques are the current state of the art for generic computer vision tasks like classification and object detection.
Manufacturing data can pose a challenge for deep learning because data is highly repetitive and there are few images of defects or deviations to learn from.
arXiv Detail & Related papers (2023-05-16T12:51:51Z)
- An Adversarial Active Sampling-based Data Augmentation Framework for Manufacturable Chip Design [55.62660894625669]
Lithography modeling is a crucial problem in chip design to ensure a chip design mask is manufacturable.
Recent developments in machine learning have provided alternative solutions in replacing the time-consuming lithography simulations with deep neural networks.
We propose a litho-aware data augmentation framework to resolve the dilemma of limited data and improve the machine learning model performance.
arXiv Detail & Related papers (2022-10-27T20:53:39Z)
- A Memory Transformer Network for Incremental Learning [64.0410375349852]
We study class-incremental learning, a training setup in which new classes of data are observed over time for the model to learn from.
Despite the straightforward problem formulation, the naive application of classification models to class-incremental learning results in the "catastrophic forgetting" of previously seen classes.
One of the most successful existing methods has been the use of a memory of exemplars, which overcomes the issue of catastrophic forgetting by saving a subset of past data into a memory bank and utilizing it to prevent forgetting when training future tasks.
arXiv Detail & Related papers (2022-10-10T08:27:28Z)
- Continual Learning with Bayesian Model based on a Fixed Pre-trained Feature Extractor [55.9023096444383]
Current deep learning models are characterised by catastrophic forgetting of old knowledge when learning new classes.
Inspired by the process of learning new knowledge in human brains, we propose a Bayesian generative model for continual learning.
arXiv Detail & Related papers (2022-04-28T08:41:51Z)
- Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources [78.72922528736011]
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses.
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
arXiv Detail & Related papers (2020-07-17T01:52:34Z)
- Automatic Recall Machines: Internal Replay, Continual Learning and the Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead, the implicit memory of learned samples within the assessed model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z)
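The internal-replay idea described in the last entry can be sketched in a few lines: auxiliary samples for old classes are synthesized from the trained model alone, by gradient ascent on the model's class log-probability starting from noise. The linear softmax classifier, step sizes, and toy data below are illustrative assumptions, not the paper's actual setup.

```python
# Hedged sketch of internal replay: no stored data, only the model itself.
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train(W, X, y, n_classes, lr=0.1, steps=500):
    # Plain gradient descent on the cross-entropy of a linear softmax model.
    Y = np.eye(n_classes)[y]
    for _ in range(steps):
        P = softmax(X @ W)
        W = W - lr * X.T @ (P - Y) / len(X)
    return W

def recall(W, label, n=100, lr=0.1, steps=200):
    # "Replay" without stored data: start from noise and follow the gradient
    # of log P(label | x), exploiting the memory implicit in the weights.
    X = rng.normal(size=(n, W.shape[0]))
    for _ in range(steps):
        P = softmax(X @ W)
        X = X + lr * (W[:, label] - P @ W.T)  # d log P(label | x) / dx
    return X

# Task 1: two classes along the x-axis.
X0 = rng.normal([-3, 0], 0.3, size=(80, 2))
X1 = rng.normal([3, 0], 0.3, size=(80, 2))
W = train(np.zeros((2, 2)), np.vstack([X0, X1]), np.array([0] * 80 + [1] * 80), 2)

# Task 2: a new class appears; old classes are recalled from the model alone.
X_recall = np.vstack([recall(W, 0), recall(W, 1)])
y_recall = np.array([0] * 100 + [1] * 100)
X2 = rng.normal([0, 3], 0.3, size=(80, 2))
W = np.hstack([W, np.zeros((2, 1))])  # grow the output layer for class 2
W = train(W, np.vstack([X_recall, X2]), np.concatenate([y_recall, np.full(80, 2)]), 3)

preds = softmax(np.array([[-3.0, 0.0], [3.0, 0.0], [0.0, 3.0]]) @ W).argmax(axis=1)
print(preds)  # old classes 0 and 1 survive alongside the new class 2
```

The recalled samples are not the original data points, but they occupy the same decision regions, which is enough to counteract forgetting in this toy setting.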
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.