Developing a Resource-Constraint EdgeAI model for Surface Defect
Detection
- URL: http://arxiv.org/abs/2401.05355v1
- Date: Mon, 4 Dec 2023 15:28:31 GMT
- Title: Developing a Resource-Constraint EdgeAI model for Surface Defect
Detection
- Authors: Atah Nuh Mih, Hung Cao, Asfia Kawnine, Monica Wachowicz
- Abstract summary: We propose a lightweight EdgeAI architecture, modified from Xception, for on-device training in a resource-constrained edge environment.
We evaluate our model on a PCB defect detection task and compare its performance against existing lightweight models.
Our method can be applied to other resource-constrained applications while maintaining strong performance.
- Score: 1.338174941551702
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Resource constraints have restricted several EdgeAI applications to
machine learning inference approaches, where models are trained on the cloud
and deployed to the edge device. This poses challenges such as bandwidth,
latency, and privacy associated with storing data off-site for model building.
Training on the edge device can overcome these challenges by eliminating the
need to transfer data to another device for storage and model development.
On-device training also provides robustness to data variations, as models can
be retrained on newly acquired data to improve performance. We therefore
propose a lightweight EdgeAI architecture, modified from Xception, for
on-device training in a resource-constrained edge environment. We evaluate our
model on a PCB defect detection task and compare its performance against
existing lightweight models: MobileNetV2, EfficientNetV2B0, and MobileViT-XXS.
Our experiments show that our model reaches a test accuracy of 73.45% without
pre-training. This is comparable to the test accuracy of non-pre-trained
MobileViT-XXS (75.40%) and much better than that of the other non-pre-trained
models (MobileNetV2: 50.05%; EfficientNetV2B0: 54.30%). It is also comparable
to the pre-trained MobileNetV2 model (75.45%) and better than the pre-trained
EfficientNetV2B0 model (58.10%). In terms of memory efficiency, our model
performs better than EfficientNetV2B0 and MobileViT-XXS. We find that the
resource efficiency of machine learning models does not depend solely on the
number of parameters but also on architectural considerations. Our method can
be applied to other resource-constrained applications while maintaining strong
performance.
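
As a concrete illustration, the sketch below builds a small Xception-style classifier from depthwise-separable convolutions with residual shortcuts, the design pattern the paper adapts. The stage widths, input size, and six-class output head are assumptions for illustration, not the authors' exact configuration.

```python
from tensorflow.keras import layers, models

def separable_block(x, filters):
    # Xception-style unit: two separable convs with a strided 1x1 residual shortcut.
    shortcut = layers.Conv2D(filters, 1, strides=2, padding="same")(x)
    x = layers.SeparableConv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.SeparableConv2D(filters, 3, padding="same")(x)
    x = layers.MaxPooling2D(3, strides=2, padding="same")(x)
    return layers.ReLU()(layers.Add()([x, shortcut]))

inputs = layers.Input(shape=(224, 224, 3))
x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inputs)
for filters in (64, 128, 256):       # fewer, narrower stages than full Xception
    x = separable_block(x, filters)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(6, activation="softmax")(x)  # hypothetical 6 defect classes

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()   # the parameter count is what the on-device budget constrains
```

Depthwise-separable convolutions factor a standard convolution into a per-channel spatial filter plus a 1x1 channel mixer, which is where most of the parameter savings come from and why such models can fit on-device training budgets.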
Related papers
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937] (2024-10-29)
Task-oriented edge computing addresses resource constraints by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
- Achieving Pareto Optimality using Efficient Parameter Reduction for DNNs in Resource-Constrained Edge Environment [1.9055921262476347] (2024-03-14)
This paper proposes an optimization of an existing Deep Neural Network (DNN) that improves its hardware utilization and facilitates on-device training for resource-constrained edge environments.
We implement efficient parameter reduction strategies on Xception that shrink the model size without sacrificing accuracy, thus decreasing memory utilization during training.
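
A quick way to see why such strategies reduce memory: replacing standard convolutions with depthwise-separable ones cuts the parameter count by roughly an order of magnitude at typical widths. The comparison below uses arbitrary example shapes, not the paper's configuration.

```python
from tensorflow.keras import layers, models

def param_count(conv_layer):
    # Two-layer stack with arbitrary example shapes.
    model = models.Sequential([
        layers.Input(shape=(64, 64, 32)),
        conv_layer(64, 3, padding="same", activation="relu"),
        conv_layer(128, 3, padding="same", activation="relu"),
    ])
    return model.count_params()

print(f"standard convs:  {param_count(layers.Conv2D):,} params")           # ~92k
print(f"separable convs: {param_count(layers.SeparableConv2D):,} params")  # ~11k
```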
- Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning Interference with Gradient Projection [56.292071534857946] (2023-12-07)
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient-based learning method, named Projected-Gradient Unlearning (PGU).
We provide empirical evidence that our unlearning method can produce models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
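
The core projection step is framework-independent: keep an orthonormal basis for the subspace of gradients that matter to the retained data, and restrict unlearning updates to its orthogonal complement. The NumPy toy below shows the general recipe; the paper's exact subspace construction may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 10

# Stand-in for gradients important to the retained data; in practice this
# basis would be built from gradients (e.g., via an SVD) on that data.
G_retain = rng.normal(size=(dim, 3))
Q, _ = np.linalg.qr(G_retain)      # orthonormal basis of the retained subspace

def project_out(g, Q):
    """Remove from g its component inside span(Q)."""
    return g - Q @ (Q.T @ g)

g_forget = rng.normal(size=dim)    # gradient of the unlearning objective
step = project_out(g_forget, Q)    # update direction that spares retained knowledge

print(np.abs(Q.T @ step).max())    # ~1e-16: orthogonal to all retained directions
```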
- Zero-shot Retrieval: Augmenting Pre-trained Models with Search Engines [83.65380507372483] (2023-11-29)
Large pre-trained models can dramatically reduce the amount of task-specific data required to solve a problem, but they often fail to capture domain-specific nuances out of the box.
This paper shows how to leverage recent advances in NLP and multi-modal learning to augment a pre-trained model with search engine retrieval.
- Transfer Learning in Deep Learning Models for Building Load Forecasting: Case of Limited Data [0.0] (2023-01-25)
This paper proposes a Building-to-Building Transfer Learning framework to overcome limited data and enhance the performance of Deep Learning models.
The proposed approach improved the forecasting accuracy by 56.8% compared to the case of conventional deep learning where training from scratch is used.
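
Building-to-building transfer of this kind is commonly realized by pretraining on a data-rich source building, freezing the learned encoder, and fine-tuning only the head on the target building's limited data. The placeholder Keras sketch below assumes an LSTM forecaster, which is not necessarily the paper's model.

```python
from tensorflow.keras import layers, models

# Placeholder forecaster pretrained on a data-rich source building.
model = models.Sequential([
    layers.Input(shape=(24, 1)),               # e.g., 24 hourly load readings
    layers.LSTM(32, name="encoder"),
    layers.Dense(16, activation="relu", name="head"),
    layers.Dense(1, name="out"),               # next-step load forecast
])
# model.fit(source_X, source_y, ...)           # pretraining on the source building

# Transfer: freeze the encoder, retrain only the head on the target building.
model.get_layer("encoder").trainable = False
model.compile(optimizer="adam", loss="mse")
# model.fit(target_X, target_y, epochs=10)     # small target dataset
```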
- Incremental Online Learning Algorithms Comparison for Gesture and Visual Smart Sensors [68.8204255655161] (2022-09-01)
This paper compares four state-of-the-art algorithms in two real applications: gesture recognition based on accelerometer data and image classification.
Our results confirm these systems' reliability and the feasibility of deploying them in tiny-memory MCUs.
- Efficient Deep Learning Methods for Identification of Defective Casting Products [0.0] (2022-05-14)
In this paper, we have compared and contrasted various pre-trained and custom-built AI architectures.
Our results show that custom architectures are more efficient than pre-trained mobile architectures.
Augmentation experimentations have also been carried out on the custom architectures to make the models more robust and generalizable.
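
Such augmentation experiments usually amount to random perturbations applied on the fly during training; a generic Keras pipeline might look like the following, with transforms and ranges chosen for illustration rather than taken from the paper.

```python
from tensorflow.keras import layers, models

# Random geometric and photometric perturbations applied during training.
augment = models.Sequential([
    layers.RandomFlip("horizontal_and_vertical"),
    layers.RandomRotation(0.1),      # up to +/-10% of a full turn
    layers.RandomZoom(0.1),
    layers.RandomContrast(0.2),
])

# Typically used as the first stage of the classifier, e.g.:
# inputs = layers.Input(shape=(300, 300, 3))
# x = augment(inputs)                # active only during training
# ... custom architecture continues ...
```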
- Self-Supervised Pre-Training for Transformer-Based Person Re-Identification [54.55281692768765] (2021-11-23)
Transformer-based supervised pre-training achieves strong performance in person re-identification (ReID).
Due to the domain gap between ImageNet and ReID datasets, it usually needs a larger pre-training dataset to boost the performance.
This work aims to mitigate the gap between the pre-training and ReID datasets from the perspective of data and model structure.
- Continual Learning at the Edge: Real-Time Training on Smartphone Devices [11.250227901473952] (2021-05-24)
This paper describes the implementation and deployment of a hybrid learning strategy (AR1*) on a native Android application for real-time on-device personalization without forgetting.
Our benchmark, based on an extension of the CORe50 dataset, shows the efficiency and effectiveness of our solution.
- DeepObliviate: A Powerful Charm for Erasing Data Residual Memory in Deep Neural Networks [7.687838702806964] (2021-05-13)
We propose an approach, dubbed as DeepObliviate, to implement machine unlearning efficiently.
Our approach improves the original training process by storing intermediate models on the hard disk.
Compared to the method of retraining from scratch, our approach can achieve 99.0%, 95.0%, 91.9%, 96.7%, and 74.1% accuracy rates and 66.7×, 75.0×, 33.3×, 29.4×, and 13.7× speedups.
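
The stored-intermediate-models idea can be approximated with ordinary checkpointing: save training state periodically so that unlearning a sample only requires rolling back to the last model that never saw it and retraining the tail. The Keras sketch below uses epoch-level granularity for brevity; DeepObliviate's actual bookkeeping may be finer-grained.

```python
import tensorflow as tf

# Checkpoint after every epoch so that, when an example must be unlearned,
# training can resume from the last model that never saw it rather than
# from scratch. Paths and epoch-level granularity are illustrative.
ckpt = tf.keras.callbacks.ModelCheckpoint(
    filepath="ckpts/epoch_{epoch:03d}.weights.h5",
    save_weights_only=True,
)
# model.fit(X, y, epochs=50, callbacks=[ckpt])

# To unlearn data first seen in epoch k, roll back and retrain the tail:
# model.load_weights(f"ckpts/epoch_{k - 1:03d}.weights.h5")
# model.fit(X_kept, y_kept, initial_epoch=k - 1, epochs=50, callbacks=[ckpt])
```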
- ALT-MAS: A Data-Efficient Framework for Active Testing of Machine Learning Algorithms [58.684954492439424] (2021-04-11)
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model-under-test using a Bayesian neural network (BNN).
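
One way to read the BNN idea: if the BNN yields calibrated label probabilities for unlabeled test points, the accuracy of a model-under-test can be estimated as the average probability mass the BNN assigns to that model's predictions. The toy sketch below uses random stand-ins for both models to show only this bookkeeping, not ALT-MAS itself.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 1000, 3                     # unlabeled test points, classes

# Stand-ins: BNN posterior predictive probabilities and the predictions
# of the model-under-test (both would come from real models).
bnn_probs = rng.dirichlet(np.ones(k), size=n)   # shape (n, k)
mut_preds = rng.integers(0, k, size=n)          # model-under-test labels

# Estimated accuracy: mean BNN probability mass on the model's predictions.
est_accuracy = bnn_probs[np.arange(n), mut_preds].mean()
print(f"estimated accuracy: {est_accuracy:.3f}")
```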
This list is automatically generated from the titles and abstracts of the papers on this site.