Defect Analysis of 3D Printed Cylinder Object Using Transfer Learning
Approaches
- URL: http://arxiv.org/abs/2310.08645v1
- Date: Thu, 12 Oct 2023 18:10:36 GMT
- Title: Defect Analysis of 3D Printed Cylinder Object Using Transfer Learning
Approaches
- Authors: Md Manjurul Ahsan, Shivakumar Raman and Zahed Siddique
- Abstract summary: This study explores the effectiveness of machine learning approaches, specifically transfer learning (TL) models, for defect detection in 3D-printed cylinders.
Images of cylinders were analyzed using models including VGG16, VGG19, ResNet50, ResNet101, InceptionResNetV2, and MobileNetV2.
Results suggest certain TL models can deliver high accuracy for AM defect classification, although performance varies across algorithms.
- Score: 0.51795041186793
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Additive manufacturing (AM) is gaining attention across various industries
like healthcare, aerospace, and automotive. However, identifying defects early
in the AM process remains a key challenge; doing so can reduce production costs
and improve productivity. This study explored the effectiveness of machine
learning (ML)
approaches, specifically transfer learning (TL) models, for defect detection in
3D-printed cylinders. Images of cylinders were analyzed using models including
VGG16, VGG19, ResNet50, ResNet101, InceptionResNetV2, and MobileNetV2.
Performance was compared across two datasets using accuracy, precision, recall,
and F1-score metrics. In the first study, VGG16, InceptionResNetV2, and
MobileNetV2 achieved perfect scores. In contrast, ResNet50 had the lowest
performance, with an average F1-score of 0.32. Similarly, in the second study,
MobileNetV2 correctly classified all instances, while ResNet50 struggled with
more false positives and fewer true positives, resulting in an F1-score of
0.75. Overall, the findings suggest certain TL models like MobileNetV2 can
deliver high accuracy for AM defect classification, although performance varies
across algorithms. The results provide insights into model optimization and
integration needs for reliable automated defect analysis during 3D printing. By
identifying the top-performing TL techniques, this study aims to enhance AM
product quality through robust image-based monitoring and inspection.
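The page does not include code, but the workflow described in the abstract (an ImageNet-pretrained backbone such as MobileNetV2 with a small classification head, scored by accuracy, precision, recall, and F1) follows the standard transfer-learning recipe. The sketch below illustrates that recipe in Keras under assumed conditions: a hypothetical cylinder_images/{train,test} folder with defective and non_defective subfolders, binary labels, and illustrative hyperparameters. It is not the authors' implementation.

```python
# Minimal transfer-learning sketch (not the authors' released code): train an
# ImageNet-pretrained MobileNetV2 with a small head on a hypothetical folder of
# cylinder images and report the metrics used in the paper.
import numpy as np
import tensorflow as tf
from sklearn.metrics import classification_report

IMG_SIZE = (224, 224)

# Assumed layout: cylinder_images/{train,test}/{defective,non_defective}/*.png
train_ds = tf.keras.utils.image_dataset_from_directory(
    "cylinder_images/train", image_size=IMG_SIZE, batch_size=32,
    label_mode="binary")
test_ds = tf.keras.utils.image_dataset_from_directory(
    "cylinder_images/test", image_size=IMG_SIZE, batch_size=32,
    label_mode="binary", shuffle=False)

# Frozen ImageNet backbone plus a trainable classification head: the standard
# transfer-learning recipe that the paper compares across backbones.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.mobilenet_v2.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)  # epoch count is illustrative

# Accuracy, precision, recall, and F1-score, as in the paper's evaluation.
# Class indices follow alphabetical folder order: 0 = defective, 1 = non_defective.
y_true = np.concatenate([y.numpy() for _, y in test_ds]).ravel().astype(int)
y_pred = (model.predict(test_ds).ravel() > 0.5).astype(int)
print(classification_report(y_true, y_pred,
                            target_names=["defective", "non_defective"]))
```

Swapping MobileNetV2 for VGG16, VGG19, ResNet50, ResNet101, or InceptionResNetV2 changes only the `base` and `preprocess_input` lines, which is what makes this kind of backbone comparison straightforward to set up.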
Related papers
- Fine-Tuning Vision-Language Model for Automated Engineering Drawing Information Extraction [0.0]
Florence-2 is an open-source vision-language model (VLM).
It is trained on a dataset of 400 drawings with ground truth annotations provided by domain experts.
It achieves a 29.95% increase in precision, a 37.75% increase in recall, a 52.40% improvement in F1-score, and a 43.15% reduction in hallucination rate.
arXiv Detail & Related papers (2024-11-06T07:11:15Z) - Uncertainty Estimation for 3D Object Detection via Evidential Learning [63.61283174146648]
We introduce a framework for quantifying uncertainty in 3D object detection by leveraging an evidential learning loss on Bird's Eye View representations in the 3D detector.
We demonstrate both the efficacy and importance of these uncertainty estimates on identifying out-of-distribution scenes, poorly localized objects, and missing (false negative) detections.
arXiv Detail & Related papers (2024-10-31T13:13:32Z) - DiRecNetV2: A Transformer-Enhanced Network for Aerial Disaster Recognition [4.678150356894011]
The integration of Unmanned Aerial Vehicles (UAVs) with artificial intelligence (AI) models for aerial imagery processing in disaster assessment requires exceptional accuracy, computational efficiency, and real-time processing capabilities.
Traditionally, Convolutional Neural Networks (CNNs) have demonstrated efficiency in local feature extraction but are limited in their capacity for global context interpretation.
Vision Transformers (ViTs) show promise for improved global context interpretation through the use of attention mechanisms, although they still remain underinvestigated in UAV-based disaster response applications.
arXiv Detail & Related papers (2024-10-17T15:25:13Z) - Accelerating Domain-Aware Electron Microscopy Analysis Using Deep Learning Models with Synthetic Data and Image-Wide Confidence Scoring [0.0]
We create a physics-based synthetic image and data generator, resulting in a machine learning model that achieves comparable precision (0.86), recall (0.63), F1 score (0.71), and engineering property predictions (R² = 0.82).
Our study demonstrates that synthetic data can eliminate reliance on human labeling in ML and provides a means for domain awareness in cases where many feature detections per image are needed.
arXiv Detail & Related papers (2024-08-02T20:15:15Z) - Self-supervised Feature Adaptation for 3D Industrial Anomaly Detection [59.41026558455904]
We focus on multi-modal anomaly detection. Specifically, we investigate early multi-modal approaches that attempted to utilize models pre-trained on large-scale visual datasets.
We propose a Local-to-global Self-supervised Feature Adaptation (LSFA) method to finetune the adaptors and learn task-oriented representation toward anomaly detection.
arXiv Detail & Related papers (2024-01-06T07:30:41Z) - On Calibration of Modern Quantized Efficient Neural Networks [79.06893963657335]
Quality of calibration is observed to track the quantization quality.
GhostNet-VGG is shown to be the most robust to overall performance drop at lower precision.
arXiv Detail & Related papers (2023-09-25T04:30:18Z) - Benchmarking Deep Learning Frameworks for Automated Diagnosis of Ocular
Toxoplasmosis: A Comprehensive Approach to Classification and Segmentation [1.3701366534590498]
Ocular Toxoplasmosis (OT) is a common eye infection caused by T. gondii that can lead to vision problems.
This research seeks to provide a guide for future researchers looking to utilise DL techniques and develop a cheap, automated, easy-to-use, and accurate diagnostic method.
arXiv Detail & Related papers (2023-05-18T13:42:15Z) - Voxel-wise classification for porosity investigation of additive
manufactured parts with 3D unsupervised and (deeply) supervised neural
networks [5.467497693327066]
This study revisits recent supervised (UNet, UNet++, UNet 3+, MSS-UNet) and unsupervised (VAE, ceVAE, gmVAE, vqVAE) DL models for volumetric analysis of AM samples from X-CT images.
It extends them to accept 3D input data with a 3D-patch pipeline for lower computational requirements, improved efficiency and generalisability.
The VAE/ceVAE models demonstrated superior capabilities, particularly when leveraging post-processing techniques.
arXiv Detail & Related papers (2023-05-13T11:23:00Z) - LGD: Label-guided Self-distillation for Object Detection [59.9972914042281]
We propose the first self-distillation framework for general object detection, termed LGD (Label-Guided self-Distillation).
Our framework involves sparse label-appearance encoding, inter-object relation adaptation and intra-object knowledge mapping to obtain the instructive knowledge.
Compared with FGFI, a classical teacher-based method, LGD not only performs better without requiring a pretrained teacher but also incurs 51% lower training cost beyond inherent student learning.
arXiv Detail & Related papers (2021-09-23T16:55:01Z) - When Vision Transformers Outperform ResNets without Pretraining or
Strong Data Augmentations [111.44860506703307]
Vision Transformers (ViTs) and MLPs signal further efforts to replace hand-wired features or inductive biases with general-purpose neural architectures.
This paper investigates ViTs and MLP-Mixers from the lens of loss geometry, intending to improve the models' data efficiency at training and inference.
We show that the improved robustness is attributable to sparser active neurons in the first few layers.
The resultant ViTs outperform ResNets of similar size and throughput when trained from scratch on ImageNet without large-scale pretraining or strong data augmentations.
arXiv Detail & Related papers (2021-06-03T02:08:03Z) - Calibrating Deep Neural Networks using Focal Loss [77.92765139898906]
Miscalibration is a mismatch between a model's confidence and its correctness.
We show that focal loss allows us to learn models that are already very well calibrated.
We show that our approach achieves state-of-the-art calibration without compromising on accuracy in almost all cases.
arXiv Detail & Related papers (2020-02-21T17:35:50Z)
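For readers unfamiliar with the focal loss referenced in the last entry, the sketch below gives its standard binary form, FL(p_t) = -(1 - p_t)^gamma * log(p_t); the gamma value and the function name are illustrative choices, not the paper's tuned settings.

```python
# Minimal sketch of a binary focal loss (standard formulation); gamma and the
# function name are illustrative, not taken from the calibration paper above.
import tensorflow as tf

def binary_focal_loss(gamma=2.0):
    def loss_fn(y_true, y_pred):
        y_true = tf.cast(y_true, y_pred.dtype)
        eps = tf.keras.backend.epsilon()
        y_pred = tf.clip_by_value(y_pred, eps, 1.0 - eps)
        # p_t: predicted probability assigned to the true class.
        p_t = y_true * y_pred + (1.0 - y_true) * (1.0 - y_pred)
        # The (1 - p_t)^gamma factor down-weights easy, confident examples,
        # which is the mechanism the paper links to better calibration.
        return -tf.reduce_mean(tf.pow(1.0 - p_t, gamma) * tf.math.log(p_t))
    return loss_fn
```

In the transfer-learning sketch earlier on this page, `binary_focal_loss(gamma=2.0)` could be passed to `model.compile(loss=...)` in place of `"binary_crossentropy"` to experiment with calibration.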