Defect Analysis of 3D Printed Cylinder Object Using Transfer Learning Approaches
- URL: http://arxiv.org/abs/2310.08645v1
- Date: Thu, 12 Oct 2023 18:10:36 GMT
- Title: Defect Analysis of 3D Printed Cylinder Object Using Transfer Learning Approaches
- Authors: Md Manjurul Ahsan, Shivakumar Raman and Zahed Siddique
- Abstract summary: This study explores the effectiveness of machine learning approaches, specifically transfer learning (TL) models, for defect detection in 3D-printed cylinders.
Images of cylinders were analyzed using models including VGG16, VGG19, ResNet50, ResNet101, InceptionResNetV2, and MobileNetV2.
Results suggest certain TL models can deliver high accuracy for AM defect classification, although performance varies across algorithms.
- Score: 0.51795041186793
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Additive manufacturing (AM) is gaining attention across various industries
like healthcare, aerospace, and automotive. However, identifying defects early
in the AM process can reduce production costs and improve productivity - a key
challenge. This study explored the effectiveness of machine learning (ML)
approaches, specifically transfer learning (TL) models, for defect detection in
3D-printed cylinders. Images of cylinders were analyzed using models including
VGG16, VGG19, ResNet50, ResNet101, InceptionResNetV2, and MobileNetV2.
Performance was compared across two datasets using accuracy, precision, recall,
and F1-score metrics. In the first study, VGG16, InceptionResNetV2, and
MobileNetV2 achieved perfect scores. In contrast, ResNet50 had the lowest
performance, with an average F1-score of 0.32. Similarly, in the second study,
MobileNetV2 correctly classified all instances, while ResNet50 struggled with
more false positives and fewer true positives, resulting in an F1-score of
0.75. Overall, the findings suggest certain TL models like MobileNetV2 can
deliver high accuracy for AM defect classification, although performance varies
across algorithms. The results provide insights into model optimization and
integration needs for reliable automated defect analysis during 3D printing. By
identifying the top-performing TL techniques, this study aims to enhance AM
product quality through robust image-based monitoring and inspection.
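The study compares models using accuracy, precision, recall, and F1-score. A minimal sketch of how these metrics combine from raw confusion counts is shown below; the counts are illustrative, not taken from the paper (though, for example, a precision of 0.6 with perfect recall yields exactly the 0.75 F1-score reported for ResNet50 in the second study):

```python
# Minimal sketch of the evaluation metrics used in the study (accuracy,
# precision, recall, F1-score), computed from raw confusion counts.
# The counts below are illustrative, not taken from the paper.

def classification_metrics(tp, fp, fn, tn):
    """Compute accuracy, precision, recall and F1 from confusion counts."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

# Many false positives but full recall, e.g. TP=30, FP=20, FN=0, TN=50:
acc, p, r, f1 = classification_metrics(tp=30, fp=20, fn=0, tn=50)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.6 1.0 0.75
```

This illustrates how "more false positives" depresses precision, and with it the F1-score, even when recall stays high.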
Related papers
- Improving ICD coding using Chapter based Named Entities and Attentional Models [0.0]
We introduce an enhanced approach to ICD coding that improves F1 scores by using chapter-based named entities and attentional models.
This method categorizes discharge summaries into ICD-9 Chapters and develops attentional models with chapter-specific data.
For categorization, we use Chapter-IV to de-bias and influence key entities and weights without neural networks.
arXiv Detail & Related papers (2024-07-24T12:34:23Z)
- Depth Estimation using Weighted-loss and Transfer Learning [2.428301619698667]
We propose a simplified and adaptable approach to improve depth estimation accuracy using transfer learning and an optimized loss function.
The results indicate significant improvements in accuracy and robustness, with EfficientNet being the most successful architecture.
arXiv Detail & Related papers (2024-04-11T12:25:54Z)
- Self-supervised Feature Adaptation for 3D Industrial Anomaly Detection [59.41026558455904]
We focus on multi-modal anomaly detection. Specifically, we investigate early multi-modal approaches that attempted to utilize models pre-trained on large-scale visual datasets.
We propose a Local-to-global Self-supervised Feature Adaptation (LSFA) method to finetune the adaptors and learn task-oriented representation toward anomaly detection.
arXiv Detail & Related papers (2024-01-06T07:30:41Z)
- On Calibration of Modern Quantized Efficient Neural Networks [79.06893963657335]
Quality of calibration is observed to track the quantization quality.
GhostNet-VGG is shown to be the most robust to overall performance drop at lower precision.
arXiv Detail & Related papers (2023-09-25T04:30:18Z)
- Benchmarking Deep Learning Frameworks for Automated Diagnosis of Ocular Toxoplasmosis: A Comprehensive Approach to Classification and Segmentation [1.3701366534590498]
Ocular Toxoplasmosis (OT) is a common eye infection caused by T. gondii that can lead to vision problems.
This research seeks to provide a guide for future researchers looking to utilise DL techniques and develop a cheap, automated, easy-to-use, and accurate diagnostic method.
arXiv Detail & Related papers (2023-05-18T13:42:15Z)
- Voxel-wise classification for porosity investigation of additive manufactured parts with 3D unsupervised and (deeply) supervised neural networks [5.467497693327066]
This study revisits recent supervised (UNet, UNet++, UNet 3+, MSS-UNet) and unsupervised (VAE, ceVAE, gmVAE, vqVAE) DL models for volumetric analysis of AM samples from X-CT images.
It extends them to accept 3D input data with a 3D-patch pipeline for lower computational requirements, improved efficiency and generalisability.
The VAE/ceVAE models demonstrated superior capabilities, particularly when leveraging post-processing techniques.
arXiv Detail & Related papers (2023-05-13T11:23:00Z)
- Part-Based Models Improve Adversarial Robustness [57.699029966800644]
We show that combining human prior knowledge with end-to-end learning can improve the robustness of deep neural networks.
Our model combines a part segmentation model with a tiny classifier and is trained end-to-end to simultaneously segment objects into parts and classify them.
Our experiments indicate that these models also reduce texture bias and yield better robustness against common corruptions and spurious correlations.
arXiv Detail & Related papers (2022-09-15T15:41:47Z) - LGD: Label-guided Self-distillation for Object Detection [59.9972914042281]
We propose the first self-distillation framework for general object detection, termed LGD (Label-Guided self-Distillation).
Our framework involves sparse label-appearance encoding, inter-object relation adaptation and intra-object knowledge mapping to obtain the instructive knowledge.
Compared with the classical teacher-based method FGFI, LGD not only performs better without requiring a pretrained teacher but also incurs 51% lower training cost beyond inherent student learning.
arXiv Detail & Related papers (2021-09-23T16:55:01Z) - When Vision Transformers Outperform ResNets without Pretraining or
Strong Data Augmentations [111.44860506703307]
Vision Transformers (ViTs) and MLPs signal further efforts to replace hand-wired features or inductive biases with general-purpose neural architectures.
This paper investigates ViTs and MLP-Mixers from the lens of loss geometry, intending to improve the models' data efficiency at training and generalization at inference.
We show that the improved robustness is attributable to sparser active neurons in the first few layers.
The resultant ViTs outperform ResNets of similar size and throughput when trained from scratch on ImageNet without large-scale pretraining or strong data augmentations.
arXiv Detail & Related papers (2021-06-03T02:08:03Z) - Calibrating Deep Neural Networks using Focal Loss [77.92765139898906]
Miscalibration is a mismatch between a model's confidence and its correctness.
We show that focal loss allows us to learn models that are already very well calibrated.
We show that our approach achieves state-of-the-art calibration without compromising on accuracy in almost all cases.
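The focal loss behind this result can be sketched in a few lines: it scales the standard cross-entropy term by a factor of (1 - p_t)^gamma, down-weighting examples the model is already confident about. The implementation and gamma value below are a minimal illustration, not the paper's exact setup:

```python
import math

def focal_loss(p, y, gamma=2.0):
    # p: predicted probability of the positive class; y: true label in {0, 1}.
    # gamma=0 recovers standard cross-entropy; larger gamma down-weights
    # confident, well-classified examples, which the paper links to
    # better-calibrated models.
    p_t = p if y == 1 else 1.0 - p
    return -((1.0 - p_t) ** gamma) * math.log(p_t)

# gamma=0 reduces to plain cross-entropy:
print(round(focal_loss(0.9, 1, gamma=0.0), 4))  # 0.1054
# gamma=2 shrinks the loss on this confident, correct prediction:
print(round(focal_loss(0.9, 1, gamma=2.0), 4))  # 0.0011
```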
arXiv Detail & Related papers (2020-02-21T17:35:50Z)
- Compounding the Performance Improvements of Assembled Techniques in a Convolutional Neural Network [6.938261599173859]
We show how to improve the accuracy and robustness of basic CNN models.
Our proposed assembled ResNet-50 shows improvements in top-1 accuracy from 76.3% to 82.78%, mCE from 76.0% to 48.9% and mFR from 57.7% to 32.3%.
Our approach achieved 1st place in the iFood Competition Fine-Grained Visual Recognition at CVPR 2019.
arXiv Detail & Related papers (2020-01-17T12:42:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.