LLM-3D Print: Large Language Models To Monitor and Control 3D Printing
- URL: http://arxiv.org/abs/2408.14307v1
- Date: Mon, 26 Aug 2024 14:38:19 GMT
- Title: LLM-3D Print: Large Language Models To Monitor and Control 3D Printing
- Authors: Yayati Jadhav, Peter Pak, Amir Barati Farimani
- Abstract summary: Industry 4.0 has revolutionized manufacturing by driving digitalization and shifting the paradigm toward additive manufacturing (AM).
FDM, a key AM technology, enables the creation of highly customized, cost-effective products with minimal material waste through layer-by-layer extrusion.
We present a process monitoring and control framework that leverages pre-trained Large Language Models (LLMs) alongside 3D printers to detect and address printing defects.
- Score: 6.349503549199403
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Industry 4.0 has revolutionized manufacturing by driving digitalization and shifting the paradigm toward additive manufacturing (AM). Fused Deposition Modeling (FDM), a key AM technology, enables the creation of highly customized, cost-effective products with minimal material waste through layer-by-layer extrusion, posing a significant challenge to traditional subtractive methods. However, the susceptibility of material extrusion techniques to errors often requires expert intervention to detect and mitigate defects that can severely compromise product quality. While automated error detection and machine learning models exist, their generalizability across diverse 3D printer setups, firmware, and sensors is limited, and deep learning methods require extensive labeled datasets, hindering scalability and adaptability. To address these challenges, we present a process monitoring and control framework that leverages pre-trained Large Language Models (LLMs) alongside 3D printers to detect and address printing defects. The LLM evaluates print quality by analyzing images captured after each layer or print segment, identifying failure modes and querying the printer for relevant parameters. It then generates and executes a corrective action plan. We validated the effectiveness of the proposed framework in identifying defects by comparing it against a control group of engineers with diverse AM expertise. Our evaluation demonstrated that LLM-based agents not only accurately identify common 3D printing errors, such as inconsistent extrusion, stringing, warping, and layer adhesion, but also effectively determine the parameters causing these failures and autonomously correct them without any need for human intervention.
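The abstract describes a per-layer loop: capture an image, have the LLM identify a failure mode, query the printer for relevant parameters, then execute a corrective action plan. The paper does not publish its implementation here, so the following is only a minimal sketch of that loop; the parameter names, the `Diagnosis` structure, and the callback interfaces are all hypothetical stand-ins (the LLM call and printer I/O are passed in as functions).

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional

@dataclass
class Diagnosis:
    failure_mode: str               # e.g. "under_extrusion", "stringing", or "ok"
    parameter: Optional[str] = None # printer parameter the LLM implicates
    new_value: Optional[float] = None  # corrective value the LLM proposes

def monitor_layer(image: bytes,
                  query_printer: Callable[[str], float],
                  ask_llm: Callable[[bytes, Dict[str, float]], Diagnosis],
                  set_parameter: Callable[[str, float], None]) -> Diagnosis:
    """One iteration of the detect-diagnose-correct loop from the abstract."""
    # Query the printer for parameters the LLM may need to reason about the image.
    # These three names are illustrative, not from the paper.
    params = {name: query_printer(name)
              for name in ("nozzle_temp", "flow_rate", "print_speed")}
    diagnosis = ask_llm(image, params)
    if diagnosis.failure_mode != "ok" and diagnosis.parameter is not None:
        # Execute the corrective action before the next layer is printed.
        set_parameter(diagnosis.parameter, diagnosis.new_value)
    return diagnosis
```

In this shape the LLM is just another callable, so the loop can be exercised with a stub before wiring in a real vision-capable model and printer firmware.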
Related papers
- Investigation on domain adaptation of additive manufacturing monitoring systems to enhance digital twin reusability [12.425166883814153]
Digital twin (DT) using machine learning (ML)-based modeling can be deployed for AM process monitoring and control.
Melt pool is one of the most commonly observed physical phenomena for process monitoring.
This paper proposes a knowledge transfer pipeline between different AM settings to enhance the reusability of AM DTs.
arXiv Detail & Related papers (2024-09-19T13:54:01Z) - AutoDetect: Towards a Unified Framework for Automated Weakness Detection in Large Language Models [95.09157454599605]
Large Language Models (LLMs) are becoming increasingly powerful, but they still exhibit significant but subtle weaknesses.
Traditional benchmarking approaches cannot thoroughly pinpoint specific model deficiencies.
We introduce a unified framework, AutoDetect, to automatically expose weaknesses in LLMs across various tasks.
arXiv Detail & Related papers (2024-06-24T15:16:45Z) - 3D Face Modeling via Weakly-supervised Disentanglement Network joint Identity-consistency Prior [62.80458034704989]
Generative 3D face models featuring disentangled controlling factors hold immense potential for diverse applications in computer vision and computer graphics.
Previous 3D face modeling methods face a challenge as they demand specific labels to effectively disentangle these factors.
This paper introduces a Weakly-Supervised Disentanglement Framework, denoted as WSDF, to facilitate the training of controllable 3D face models without an overly stringent labeling requirement.
arXiv Detail & Related papers (2024-04-25T11:50:47Z) - Self-supervised Feature Adaptation for 3D Industrial Anomaly Detection [59.41026558455904]
We focus on multi-modal anomaly detection. Specifically, we investigate early multi-modal approaches that attempted to utilize models pre-trained on large-scale visual datasets.
We propose a Local-to-global Self-supervised Feature Adaptation (LSFA) method to finetune the adaptors and learn task-oriented representation toward anomaly detection.
arXiv Detail & Related papers (2024-01-06T07:30:41Z) - An unsupervised approach towards promptable defect segmentation in laser-based additive manufacturing by Segment Anything [7.188573079798082]
We construct a framework for image segmentation using a state-of-the-art Vision Transformer (ViT) based Foundation model.
We obtain high accuracy without using any labeled data to guide the prompt tuning process.
We envision constructing a real-time anomaly detection pipeline that could revolutionize current laser additive manufacturing processes.
arXiv Detail & Related papers (2023-12-07T06:03:07Z) - IT3D: Improved Text-to-3D Generation with Explicit View Synthesis [71.68595192524843]
This study presents a novel strategy that leverages explicitly synthesized multi-view images to address these issues.
Our approach uses image-to-image pipelines, powered by Latent Diffusion Models (LDMs), to generate posed high-quality images.
For the incorporated discriminator, the synthesized multi-view images are considered real data, while the renderings of the optimized 3D models function as fake data.
arXiv Detail & Related papers (2023-08-22T14:39:17Z) - Distributional Instance Segmentation: Modeling Uncertainty and High Confidence Predictions with Latent-MaskRCNN [77.0623472106488]
In this paper, we explore a class of distributional instance segmentation models using latent codes.
For robotic picking applications, we propose a confidence mask method to achieve the high precision necessary.
We show that our method can significantly reduce critical errors in robotic systems, including our newly released dataset of ambiguous scenes.
arXiv Detail & Related papers (2023-05-03T05:57:29Z) - Semi-Siamese Network for Robust Change Detection Across Different Domains with Applications to 3D Printing [17.176767333354636]
We present a novel Semi-Siamese deep learning model for defect detection in 3D printing processes.
Our model is designed to enable comparison of heterogeneous images from different domains while being robust against perturbations in the imaging setup.
Using our model, defect localization predictions can be made in less than half a second per layer using a standard MacBook Pro while achieving an F1-score of more than 0.9.
arXiv Detail & Related papers (2022-12-16T17:02:55Z) - An adaptive human-in-the-loop approach to emission detection of Additive Manufacturing processes and active learning with computer vision [76.72662577101988]
In-situ monitoring and process control in Additive Manufacturing (AM) allows the collection of large amounts of emission data.
This data can be used as input to 3D and 2D representations of the 3D-printed parts.
The aim of this paper is to propose an adaptive human-in-the-loop approach using Machine Learning techniques.
arXiv Detail & Related papers (2022-12-12T15:11:18Z) - See Eye to Eye: A Lidar-Agnostic 3D Detection Framework for Unsupervised Multi-Target Domain Adaptation [7.489722641968593]
We propose an unsupervised multi-target domain adaptation framework, SEE, for transferring the performance of state-of-the-art 3D detectors across lidars.
Our approach interpolates the underlying geometry and normalizes the scan pattern of objects from different lidars before passing them to the detection network.
We demonstrate the effectiveness of SEE on public datasets, achieving state-of-the-art results, and additionally provide quantitative results on a novel high-resolution lidar to prove the industry applications of our framework.
arXiv Detail & Related papers (2021-11-17T23:46:47Z) - Towards Smart Monitored AM: Open Source in-Situ Layer-wise 3D Printing Image Anomaly Detection Using Histograms of Oriented Gradients and a Physics-Based Rendering Engine [0.0]
This study presents an open source method for detecting 3D printing anomalies by comparing images of printed layers from a stationary monocular camera with G-code-based reference images of an ideal process generated with Blender, a physics rendering engine.
Recognition of visual deviations was accomplished by analyzing the similarity of histograms of oriented gradients (HOG) of local image areas.
The implementation of this novel method does not require preliminary data for training, and it is most efficient in the mass production of parts of the same geometric shape, by either additive or subtractive manufacturing.
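The method above scores a printed layer by comparing histograms of oriented gradients (HOG) of local image areas against a Blender-rendered reference. The paper's own pipeline is not reproduced here; the following is a simplified, self-contained sketch of the core comparison (a bare-bones HOG over one patch plus a cosine-similarity check), without the cell/block structure and normalization scheme of a full HOG descriptor.

```python
import numpy as np

def hog_descriptor(patch: np.ndarray, bins: int = 9) -> np.ndarray:
    """Simplified HOG: one magnitude-weighted orientation histogram per patch."""
    gy, gx = np.gradient(patch.astype(float))      # image gradients
    magnitude = np.hypot(gx, gy)
    # Unsigned orientations in [0, pi), as is conventional for HOG.
    orientation = np.mod(np.arctan2(gy, gx), np.pi)
    hist, _ = np.histogram(orientation, bins=bins, range=(0.0, np.pi),
                           weights=magnitude)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def patch_similarity(captured: np.ndarray, reference: np.ndarray) -> float:
    """Cosine similarity of HOG descriptors; a low score flags a deviation
    between the camera image and the rendered reference for that area."""
    return float(np.dot(hog_descriptor(captured), hog_descriptor(reference)))
```

Applying `patch_similarity` over a grid of corresponding local areas and thresholding the scores yields a per-region anomaly map, in the spirit of the paper's approach.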
arXiv Detail & Related papers (2021-11-04T09:27:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.