Evaluating the printability of stl files with ML
- URL: http://arxiv.org/abs/2509.12392v1
- Date: Mon, 15 Sep 2025 19:37:00 GMT
- Title: Evaluating the printability of stl files with ML
- Authors: Janik Henn, Adrian Hauptmannl, Hamza A. A. Gardi
- Abstract summary: Our approach introduces a novel layer of support by training an AI model to detect common issues in 3D models. The goal is to assist less experienced users by identifying features that are likely to cause print failures.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D printing has long been a technology for industry professionals and enthusiasts willing to tinker or even build their own machines. This stands in stark contrast to today's market, where recent developments have prioritized ease of use to attract a broader audience. Slicing software nowadays has a few ways to sanity-check the input file as well as the output G-code. Our approach introduces a novel layer of support by training an AI model to detect common issues in 3D models. The goal is to assist less experienced users by identifying features that are likely to cause print failures due to difficult-to-print geometries before printing even begins.
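The kind of geometric issue the paper targets can be illustrated with the classic rule-based check that an ML model would aim to generalize: flagging mesh facets that overhang beyond roughly 45 degrees. The sketch below is purely illustrative and is not the authors' method; it assumes hand-written triangle data and the common 45-degree heuristic, whereas a real pipeline would parse facets from an STL file.

```python
import math

def face_normal(v0, v1, v2):
    """Unit normal of a triangle given its three 3D vertices."""
    ux, uy, uz = (v1[i] - v0[i] for i in range(3))
    wx, wy, wz = (v2[i] - v0[i] for i in range(3))
    # Cross product of the two edge vectors.
    nx = uy * wz - uz * wy
    ny = uz * wx - ux * wz
    nz = ux * wy - uy * wx
    length = math.sqrt(nx * nx + ny * ny + nz * nz) or 1.0
    return (nx / length, ny / length, nz / length)

def needs_support(triangle, max_overhang_deg=45.0):
    """Flag a facet that faces downward more steeply than the threshold."""
    _, _, nz = face_normal(*triangle)
    # Angle between the facet normal and straight down (0, 0, -1):
    # 0 degrees means the facet points directly at the build plate.
    angle_from_down = math.degrees(math.acos(max(-1.0, min(1.0, -nz))))
    return angle_from_down < (90.0 - max_overhang_deg)

# Synthetic facets (a real check would iterate over every STL facet):
ceiling = ((0, 0, 1), (0, 1, 1), (1, 0, 1))  # normal points straight down
wall = ((0, 0, 0), (1, 0, 0), (1, 0, 1))     # vertical face, no overhang

flagged = [needs_support(t) for t in (ceiling, wall)]  # [True, False]
```

A learned model can go beyond such fixed thresholds by accounting for context, e.g. short bridges that print fine despite steep overhangs.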
Related papers
- PatchAlign3D: Local Feature Alignment for Dense 3D Shape understanding [67.15800065888887]
Current foundation models for 3D shapes excel at global tasks (retrieval, classification) but transfer poorly to local part-level reasoning.
We introduce an encoder-only 3D model that produces language-aligned patch-level features directly from point clouds.
Our 3D encoder achieves zero-shot 3D part segmentation with fast single-pass inference without any test-time multi-view rendering.
arXiv Detail & Related papers (2026-01-05T18:55:45Z)
- SpaceControl: Introducing Test-Time Spatial Control to 3D Generative Modeling [62.89824987879374]
We introduce SpaceControl, a training-free test-time method for explicit spatial control of 3D generation.
SpaceControl integrates seamlessly with modern pre-trained generative models without requiring any additional training.
We present an interactive user interface that enables online editing of superquadrics for direct conversion into textured 3D assets.
arXiv Detail & Related papers (2025-12-05T00:54:48Z)
- Turning Hearsay into Discovery: Industrial 3D Printer Side Channel Information Translated to Stealing the Object Design [46.740145853674875]
We show for the first time that side-channel attacks are a serious threat to industrial-grade 3D printers.
We reconstruct the 3D printed model solely from the collected power side-channel data.
arXiv Detail & Related papers (2025-09-22T19:46:21Z)
- 3DGen-Bench: Comprehensive Benchmark Suite for 3D Generative Models [94.48803082248872]
3D generation is experiencing rapid advancements, while the development of 3D evaluation has not kept pace.
We develop 3DGen-Arena, an integrated platform to gather human preferences from both public users and expert annotators.
Using this dataset, we further train a CLIP-based scoring model, 3DGen-Score, and an MLLM-based automatic evaluator, 3DGen-Eval.
arXiv Detail & Related papers (2025-03-27T17:53:00Z) - ZeroKey: Point-Level Reasoning and Zero-Shot 3D Keypoint Detection from Large Language Models [57.57832348655715]
We propose a novel zero-shot approach for keypoint detection on 3D shapes.
Our method utilizes the rich knowledge embedded within Multi-Modal Large Language Models.
arXiv Detail & Related papers (2024-12-09T08:31:57Z) - Practitioner Paper: Decoding Intellectual Property: Acoustic and Magnetic Side-channel Attack on a 3D Printer [3.0832643041058607]
This work demonstrates the feasibility of reconstructing G-codes by performing side-channel attacks on a 3D printer.
By training models using Gradient Boosted Decision Trees, our prediction results for each axial movement, stepper, nozzle, and rotor speed achieve high accuracy.
We effectively deploy the model in a real-world examination, achieving a Mean Tendency Error (MTE) of 4.47% on a plain G-code design.
arXiv Detail & Related papers (2024-11-16T21:05:25Z) - LLM-3D Print: Large Language Models To Monitor and Control 3D Printing [6.349503549199403]
Industry 4.0 has revolutionized manufacturing by driving digitalization and shifting the paradigm toward additive manufacturing (AM).
FDM, a key AM technology, enables the creation of highly customized, cost-effective products with minimal material waste through layer-by-layer extrusion.
We present a process monitoring and control framework that leverages pre-trained Large Language Models (LLMs) alongside 3D printers to detect and address printing defects.
arXiv Detail & Related papers (2024-08-26T14:38:19Z) - Contrastive Attention Networks for Attribution of Early Modern Print [23.344655278038392]
We develop machine learning techniques to identify unknown printers in early modern (c.1500--1800) English printed books.
Specifically, we focus on matching uniquely damaged character type-imprints in anonymously printed books to works with known printers.
arXiv Detail & Related papers (2023-06-12T19:57:11Z) - Using Large Language Models to Generate Engaging Captions for Data
Visualizations [51.98253121636079]
Large language models (LLMs) use sophisticated deep learning technology to produce human-like prose.
A key challenge lies in designing the most effective prompt for the LLM, a task called prompt engineering.
We report on first experiments using the popular LLM GPT-3 and deliver some promising results.
arXiv Detail & Related papers (2022-12-27T23:56:57Z) - Semi-Siamese Network for Robust Change Detection Across Different
Domains with Applications to 3D Printing [17.176767333354636]
We present a novel Semi-Siamese deep learning model for defect detection in 3D printing processes.
Our model is designed to enable comparison of heterogeneous images from different domains while being robust against perturbations in the imaging setup.
Using our model, defect localization predictions can be made in less than half a second per layer using a standard MacBook Pro while achieving an F1-score of more than 0.9.
arXiv Detail & Related papers (2022-12-16T17:02:55Z) - Augraphy: A Data Augmentation Library for Document Images [59.457999432618614]
Augraphy is a Python library for constructing data augmentation pipelines.
It provides strategies to produce augmented versions of clean document images that appear to have been altered by standard office operations.
arXiv Detail & Related papers (2022-08-30T22:36:19Z) - Self-Supervised Point Cloud Representation Learning with Occlusion
Auto-Encoder [63.77257588569852]
We present 3D Occlusion Auto-Encoder (3D-OAE) for learning representations for point clouds.
Our key idea is to randomly occlude some local patches of the input point cloud and establish the supervision via recovering the occluded patches.
In contrast with previous methods, our 3D-OAE can remove a large proportion of patches and predict them only with a small number of visible patches.
arXiv Detail & Related papers (2022-03-26T14:06:29Z) - 3D-EDM: Early Detection Model for 3D-Printer Faults [0.0]
It is difficult to use a 3D printer with accurate calibration.
Previous studies have suggested that these problems can be detected using sensor data and image data with machine learning methods.
Considering actual use in the future, we focus on generating the lightweight early detection model with easily collectable data.
arXiv Detail & Related papers (2022-03-23T02:46:26Z) - Word Shape Matters: Robust Machine Translation with Visual Embedding [78.96234298075389]
We introduce a new encoding of the input symbols for character-level NLP models.
It encodes the shape of each character through the images depicting the letters when printed.
We name this new strategy visual embedding; it is expected to improve the robustness of NLP models.
arXiv Detail & Related papers (2020-10-20T04:08:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.