Turning Hearsay into Discovery: Industrial 3D Printer Side Channel Information Translated to Stealing the Object Design
- URL: http://arxiv.org/abs/2509.18366v1
- Date: Mon, 22 Sep 2025 19:46:21 GMT
- Authors: Aleksandr Dolgavin, Jacob Gatlin, Moti Yung, Mark Yampolskiy
- Abstract summary: We show for the first time that side-channel attacks are a serious threat to industrial-grade 3D printers. We reconstruct the 3D printed model solely from the collected power side-channel data.
- Score: 46.740145853674875
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The central security issue of outsourced 3D printing (aka AM: Additive Manufacturing), an industry that is expected to dominate manufacturing, is the protection of the digital design (containing the designers' model, which is their intellectual property) shared with the manufacturer. Here, we show, for the first time, that side-channel attacks are in fact a concrete, serious threat to existing industrial-grade 3D printers, enabling the reconstruction of the printed model (regardless of measures that directly conceal the design, e.g., encrypting it in transit and before loading it into the printer). Previously, such attacks were demonstrated only on fairly simple FDM desktop 3D printers, which play a negligible role in the manufacturing of valuable designs. We focus on the Powder Bed Fusion (PBF) AM process, which is popular for manufacturing net-shaped parts with both polymers and metals. We demonstrate how its individual actuators can be instrumented for the collection of power side-channel information during the printing process. We then present our approach to reconstructing the 3D printed model solely from the collected power side-channel data. Further, inspired by Differential Power Analysis, we developed a method to improve the quality of the reconstruction based on multiple traces. We tested our approach on two design models with different degrees of complexity. Across models, we achieved as high as 90.29% True Positives, with as low as 7.02% False Positives and 9.71% False Negatives, by voxel-based volumetric comparison between the reconstructed and original designs. The lesson learned from our attack is that, in an industrial environment, the security of design files cannot rely solely on protecting the files themselves, but must also ensure that no power, noise, or similar signals leak to potential eavesdroppers in the printer's vicinity.
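The abstract's evaluation metric and its DPA-inspired noise reduction can be illustrated with a short sketch. The authors do not publish code here; the following is a minimal, assumed interpretation using NumPy boolean occupancy grids, with the hypothetical convention that TP/FP/FN rates are normalized by the original design's occupied volume:

```python
import numpy as np

def average_traces(traces):
    """DPA-inspired noise reduction: averaging aligned power traces
    suppresses noise that is uncorrelated across repeated prints."""
    return np.mean(np.stack(traces), axis=0)

def voxel_comparison(original, reconstructed):
    """Voxel-based volumetric comparison of two boolean occupancy grids.

    True marks an occupied voxel; both grids must share the same shape
    and alignment. Rates are normalized by the original design's occupied
    volume, one plausible convention (the abstract does not specify one).
    """
    occupied = original.sum()
    tp = (original & reconstructed).sum() / occupied   # material correctly recovered
    fp = (~original & reconstructed).sum() / occupied  # spurious material
    fn = (original & ~reconstructed).sum() / occupied  # missing material
    return tp, fp, fn

# Toy example: an 8-voxel cube with one voxel missed and one hallucinated.
original = np.zeros((4, 4, 4), dtype=bool)
original[1:3, 1:3, 1:3] = True           # 8 occupied voxels
reconstructed = original.copy()
reconstructed[1, 1, 1] = False           # one voxel missed
reconstructed[0, 0, 0] = True            # one voxel hallucinated
tp, fp, fn = voxel_comparison(original, reconstructed)
print(tp, fp, fn)  # 0.875 0.125 0.125
```

In practice the reconstructed grid would first be voxelized from the layer-by-layer trajectories recovered from the power traces and registered against the original design, steps that are elided here.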
Related papers
- QuietPrint: Protecting 3D Printers Against Acoustic Side-Channel Attacks [0.5729426778193398]
Cyber-attacks targeting the 3D printing process are becoming increasingly common. One major concern is intellectual property (IP) theft, where a malicious attacker gains access to the design file. In this work, we investigate the possibility of IP theft via acoustic side channels and propose a novel method to protect 3D printers.
arXiv Detail & Related papers (2026-02-02T15:04:00Z)
- Lossless Copyright Protection via Intrinsic Model Fingerprinting [21.898748690761874]
Existing protection methods modify the model to embed watermarks, which impairs performance. We propose TrajPrint, a completely lossless and training-free framework that verifies model copyright by extracting unique manifold fingerprints.
arXiv Detail & Related papers (2026-01-29T04:18:07Z)
- Adapter Shield: A Unified Framework with Built-in Authentication for Preventing Unauthorized Zero-Shot Image-to-Image Generation [74.5813283875938]
Zero-shot image-to-image generation poses substantial risks related to intellectual property violations. This work presents Adapter Shield, the first universal and authentication-integrated solution aimed at defending personal images from misuse. Our method surpasses existing state-of-the-art defenses in blocking unauthorized zero-shot image synthesis.
arXiv Detail & Related papers (2025-11-25T04:49:16Z)
- AuthPrint: Fingerprinting Generative Models Against Malicious Model Providers [8.37060553485295]
The method relies on a trusted verifier that extracts secret fingerprints from the model's output space, unknown to the provider, and trains a model to predict and verify them. Our empirical evaluation shows that our methods achieve near-zero FPR@95%TPR for instances of GAN and diffusion models.
arXiv Detail & Related papers (2025-08-06T12:17:38Z)
- One Video to Steal Them All: 3D-Printing IP Theft through Optical Side-Channels [6.082508741253127]
We show that an adversary with access to video recordings of the 3D printing process can reverse engineer the underlying 3D print instructions. Our model tracks the printer nozzle movements during the printing process and maps the corresponding trajectory into G-code instructions. It identifies the correct parameters such as feed rate and extrusion rate, enabling successful intellectual property theft.
arXiv Detail & Related papers (2025-06-27T04:34:07Z)
- Practitioner Paper: Decoding Intellectual Property: Acoustic and Magnetic Side-channel Attack on a 3D Printer [3.0832643041058607]
This work demonstrates the feasibility of reconstructing G-codes by performing side-channel attacks on a 3D printer.
By training models using Gradient Boosted Decision Trees, our prediction results for each axial movement, stepper, nozzle, and rotor speed achieve high accuracy.
We effectively deploy the model in a real-world examination, achieving a Mean Tendency Error (MTE) of 4.47% on a plain G-code design.
arXiv Detail & Related papers (2024-11-16T21:05:25Z)
- LLM-3D Print: Large Language Models To Monitor and Control 3D Printing [6.349503549199403]
Industry 4.0 has revolutionized manufacturing by driving digitalization and shifting the paradigm toward additive manufacturing (AM). FDM, a key AM technology, enables the creation of highly customized, cost-effective products with minimal material waste through layer-by-layer extrusion. We present a process monitoring and control framework that leverages pre-trained Large Language Models (LLMs) alongside 3D printers to detect and address printing defects.
arXiv Detail & Related papers (2024-08-26T14:38:19Z)
- Stop Stealing My Data: Sanitizing Stego Channels in 3D Printing Design Files [56.96539046813698]
Steganographic channels can allow additional data to be embedded within STL files without changing the printed model.
This paper addresses this security threat by designing and evaluating a sanitizer that erases hidden content where steganographic channels might exist.
arXiv Detail & Related papers (2024-04-07T23:28:35Z)
- Secure Information Embedding in Forensic 3D Fingerprinting [15.196378932114518]
We introduce SIDE, a novel fingerprinting framework tailored for 3D printing. SIDE addresses the adversarial challenges of 3D printing by offering both secure information embedding and extraction.
arXiv Detail & Related papers (2024-03-07T22:03:46Z)
- Performance-lossless Black-box Model Watermarking [69.22653003059031]
We propose a branch backdoor-based model watermarking protocol to protect model intellectual property.
In addition, we analyze the potential threats to the protocol and provide a secure and feasible watermarking instance for language models.
arXiv Detail & Related papers (2023-12-11T16:14:04Z)
- Hourglass Tokenizer for Efficient Transformer-Based 3D Human Pose Estimation [73.31524865643709]
We present a plug-and-play pruning-and-recovering framework, called Hourglass Tokenizer (HoT), for efficient transformer-based 3D pose estimation from videos.
Our HoT begins with pruning pose tokens of redundant frames and ends with recovering full-length tokens, resulting in a few pose tokens in the intermediate transformer blocks.
Our method can achieve both high efficiency and estimation accuracy compared to the original VPT models.
arXiv Detail & Related papers (2023-11-20T18:59:51Z)
- BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Models [54.19289900203071]
The rise in popularity of text-to-image generative artificial intelligence has attracted widespread public interest.
We demonstrate that this technology can be attacked to generate content that subtly manipulates its users.
We propose a Backdoor Attack on text-to-image Generative Models (BAGM)
Our attack is the first to target three popular text-to-image generative models across three stages of the generative process.
arXiv Detail & Related papers (2023-07-31T08:34:24Z)
- Model Watermarking for Image Processing Networks [120.918532981871]
How to protect the intellectual property of deep models is a very important but seriously under-researched problem.
We propose the first model watermarking framework for protecting image processing models.
arXiv Detail & Related papers (2020-02-25T18:36:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.