One Video to Steal Them All: 3D-Printing IP Theft through Optical Side-Channels
- URL: http://arxiv.org/abs/2506.21897v1
- Date: Fri, 27 Jun 2025 04:34:07 GMT
- Title: One Video to Steal Them All: 3D-Printing IP Theft through Optical Side-Channels
- Authors: Twisha Chattopadhyay, Fabricio Ceschin, Marco E. Garza, Dymytriy Zyunkin, Animesh Chhotaray, Aaron P. Stebner, Saman Zonouz, Raheem Beyah
- Abstract summary: We show that an adversary with access to video recordings of the 3D printing process can reverse engineer the underlying 3D print instructions. Our model tracks the printer nozzle movements during the printing process and maps the corresponding trajectory into G-code instructions. It identifies the correct parameters, such as feed rate and extrusion rate, enabling successful intellectual property theft.
- Score: 6.082508741253127
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The 3D printing industry is rapidly growing and increasingly adopted across various sectors including manufacturing, healthcare, and defense. However, the operational setup often involves hazardous environments, necessitating remote monitoring through cameras and other sensors, which opens the door to cyber-based attacks. In this paper, we show that an adversary with access to video recordings of the 3D printing process can reverse engineer the underlying 3D print instructions. Our model tracks the printer nozzle movements during the printing process and maps the corresponding trajectory into G-code instructions. Further, it identifies the correct parameters, such as feed rate and extrusion rate, enabling successful intellectual property theft. To validate this, we design an equivalence checker that quantitatively compares two sets of 3D print instructions, evaluating how similar the objects they produce are in shape, external appearance, and internal structure. Unlike simple distance-based metrics such as normalized mean square error, our equivalence checker is both rotationally and translationally invariant, accounting for shifts in the base position of the reverse-engineered instructions caused by different camera positions. Our model achieves an average accuracy of 90.87 percent and generates 30.20 percent fewer instructions than existing methods, which often produce faulty or inaccurate prints. Finally, we demonstrate a fully functional counterfeit object generated by reverse engineering 3D print instructions from video.
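The paper's equivalence checker is not public, but its central property, invariance to rotation and translation of the reconstructed toolpath, can be sketched with a standard Kabsch alignment of the two instruction sets' XYZ targets. Everything below is an illustrative assumption: the minimal G-code parser and the equal-length toolpaths with corresponding points; the actual checker additionally compares external appearance and internal structure.

```python
# Illustrative sketch (not the paper's implementation): compare two G-code
# toolpaths up to rotation and translation via Kabsch alignment.
import numpy as np

def gcode_to_points(lines):
    """Collect XYZ targets of G0/G1 moves (minimal parser, an assumption)."""
    x = y = z = 0.0
    pts = []
    for line in lines:
        parts = line.split()
        if not parts or parts[0] not in ("G0", "G1"):
            continue
        for tok in parts[1:]:
            if tok.startswith("X"):
                x = float(tok[1:])
            elif tok.startswith("Y"):
                y = float(tok[1:])
            elif tok.startswith("Z"):
                z = float(tok[1:])
        pts.append((x, y, z))
    return np.asarray(pts)

def aligned_rmse(P, Q):
    """RMSE between (N, 3) point sets after optimal rigid alignment.
    Assumes rows correspond; the paper's checker need not."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)      # remove translation
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)        # SVD of the covariance
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T    # optimal rotation (Kabsch)
    return np.sqrt(np.mean(np.sum((Pc @ R.T - Qc) ** 2, axis=1)))
```

For equivalent prints, `aligned_rmse(gcode_to_points(a), gcode_to_points(b))` stays near zero no matter where the camera placed the reconstructed origin, which is exactly what a raw normalized mean square error cannot guarantee.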
Related papers
- RoboTAG: End-to-end Robot Configuration Estimation via Topological Alignment Graph [62.270763554624615]
Estimating robot pose from a monocular RGB image is a challenge in robotics and computer vision. Existing methods typically build networks on top of 2D visual backbones and depend heavily on labeled data for training. We propose the Robot Topological Alignment Graph (RoboTAG), which incorporates a 3D branch to inject 3D priors while enabling co-evolution of the 2D and 3D representations.
arXiv Detail & Related papers (2025-11-11T00:49:15Z) - Turning Hearsay into Discovery: Industrial 3D Printer Side Channel Information Translated to Stealing the Object Design [46.740145853674875]
We show for the first time that side-channel attacks are a serious threat to industrial-grade 3D printers. We reconstruct the 3D printed model solely from the collected power side-channel data.
arXiv Detail & Related papers (2025-09-22T19:46:21Z) - Generating 3D-Consistent Videos from Unposed Internet Photos [68.944029293283]
We train a scalable, 3D-aware video model without any 3D annotations such as camera parameters.
Our results suggest that we can scale up scene-level 3D learning using only 2D data such as videos and multiview internet photos.
arXiv Detail & Related papers (2024-11-20T18:58:31Z) - Practitioner Paper: Decoding Intellectual Property: Acoustic and Magnetic Side-channel Attack on a 3D Printer [3.0832643041058607]
This work demonstrates the feasibility of reconstructing G-codes by performing side-channel attacks on a 3D printer.
By training Gradient Boosted Decision Tree models, we achieve high prediction accuracy for each axial movement and for the stepper, nozzle, and rotor speeds.
We deploy the model in a real-world evaluation, achieving a Mean Tendency Error (MTE) of 4.47% on a plain G-code design.
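The entry names the learning method (Gradient Boosted Decision Trees) but not its features, so the sketch below only shows the shape of such a pipeline in scikit-learn. The framing, spectral features, class labels, and synthetic data are all assumptions; trained on random noise like this, the classifier can of course only reach chance accuracy.

```python
# Hedged sketch: gradient-boosted trees classifying which axis moves from
# framed side-channel samples. Features, labels, and data are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

def frame_features(signal, frame_len=1024, keep_bins=64):
    """Per-frame magnitude spectra as features (assumed preprocessing)."""
    n = len(signal) // frame_len
    frames = signal[: n * frame_len].reshape(n, frame_len)
    return np.abs(np.fft.rfft(frames, axis=1))[:, :keep_bins]

# Synthetic stand-in; real traces would come from a microphone or
# magnetometer placed near the printer.
rng = np.random.default_rng(0)
X = frame_features(rng.standard_normal(1024 * 300))
y = rng.integers(0, 3, size=len(X))   # hypothetical labels: X/Y/Z axis move

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```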
arXiv Detail & Related papers (2024-11-16T21:05:25Z) - LLM-3D Print: Large Language Models To Monitor and Control 3D Printing [6.349503549199403]
Industry 4.0 has revolutionized manufacturing by driving digitalization and shifting the paradigm toward additive manufacturing (AM). Fused Deposition Modeling (FDM), a key AM technology, enables the creation of highly customized, cost-effective products with minimal material waste through layer-by-layer extrusion. We present a process monitoring and control framework that leverages pre-trained Large Language Models (LLMs) alongside 3D printers to detect and address printing defects.
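The abstract describes an LLM-in-the-loop monitor rather than a specific API, so this control loop is purely hypothetical: `query_vision_llm`, `printer`, and `camera` are placeholder interfaces, not real libraries.

```python
# Hypothetical monitoring loop in the spirit of the described framework.
# query_vision_llm, printer, and camera are placeholders, not real APIs.
import json

def monitor_print(printer, camera, query_vision_llm, num_layers):
    prompt = ('You see one layer of an FDM print. Reply with JSON only: '
              '{"defect": true|false, "kind": "...", "remedy": "..."}')
    for layer in range(num_layers):
        printer.wait_for_layer(layer)        # block until the layer finishes
        image = camera.capture()             # snapshot of the current layer
        verdict = json.loads(query_vision_llm(prompt, image))
        if verdict["defect"]:
            printer.pause()                  # stop before the flaw compounds
            return layer, verdict            # hand the remedy to an operator
    return num_layers, {"defect": False}
```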
arXiv Detail & Related papers (2024-08-26T14:38:19Z) - Secure Information Embedding in Forensic 3D Fingerprinting [15.196378932114518]
We introduce SIDE, a novel fingerprinting framework tailored for 3D printing. SIDE addresses the adversarial challenges of 3D printing by offering both secure information embedding and extraction.
arXiv Detail & Related papers (2024-03-07T22:03:46Z) - Semi-Siamese Network for Robust Change Detection Across Different Domains with Applications to 3D Printing [17.176767333354636]
We present a novel Semi-Siamese deep learning model for defect detection in 3D printing processes.
Our model is designed to enable comparison of heterogeneous images from different domains while being robust against perturbations in the imaging setup.
Using our model, defect localization predictions can be made in less than half a second per layer using a standard MacBook Pro while achieving an F1-score of more than 0.9.
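The architecture details are not in this summary, so the PyTorch sketch below only illustrates what "semi-Siamese" plausibly means here: a separate encoder per image domain feeding one shared comparison head. Layer shapes are arbitrary assumptions.

```python
# Sketch of a semi-Siamese comparator: per-domain encoders, shared head.
# All layer shapes are assumptions, not the paper's architecture.
import torch
import torch.nn as nn

def encoder():
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),      # -> (batch, 32)
    )

class SemiSiamese(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc_camera = encoder()      # encodes the real camera image
        self.enc_reference = encoder()   # encodes the rendered reference
        self.head = nn.Sequential(       # shared comparison head
            nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, camera_img, reference_img):
        z = torch.cat([self.enc_camera(camera_img),
                       self.enc_reference(reference_img)], dim=1)
        return torch.sigmoid(self.head(z))   # defect probability per pair

p = SemiSiamese()(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64))
```

Unlike a fully Siamese model, the two encoders do not share weights, which is what lets heterogeneous domains (photograph versus render) be embedded into a comparable space.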
arXiv Detail & Related papers (2022-12-16T17:02:55Z) - 3D Vision with Transformers: A Survey [114.86385193388439]
The success of the transformer architecture in natural language processing has triggered attention in the computer vision field.
We present a systematic and thorough review of more than 100 transformer-based methods for different 3D vision tasks.
We discuss transformer designs in 3D vision that allow processing of data with various 3D representations.
arXiv Detail & Related papers (2022-08-08T17:59:11Z) - Consistent 3D Hand Reconstruction in Video via self-supervised Learning [67.55449194046996]
We present a method for reconstructing accurate and consistent 3D hands from a monocular video.
Detected 2D hand keypoints and the image texture provide important cues about the geometry and texture of the 3D hand.
We propose S2HAND, a self-supervised 3D hand reconstruction model.
arXiv Detail & Related papers (2022-01-24T09:44:11Z) - Towards Smart Monitored AM: Open Source in-Situ Layer-wise 3D Printing Image Anomaly Detection Using Histograms of Oriented Gradients and a Physics-Based Rendering Engine [0.0]
This study presents an open source method for detecting 3D printing anomalies by comparing images of printed layers, captured by a stationary monocular camera, with G-code-based reference images of an ideal process rendered in Blender, a physics-based rendering engine.
Recognition of visual deviations was accomplished by analyzing the similarity of histograms of oriented gradients (HOG) of local image areas.
The method requires no preliminary training data, and it is most effective in the mass production of parts of the same geometric shape, whether by additive or subtractive manufacturing.
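The summary does give the core mechanism, comparing HOG descriptors of local areas of the camera image against the rendered reference, so a compact version can be sketched with scikit-image. The patch size and distance threshold are assumptions, and both inputs are assumed to be aligned grayscale arrays of equal shape.

```python
# Illustrative sketch of the described comparison: HOG descriptors of
# matching patches from the camera image and the rendered reference,
# flagged as anomalous when they diverge. Patch size and threshold are
# assumptions, not the paper's values.
import numpy as np
from skimage.feature import hog

def patch_anomalies(camera, reference, patch=64, threshold=0.25):
    """Yield (row, col, distance) for patches whose HOG descriptors differ."""
    h, w = camera.shape
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            a = hog(camera[y:y + patch, x:x + patch])
            b = hog(reference[y:y + patch, x:x + patch])
            # cosine distance between the two HOG histograms
            d = 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
            if d > threshold:
                yield y, x, d
```

Because the reference is rendered from the same G-code that drives the printer, any patch whose gradients disagree marks a deviation from the ideal process, with no training data required.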
arXiv Detail & Related papers (2021-11-04T09:27:10Z) - Unsupervised Learning of Visual 3D Keypoints for Control [104.92063943162896]
Learning sensorimotor control policies from high-dimensional images crucially relies on the quality of the underlying visual representations.
We propose a framework to learn such a 3D geometric structure directly from images in an end-to-end unsupervised manner.
These discovered 3D keypoints tend to meaningfully capture robot joints as well as object movements in a consistent manner across both time and 3D space.
arXiv Detail & Related papers (2021-06-14T17:59:59Z) - Lightweight Multi-View 3D Pose Estimation through Camera-Disentangled Representation [57.11299763566534]
We present a solution to recover 3D pose from multi-view images captured with spatially calibrated cameras.
We exploit 3D geometry to fuse input images into a unified latent representation of pose, which is disentangled from camera view-points.
Our architecture then conditions the learned representation on camera projection operators to produce accurate per-view 2D detections.
arXiv Detail & Related papers (2020-04-05T12:52:29Z) - Monocular Real-time Hand Shape and Motion Capture using Multi-modal Data [77.34069717612493]
We present a novel method for monocular hand shape and pose estimation at an unprecedented runtime performance of 100 fps.
This is enabled by a new learning based architecture designed such that it can make use of all the sources of available hand training data.
It features a 3D hand joint detection module and an inverse kinematics module which regresses not only 3D joint positions but also maps them to joint rotations in a single feed-forward pass.
arXiv Detail & Related papers (2020-03-21T03:51:54Z)