Practitioner Paper: Decoding Intellectual Property: Acoustic and Magnetic Side-channel Attack on a 3D Printer
- URL: http://arxiv.org/abs/2411.10887v1
- Date: Sat, 16 Nov 2024 21:05:25 GMT
- Title: Practitioner Paper: Decoding Intellectual Property: Acoustic and Magnetic Side-channel Attack on a 3D Printer
- Authors: Amirhossein Jamarani, Yazhou Tu, Xiali Hei
- Abstract summary: This work demonstrates the feasibility of reconstructing G-codes by performing side-channel attacks on a 3D printer.
By training models using Gradient Boosted Decision Trees, our prediction results for each axial movement, stepper, nozzle, and rotor speed achieve high accuracy.
We effectively deploy the model in a real-world examination, achieving a Mean Tendency Error (MTE) of 4.47% on a plain G-code design.
- Score: 3.0832643041058607
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The widespread accessibility and ease of use of additive manufacturing (AM), widely recognized as 3D printing, has put Intellectual Property (IP) at great risk of theft. As 3D printers emit acoustic and magnetic signals while printing, the signals can be captured and analyzed using a smartphone for the purpose of IP attack. This is an instance of physical-to-cyber exploitation, as there is no direct contact with the 3D printer. Although cyber vulnerabilities in 3D printers are becoming more apparent, the methods for protecting IPs are yet to be fully investigated. The threat scenarios in previous works have mainly rested on advanced recording devices for data collection and entailed placing the device very close to the 3D printer. However, our work demonstrates the feasibility of reconstructing G-codes by performing side-channel attacks on a 3D printer using a smartphone from greater distances. By training models using Gradient Boosted Decision Trees, our prediction results for each axial movement, stepper, nozzle, and rotor speed achieve high accuracy, with a mean of 98.80%, without any intrusiveness. We effectively deploy the model in a real-world examination, achieving a Mean Tendency Error (MTE) of 4.47% on a plain G-code design.
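The abstract describes predicting motor parameters from emitted signals with Gradient Boosted Decision Trees. As a hedged illustration only (not the authors' pipeline; the signal model, sample rate, feature set, and class labels below are invented for the sketch), scikit-learn's `GradientBoostingClassifier` can separate two synthetic "motor" tones using simple spectral features:

```python
# Hypothetical sketch: classify which stepper motor is active from short
# acoustic frames, using gradient boosted decision trees as a stand-in
# for the paper's model. All parameters here are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
FS = 8000      # assumed sample rate (Hz)
FRAME = 256    # samples per analysis frame

def synth_frames(freq, n):
    """Synthesize n noisy tone frames standing in for one motor's hum."""
    t = np.arange(FRAME) / FS
    return np.array([np.sin(2 * np.pi * freq * t + rng.uniform(0, 2 * np.pi))
                     + 0.3 * rng.standard_normal(FRAME) for _ in range(n)])

def features(frames):
    """Simple spectral features: log magnitude of the first 32 FFT bins."""
    mag = np.abs(np.fft.rfft(frames, axis=1))[:, :32]
    return np.log1p(mag)

# Two hypothetical classes: X-axis motor (~440 Hz) vs Y-axis motor (~880 Hz)
X = features(np.vstack([synth_frames(440, 200), synth_frames(880, 200)]))
y = np.array([0] * 200 + [1] * 200)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25,
                                      random_state=0, stratify=y)
clf = GradientBoostingClassifier(n_estimators=100, max_depth=3,
                                 random_state=0)
clf.fit(Xtr, ytr)
acc = clf.score(Xte, yte)
```

On cleanly separable synthetic tones like these, the classifier reaches near-perfect held-out accuracy; the paper's real task works from far noisier smartphone recordings and predicts several parameters at once.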
Related papers
- Intellectual Property Protection for 3D Gaussian Splatting Assets: A Survey [89.1493370852336]
3D Gaussian Splatting (3DGS) has become a mainstream representation for real-time 3D scene synthesis, enabling applications in virtual and augmented reality, robotics, and 3D content creation. Its rising commercial value and explicit parametric structure raise emerging intellectual property (IP) protection concerns. Current progress remains fragmented, lacking a unified view of the underlying mechanisms, protection paradigms, and robustness challenges.
arXiv Detail & Related papers (2026-02-02T16:27:51Z)
- QuietPrint: Protecting 3D Printers Against Acoustic Side-Channel Attacks [0.5729426778193398]
Cyber-attacks targeting the 3D printing process are becoming increasingly common. One major concern is intellectual property (IP) theft, where a malicious attacker gains access to the design file. In this work, we investigate the possibility of IP theft via acoustic side channels and propose a novel method to protect 3D printers.
arXiv Detail & Related papers (2026-02-02T15:04:00Z)
- Turning Hearsay into Discovery: Industrial 3D Printer Side Channel Information Translated to Stealing the Object Design [46.740145853674875]
We show for the first time that side-channel attacks are a serious threat to industrial grade 3D printers. We reconstruct the 3D printed model solely from the collected power side-channel data.
arXiv Detail & Related papers (2025-09-22T19:46:21Z)
- One Video to Steal Them All: 3D-Printing IP Theft through Optical Side-Channels [6.082508741253127]
We show that an adversary with access to video recordings of the 3D printing process can reverse engineer the underlying 3D print instructions. Our model tracks the printer nozzle movements during the printing process and maps the corresponding trajectory into G-code instructions. It identifies the correct parameters such as feed rate and extrusion rate, enabling successful intellectual property theft.
arXiv Detail & Related papers (2025-06-27T04:34:07Z)
- Deciphering GunType Hierarchy through Acoustic Analysis of Gunshot Recordings [72.55205022155394]
Gun violence and mass shootings represent a significant threat to public safety. Current commercial gunshot detection systems, while effective, often come with prohibitive costs. This research explores a cost-effective alternative by leveraging acoustic analysis of gunshot recordings.
arXiv Detail & Related papers (2025-06-25T17:00:21Z)
- Poison-splat: Computation Cost Attack on 3D Gaussian Splatting [90.88713193520917]
We reveal a significant security vulnerability that has been largely overlooked in 3DGS.
The adversary can poison the input images to drastically increase the memory and computation time needed for 3DGS training.
Such a computation cost attack is achieved by addressing a bi-level optimization problem.
arXiv Detail & Related papers (2024-10-10T17:57:29Z)
- BadFusion: 2D-Oriented Backdoor Attacks against 3D Object Detection [12.987427748635037]
3D object detection plays an important role in autonomous driving; however, its vulnerability to backdoor attacks has become evident.
Existing backdoor attacks against 3D object detection primarily poison 3D LiDAR signals.
We propose an innovative 2D-oriented backdoor attack against LiDAR-camera fusion methods for 3D object detection, named BadFusion.
arXiv Detail & Related papers (2024-05-06T22:02:38Z)
- OffRAMPS: An FPGA-based Intermediary for Analysis and Modification of Additive Manufacturing Control Systems [21.84830062424073]
Cybersecurity threats in Additive Manufacturing (AM) are an increasing concern.
AM is now being used for parts in the aerospace, transportation, and medical domains.
The OFFRAMPS platform is based on the open-source 3D printer control board RAMPS.
arXiv Detail & Related papers (2024-04-23T18:39:50Z)
- NeRF-MAE: Masked AutoEncoders for Self-Supervised 3D Representation Learning for Neural Radiance Fields [57.617972778377215]
We show how to generate effective 3D representations from posed RGB images.
We pretrain this representation at scale on our proposed curated posed-RGB data, totaling over 1.8 million images.
Our novel self-supervised pretraining for NeRFs, NeRF-MAE, scales remarkably well and improves performance on various challenging 3D tasks.
arXiv Detail & Related papers (2024-04-01T17:59:55Z)
- Secure Information Embedding and Extraction in Forensic 3D Fingerprinting [15.196378932114518]
The prevalence of 3D printing poses a significant risk to public safety.
Several approaches have been taken to tag 3D-prints with identifying information.
Known as fingerprints, this information is written into the object using various bit embedding techniques.
arXiv Detail & Related papers (2024-03-07T22:03:46Z)
- AdvMono3D: Advanced Monocular 3D Object Detection with Depth-Aware Robust Adversarial Training [64.14759275211115]
We propose a depth-aware robust adversarial training method for monocular 3D object detection, dubbed DART3D.
Our adversarial training approach capitalizes on the inherent uncertainty, enabling the model to significantly improve its robustness against adversarial attacks.
arXiv Detail & Related papers (2023-09-03T07:05:32Z)
- FastPillars: A Deployment-friendly Pillar-based 3D Detector [63.0697065653061]
Existing BEV-based (i.e., Bird's-Eye View) detectors favor sparse convolutions (known as SPConv) to speed up training and inference.
FastPillars delivers state-of-the-art accuracy on the Open dataset with a 1.8X speed-up and a 3.8 mAPH/L2 improvement over CenterPoint (SPConv-based).
arXiv Detail & Related papers (2023-02-05T12:13:27Z)
- 3D-EDM: Early Detection Model for 3D-Printer Faults [0.0]
It is difficult to use a 3D printer with accurate calibration.
Previous studies have suggested that these problems can be detected using sensor data and image data with machine learning methods.
Considering actual use in the future, we focus on generating a lightweight early detection model with easily collectable data.
arXiv Detail & Related papers (2022-03-23T02:46:26Z)
- Back to Reality: Weakly-supervised 3D Object Detection with Shape-guided Label Enhancement [93.77156425817178]
We propose a weakly-supervised approach for 3D object detection, which makes it possible to train strong 3D detector with position-level annotations.
Our method, namely Back to Reality (BR), makes use of synthetic 3D shapes to convert the weak labels into fully-annotated virtual scenes.
With less than 5% of the labeling labor, we achieve comparable detection performance with some popular fully-supervised approaches on the widely used ScanNet dataset.
arXiv Detail & Related papers (2022-03-10T08:51:32Z)
- Expandable YOLO: 3D Object Detection from RGB-D Images [64.14512458954344]
This paper aims at constructing a light-weight object detector that inputs a depth and a color image from a stereo camera.
By extending the network architecture of YOLOv3 to 3D in the middle, it is possible to output in the depth direction.
Intersection over Union (IoU) in 3D space is introduced to confirm the accuracy of region extraction results.
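As a minimal sketch of the 3D IoU metric mentioned above, assuming axis-aligned boxes (the paper's exact box parameterization is not given here):

```python
# Hedged sketch (not the paper's code): IoU between two axis-aligned 3D
# boxes, each given as a tuple (xmin, ymin, zmin, xmax, ymax, zmax).
def iou_3d(a, b):
    # Overlap along each axis; clamp to zero when the boxes are disjoint.
    dx = max(0.0, min(a[3], b[3]) - max(a[0], b[0]))
    dy = max(0.0, min(a[4], b[4]) - max(a[1], b[1]))
    dz = max(0.0, min(a[5], b[5]) - max(a[2], b[2]))
    inter = dx * dy * dz
    vol_a = (a[3] - a[0]) * (a[4] - a[1]) * (a[5] - a[2])
    vol_b = (b[3] - b[0]) * (b[4] - b[1]) * (b[5] - b[2])
    # IoU = intersection volume / union volume
    return inter / (vol_a + vol_b - inter) if inter > 0 else 0.0
```

For example, two identical unit cubes give an IoU of 1.0, while the 2x2x2 boxes at (0,0,0) and (1,1,1) overlap in a single unit cube, giving 1/15.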
arXiv Detail & Related papers (2020-06-26T07:32:30Z)
- 3D Printed Brain-Controlled Robot-Arm Prosthetic via Embedded Deep Learning from sEMG Sensors [4.901124285608471]
Our work proposes to use transfer learning techniques applied to the Google Inception model to retrain the final layer for surface electromyography (sEMG) classification.
Data have been collected using the Thalmic Labs Myo Armband and used to generate graph images comprised of 8 subplots per image.
The deep learning model, Inception-v3, is trained with transfer learning for accurate prediction on real-time input of new data.
A brain-controlled robot arm was produced using a 3D printer and off-the-shelf hardware to control it.
arXiv Detail & Related papers (2020-05-04T19:14:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.