OffRAMPS: An FPGA-based Intermediary for Analysis and Modification of Additive Manufacturing Control Systems
- URL: http://arxiv.org/abs/2404.15446v1
- Date: Tue, 23 Apr 2024 18:39:50 GMT
- Title: OffRAMPS: An FPGA-based Intermediary for Analysis and Modification of Additive Manufacturing Control Systems
- Authors: Jason Blocklove, Md Raz, Prithwish Basu Roy, Hammond Pearce, Prashanth Krishnamurthy, Farshad Khorrami, Ramesh Karri
- Abstract summary: Cybersecurity threats in Additive Manufacturing (AM) are an increasing concern.
AM is now being used for parts in the aerospace, transportation, and medical domains.
The "OFFRAMPS" platform is based on the open-source 3D printer control board "RAMPS."
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Cybersecurity threats in Additive Manufacturing (AM) are an increasing concern as AM adoption continues to grow. AM is now being used for parts in the aerospace, transportation, and medical domains. Threat vectors which allow for part compromise are particularly concerning, as any failure in these domains would have life-threatening consequences. A major challenge to investigation of AM part-compromises comes from the difficulty in evaluating and benchmarking both identified threat vectors as well as methods for detecting adversarial actions. In this work, we introduce a generalized platform for systematic analysis of attacks against and defenses for 3D printers. Our "OFFRAMPS" platform is based on the open-source 3D printer control board "RAMPS." OFFRAMPS allows analysis, recording, and modification of all control signals and I/O for a 3D printer. We show the efficacy of OFFRAMPS by presenting a series of case studies based on several Trojans, including ones identified in the literature, and show that OFFRAMPS can both emulate and detect these attacks, i.e., it can both change and detect arbitrary changes to the g-code print commands.
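The detection capability claimed above, flagging arbitrary changes to the g-code command stream, can be illustrated with a minimal sketch. This is a hypothetical illustration, not the paper's FPGA implementation: an intermediary compares each command observed on the wire against the intended print file.

```python
def normalize(line: str) -> str:
    """Strip g-code comments and whitespace so only the command is compared."""
    return line.split(";", 1)[0].strip()

def detect_tampering(expected_gcode, observed_gcode):
    """Return (index, expected, observed) for every mismatched command."""
    exp = [normalize(l) for l in expected_gcode if normalize(l)]
    obs = [normalize(l) for l in observed_gcode if normalize(l)]
    mismatches = [(i, e, o) for i, (e, o) in enumerate(zip(exp, obs)) if e != o]
    # A length difference indicates inserted or dropped commands.
    if len(exp) != len(obs):
        mismatches.append((min(len(exp), len(obs)), "<length mismatch>", ""))
    return mismatches

expected = ["G28 ; home all axes", "G1 X10 Y10 F3000", "G1 X20 Y20 E5"]
observed = ["G28", "G1 X10 Y10 F3000", "G1 X20 Y20 E15"]  # extrusion altered
print(detect_tampering(expected, observed))
# → [(2, 'G1 X20 Y20 E5', 'G1 X20 Y20 E15')]
```

A real intermediary like OFFRAMPS works on live control signals rather than file diffs, but the principle is the same: any divergence between intended and observed commands is a tamper indicator.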
Related papers
- LaserEscape: Detecting and Mitigating Optical Probing Attacks
We introduce LaserEscape, the first fully digital and FPGA-compatible countermeasure to detect and mitigate optical probing attacks.
LaserEscape incorporates digital delay-based sensors to reliably detect the physical alteration on the fabric caused by laser beam irradiations in real time.
As a response to the attack, LaserEscape deploys real-time hiding approaches using randomized hardware reconfigurability.
arXiv Detail & Related papers (2024-05-06T16:49:11Z)
- Defense against Joint Poison and Evasion Attacks: A Case Study of DERMS
We propose the first framework of IDS that is robust against joint poisoning and evasion attacks.
We verify the robustness of our method on the IEEE-13 bus feeder model against a diverse set of poisoning and evasion attack scenarios.
arXiv Detail & Related papers (2024-05-05T16:24:30Z)
- Stumbling Blocks: Stress Testing the Robustness of Machine-Generated Text Detectors Under Attacks
We study the robustness of popular machine-generated text detectors under attacks from diverse categories: editing, paraphrasing, prompting, and co-generating.
Our attacks assume limited access to the generator LLMs, and we compare the performance of detectors on different attacks under different budget levels.
Averaging all detectors, the performance drops by 35% across all attacks.
arXiv Detail & Related papers (2024-02-18T16:36:00Z)
- Survey of Security Issues in Memristor-based Machine Learning Accelerators for RF Analysis
We explore security aspects of a new computing paradigm that combines novel memristors and traditional CMOS.
Memristors have different properties than traditional CMOS which can potentially be exploited by attackers.
The mixed-signal approximate computing model has different vulnerabilities than traditional digital implementations.
arXiv Detail & Related papers (2023-12-01T21:44:35Z)
- X-Adv: Physical Adversarial Object Attacks against X-ray Prohibited Item Detection
Adversarial attacks targeting texture-free X-ray images are underexplored.
In this paper, we take the first step toward the study of adversarial attacks targeted at X-ray prohibited item detection.
We propose X-Adv to generate physically printable metals that act as an adversarial agent capable of deceiving X-ray detectors.
arXiv Detail & Related papers (2023-02-19T06:31:17Z)
- A Comprehensive Study of the Robustness for LiDAR-based 3D Object Detectors against Adversarial Attacks
3D object detectors are increasingly crucial for security-critical tasks.
It is imperative to understand their robustness against adversarial attacks.
This paper presents the first comprehensive evaluation and analysis of the robustness of LiDAR-based 3D detectors under adversarial attacks.
arXiv Detail & Related papers (2022-12-20T13:09:58Z)
- A Human-in-the-Middle Attack against Object Detection Systems
We propose a novel hardware attack inspired by Man-in-the-Middle attacks in cryptography.
This attack generates a Universal Adversarial Perturbation (UAP) and injects the perturbation between the USB camera and the detection system.
These findings raise serious concerns for applications of deep learning models in safety-critical systems, such as autonomous driving.
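The injection step can be sketched in a few lines; this is a simplified, hypothetical illustration (pure Python on a flattened pixel buffer, not the paper's hardware setup), showing how a fixed perturbation bounded by a budget epsilon would be added to every frame in transit.

```python
def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def inject_uap(frame, uap, epsilon=8):
    """Add a fixed universal perturbation to every pixel of a frame,
    as a man-in-the-middle device on the USB video stream might.
    The perturbation is clipped to +/- epsilon (0-255 pixel units)."""
    return [clamp(p + clamp(d, -epsilon, epsilon), 0, 255)
            for p, d in zip(frame, uap)]

frame = [128] * 12                  # stand-in 2x2 RGB frame, flattened
uap = [15, -30, 3, 7, -2, 40] * 2   # stand-in perturbation values
adv = inject_uap(frame, uap)
print(adv)
# → [136, 120, 131, 135, 126, 136, 136, 120, 131, 135, 126, 136]
```

Because the same perturbation is reused for all frames ("universal"), the attacker needs no per-frame computation, which is what makes the hardware man-in-the-middle placement practical.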
arXiv Detail & Related papers (2022-08-15T13:21:41Z)
- EVHA: Explainable Vision System for Hardware Testing and Assurance -- An Overview
We propose Explainable Vision System for Hardware Testing and Assurance (EVHA) in this work.
EVHA can detect the smallest possible change to a design in a low-cost, accurate, and fast manner.
This article provides an overview on the design, development, implementation, and analysis of our defense system.
arXiv Detail & Related papers (2022-07-20T02:58:46Z)
- Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks
We present the first study of security issues of MSF-based perception in AD systems.
We generate a physically-realizable, adversarial 3D-printed object that misleads an AD system to fail in detecting it and thus crash into it.
Our results show that the attack achieves over 90% success rate across different object types and MSF.
arXiv Detail & Related papers (2021-06-17T05:11:07Z)
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection
Adversarial EXEmples can bypass machine-learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
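The DOS header these attacks manipulate can be inspected with a short sketch (illustrative only; the field layout follows the standard PE/MZ format, and the comments paraphrase the attack descriptions above).

```python
import struct

def parse_dos_header(data: bytes) -> dict:
    """Parse the 64-byte MZ/DOS header of a PE file.

    The Full DOS attack repurposes the loader-ignored bytes between the
    'MZ' magic and the e_lfanew field; the Extend attack grows that
    region by pushing e_lfanew (and everything after it) deeper into
    the file.
    """
    if data[:2] != b"MZ":
        raise ValueError("not a PE/MZ file")
    # e_lfanew (offset of the PE header) lives at fixed offset 0x3C.
    e_lfanew = struct.unpack_from("<I", data, 0x3C)[0]
    return {
        "magic": data[:2],
        "e_lfanew": e_lfanew,
        # Byte range an attacker can rewrite without breaking the loader:
        "editable_region": (2, 0x3C),
    }

# Minimal fake header: 'MZ', zero padding, e_lfanew = 0x80 at offset 0x3C
fake = b"MZ" + b"\x00" * 58 + struct.pack("<I", 0x80)
print(parse_dos_header(fake))
```

The key point the sketch makes concrete is that the Windows loader only reads the magic and e_lfanew from this header, so everything in between is a free payload channel.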
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
- A Self-supervised Approach for Adversarial Robustness
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.