Survey of Security Issues in Memristor-based Machine Learning Accelerators for RF Analysis
- URL: http://arxiv.org/abs/2312.00942v1
- Date: Fri, 1 Dec 2023 21:44:35 GMT
- Title: Survey of Security Issues in Memristor-based Machine Learning Accelerators for RF Analysis
- Authors: William Lillis, Max Cohen Hoffing, Wayne Burleson
- Abstract summary: We explore security aspects of a new computing paradigm that combines novel memristors and traditional CMOS.
Memristors have properties different from traditional CMOS that attackers can potentially exploit.
The mixed-signal approximate computing model has different vulnerabilities than traditional digital implementations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We explore security aspects of a new computing paradigm that combines novel memristors and traditional Complementary Metal Oxide Semiconductor (CMOS) technology to construct a highly efficient analog and/or digital fabric that is especially well-suited to Machine Learning (ML) inference processors for Radio Frequency (RF) signals. Memristors have properties different from traditional CMOS that can potentially be exploited by attackers. In addition, the mixed-signal approximate computing model has different vulnerabilities than traditional digital implementations. However, both the memristor and the ML computation can be leveraged to create security mechanisms and countermeasures, ranging from lightweight cryptography and identifiers (e.g. Physically Unclonable Functions (PUFs), fingerprints, and watermarks) to entropy sources, hardware obfuscation, and leakage/attack detection methods. Three different threat models are proposed: 1) Supply Chain, 2) Physical Attacks, and 3) Remote Attacks. For each threat model, potential vulnerabilities and defenses are identified. This survey reviews a variety of recent work from the hardware and ML security literature and proposes open problems for both attack and defense. The survey emphasizes the growing area of RF signal analysis and identification, in both the commercial space and military applications, together with their threat models. We differ from other recent surveys that target ML in general while neglecting RF applications.
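To make the dual-use theme concrete, the following is a minimal sketch (ours, not from the paper) of how per-device memristor conductance variation both perturbs an analog crossbar dot product and can be read out as a PUF-like fingerprint; the array size, conductance range, and variation model are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Nominal conductance matrix programmed into a memristor crossbar (siemens).
G_nominal = rng.uniform(1e-6, 1e-4, size=(8, 8))

# Per-device process variation: each chip realizes slightly different
# conductances. This variation degrades analog ML accuracy, but it is also
# stable and device-unique, so it can double as a PUF-like identifier.
def program_crossbar(G, variation=0.05):
    return G * (1 + variation * rng.standard_normal(G.shape))

G_chip = program_crossbar(G_nominal)

# Analog inference: input voltages applied on rows, currents summed on
# columns (Ohm's law + Kirchhoff's current law give a dot product "for free").
v_in = rng.uniform(0.0, 1.0, size=8)
i_out = v_in @ G_chip

# PUF-style response: compare column currents pairwise to get a bitstring
# that depends on this chip's variation around the programmed weights.
response = (i_out[:-1] > i_out[1:]).astype(int)
print("column currents (A):", i_out)
print("fingerprint bits:", response)
```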
Related papers
- Defining and Evaluating Physical Safety for Large Language Models [62.4971588282174]
Large Language Models (LLMs) are increasingly used to control robotic systems such as drones.
Their risks of causing physical threats and harm in real-world applications remain unexplored.
We classify the physical safety risks of drones into four categories: (1) human-targeted threats, (2) object-targeted threats, (3) infrastructure attacks, and (4) regulatory violations.
arXiv Detail & Related papers (2024-11-04T17:41:25Z)
- FedMADE: Robust Federated Learning for Intrusion Detection in IoT Networks Using a Dynamic Aggregation Method [7.842334649864372]
The proliferation of Internet of Things (IoT) devices across multiple sectors has escalated serious network security concerns.
Traditional Machine Learning (ML)-based Intrusion Detection Systems (IDSs) for cyber-attack classification require data transmission from IoT devices to a centralized server for traffic analysis, raising severe privacy concerns.
We introduce FedMADE, a novel dynamic aggregation method, which clusters devices by their traffic patterns and aggregates local models based on their contributions towards overall performance.
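To make the aggregation idea concrete, here is a minimal sketch of contribution-weighted federated averaging; the contribution scores below are placeholders, not FedMADE's actual clustering-based measure.

```python
import numpy as np

def aggregate(local_models, contributions):
    """Weighted average of local model parameter vectors.

    contributions: per-client scores (e.g., each cluster model's measured
    contribution to overall performance) -- a stand-in for FedMADE's method.
    """
    w = np.asarray(contributions, dtype=float)
    w = w / w.sum()                      # normalize to a convex combination
    stacked = np.stack(local_models)     # shape: (n_clients, n_params)
    return (w[:, None] * stacked).sum(axis=0)

# Three IoT clients with 4-parameter models; client 3 contributes least.
models = [np.array([1.0, 2.0, 3.0, 4.0]),
          np.array([1.1, 2.1, 2.9, 4.2]),
          np.array([9.0, 9.0, 9.0, 9.0])]   # outlier / low-quality update
global_model = aggregate(models, contributions=[0.45, 0.45, 0.10])
print(global_model)
```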
arXiv Detail & Related papers (2024-08-13T18:42:34Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
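A rough sketch of the frequency-domain idea follows; a median-distance filter stands in for the paper's clustering of low-frequency components, and all sizes and thresholds are illustrative.

```python
import numpy as np
from scipy.fft import dct  # discrete cosine transform

def low_freq_signature(update, k=8):
    """DCT of a flattened model update, keeping the k lowest frequencies.
    The premise: the low-frequency band carries the gradient's useful
    information, and poisoned updates stand out there."""
    return dct(update, norm="ortho")[:k]

def filter_updates(updates, k=8):
    sigs = np.stack([low_freq_signature(u, k) for u in updates])
    center = np.median(sigs, axis=0)                 # crude robust center;
    dists = np.linalg.norm(sigs - center, axis=1)    # FreqFed clusters instead
    keep = dists <= np.median(dists) * 2.0
    return np.stack([u for u, ok in zip(updates, keep) if ok]).mean(axis=0)

rng = np.random.default_rng(0)
benign = [rng.normal(0.0, 0.1, 64) for _ in range(9)]
poisoned = [rng.normal(3.0, 0.1, 64)]                # backdoor-style outlier
agg = filter_updates(benign + poisoned)
print("aggregated update norm:", np.linalg.norm(agg))
```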
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- A Unified Hardware-based Threat Detector for AI Accelerators [12.96840649714218]
We design UniGuard, a novel unified and non-intrusive detection methodology to safeguard FPGA-based AI accelerators.
We employ a Time-to-Digital Converter to capture power fluctuations and train a supervised machine learning model to identify various types of threats.
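A hedged sketch of this detection pipeline on synthetic data: the traces, summary features, and classifier below are stand-ins for UniGuard's actual design, not real Time-to-Digital Converter measurements.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)

# Stand-ins for sensor-captured power traces: 200 windows of 256 samples.
# Label 0 = benign inference workload, 1 = anomalous activity (e.g., fault
# injection or a co-located side-channel attacker drawing power).
benign = rng.normal(0.0, 1.0, size=(100, 256))
attack = rng.normal(0.3, 1.3, size=(100, 256))   # shifted mean and variance
X = np.vstack([benign, attack])
y = np.array([0] * 100 + [1] * 100)

# Simple per-trace summary features; the paper trains on richer inputs.
feats = np.column_stack([X.mean(1), X.std(1), X.min(1), X.max(1)])

X_tr, X_te, y_tr, y_te = train_test_split(feats, y, test_size=0.3,
                                          random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```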
arXiv Detail & Related papers (2023-11-28T10:55:02Z)
- Code Polymorphism Meets Code Encryption: Confidentiality and Side-Channel Protection of Software Components [0.0]
PolEn is a toolchain and a processor architecture that combine countermeasures to provide effective mitigation of side-channel attacks.
Code encryption is supported by a processor extension such that machine instructions are only decrypted inside the CPU.
Code polymorphism is implemented by software means. It regularly changes the observable behaviour of the program, making it unpredictable for an attacker.
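A conceptual toy model of the two layers, not the PolEn toolchain itself: instructions are stored encrypted (a toy XOR cipher here) and each run executes a behaviourally different but semantically equivalent sequence.

```python
import random

KEY = 0x5A

def encrypt(code: bytes) -> bytes:
    """Toy stand-in for code encryption: what an attacker reading memory
    sees is ciphertext; decryption happens only 'inside the CPU'."""
    return bytes(b ^ KEY for b in code)

def polymorphic_variant(ops):
    """Toy code polymorphism: insert NOPs so each run has the same result
    but a different observable trace (timing, layout), which frustrates
    side-channel template building."""
    variant = []
    for op in ops:
        if random.random() < 0.5:
            variant.append("NOP")
        variant.append(op)
    return variant

program = ["LOAD r1, a", "LOAD r2, b", "ADD r3, r1, r2", "STORE r3, c"]
ciphertext = encrypt(" ; ".join(program).encode())
print("stored form:", ciphertext[:16], "...")          # attacker's view
print("executed form:", polymorphic_variant(program))  # differs each run
```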
arXiv Detail & Related papers (2023-10-11T09:16:10Z)
- Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review [51.31851488650698]
Machine learning (ML) is increasingly used in the Internet-of-Things (IoT)-based smart grid.
Adversarial distortion injected into the power signal will greatly affect the system's normal control and operation.
It is therefore imperative to conduct vulnerability assessments of MLsgAPPs applied in the context of safety-critical power systems.
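To illustrate the threat, here is a minimal sketch of a sign-flipping distortion against a hypothetical linear classifier over a sensed waveform; real MLsgAPPs are nonlinear, but gradient-based attacks follow the same recipe.

```python
import numpy as np

rng = np.random.default_rng(7)
w = rng.normal(size=128)                          # stand-in classifier weights
signal = np.sin(np.linspace(0, 8 * np.pi, 128))   # clean sensed waveform

def decide(x):                # 1 = "grid stable", 0 = "trip protection"
    return int(w @ x > 0)

# Minimum-norm distortion that flips a linear decision: step against the
# weight vector just far enough to cross the decision boundary.
score = w @ signal
eps = (abs(score) + 1e-3) / np.abs(w).sum()       # per-sample budget
x_adv = signal - np.sign(score) * eps * np.sign(w)

print("clean decision:", decide(signal))          # original behaviour
print("adversarial decision:", decide(x_adv))     # flipped
print("max per-sample distortion:", eps)          # small vs. unit amplitude
```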
arXiv Detail & Related papers (2023-08-30T03:29:26Z)
- Adv-Bot: Realistic Adversarial Botnet Attacks against Network Intrusion Detection Systems [0.7829352305480285]
A growing number of researchers have recently been investigating the feasibility of such attacks against machine learning-based security systems.
This study investigates the actual feasibility of adversarial attacks, specifically evasion attacks, against network-based intrusion detection systems.
Our goal is to create adversarial botnet traffic that can avoid detection while still performing all of its intended malicious functionality.
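A sketch of the core constraint in such evasion attacks: only non-functional features (timing, here) may change, so the malicious payload keeps working. The feature layout, weights, and detector below are hypothetical.

```python
import numpy as np

FEATURES = ["dst_port", "proto", "duration", "pkt_count",
            "mean_pkt_size", "mean_iat"]
MUTABLE = np.array([False, False, True, False, False, True])  # timing only

# Hypothetical linear NIDS: higher score = more bot-like; the large negative
# weight on mean inter-arrival time encodes "fast C2 beaconing is suspicious".
w = np.array([0.0004, 0.01, 0.1, 0.02, 0.005, -8.0])
b = -1.0
flow = np.array([6667, 6, 1.2, 30, 90, 0.04], dtype=float)    # IRC C2 flow

def detected(x):
    return w @ x + b > 0

# Greedy evasion: nudge only the timing features in the score-decreasing
# direction (slow the beacons, stretch the flow) until classified benign.
x = flow.copy()
steps = 0
while detected(x) and steps < 1000:
    x = x - 0.01 * np.sign(w) * MUTABLE * np.abs(x)
    steps += 1

print(f"evaded after {steps} steps:", not detected(x))
print("functional fields untouched:", np.allclose(x[~MUTABLE], flow[~MUTABLE]))
```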
arXiv Detail & Related papers (2023-03-12T14:01:00Z)
- Adversarial attacks and defenses on ML- and hardware-based IoT device fingerprinting and identification [0.0]
This work proposes an LSTM-CNN architecture based on hardware performance behavior for individual device identification.
Previous techniques have been compared with the proposed architecture using a hardware performance dataset collected from 45 Raspberry Pi devices.
Adversarial training and model distillation defense techniques are selected to improve the model's resilience to evasion attacks.
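A minimal sketch of what an LSTM-CNN identifier over hardware-performance time series might look like (PyTorch; the layer sizes are illustrative, not the paper's, though 45 output classes matches its 45 Raspberry Pi devices):

```python
import torch
import torch.nn as nn

class LSTMCNN(nn.Module):
    """Sketch of a CNN + LSTM device identifier over performance counters."""
    def __init__(self, n_features=10, n_devices=45):
        super().__init__()
        self.conv = nn.Sequential(             # local patterns per window
            nn.Conv1d(n_features, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(32, 64, batch_first=True)  # temporal structure
        self.head = nn.Linear(64, n_devices)           # one class per device

    def forward(self, x):                 # x: (batch, time, features)
        z = self.conv(x.transpose(1, 2))  # Conv1d wants (batch, channels, time)
        _, (h, _) = self.lstm(z.transpose(1, 2))
        return self.head(h[-1])

model = LSTMCNN()
logits = model(torch.randn(8, 100, 10))   # 8 traces, 100 steps, 10 counters
print(logits.shape)                       # torch.Size([8, 45])
```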
arXiv Detail & Related papers (2022-12-30T13:11:35Z)
- Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks [62.923992740383966]
We present the first study of security issues of MSF-based perception in AD systems.
We generate a physically-realizable, adversarial 3D-printed object that misleads an AD system to fail in detecting it and thus crash into it.
Our results show that the attack achieves over a 90% success rate across different object types and MSF algorithms.
arXiv Detail & Related papers (2021-06-17T05:11:07Z)
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)