Enhancing TinyML Security: Study of Adversarial Attack Transferability
- URL: http://arxiv.org/abs/2407.11599v2
- Date: Thu, 18 Jul 2024 05:49:46 GMT
- Title: Enhancing TinyML Security: Study of Adversarial Attack Transferability
- Authors: Parin Shah, Yuvaraj Govindarajulu, Pavan Kulkarni, Manojkumar Parmar
- Abstract summary: This research delves into the adversarial vulnerabilities of AI models on resource-constrained embedded hardware.
Our findings reveal that adversarial attacks crafted on powerful host machines can be transferred to smaller, less secure devices such as the ESP32 and Raspberry Pi.
This illustrates that adversarial attacks extend to tiny devices, underscoring their vulnerabilities and emphasizing the need for reinforced security measures in TinyML deployments.
- Score: 0.35998666903987897
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The recent strides in artificial intelligence (AI) and machine learning (ML) have propelled the rise of TinyML, a paradigm enabling AI computations at the edge without dependence on cloud connections. While TinyML offers real-time data analysis and swift responses critical for diverse applications, its devices' intrinsic resource limitations expose them to security risks. This research delves into the adversarial vulnerabilities of AI models on resource-constrained embedded hardware, with a focus on Model Extraction and Evasion Attacks. Our findings reveal that adversarial attacks crafted on powerful host machines can be transferred to smaller, less secure devices such as the ESP32 and Raspberry Pi. This illustrates that adversarial attacks extend to tiny devices, underscoring their vulnerabilities and emphasizing the necessity for reinforced security measures in TinyML deployments. This exploration enhances the comprehension of security challenges in TinyML and offers insights for safeguarding sensitive data and ensuring device dependability in AI-powered edge computing settings.
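The host-to-device transfer setting described in the abstract can be outlined in a few lines of TensorFlow. The sketch below is a minimal illustration, not the paper's exact procedure: it assumes an FGSM-style evasion attack (the abstract only says "Evasion Attacks"), a Keras host model that outputs softmax probabilities, and an int8 TFLite build of the same network standing in for the ESP32 / Raspberry Pi deployment; the model and data names are placeholders.

```python
import numpy as np
import tensorflow as tf

def fgsm_example(host_model, x, y_true, eps=0.03):
    """Craft one adversarial example against the full-precision host model (FGSM)."""
    x = tf.convert_to_tensor(x[None, ...], dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        # Assumes the model outputs class probabilities (softmax), not raw logits.
        loss = tf.keras.losses.sparse_categorical_crossentropy([y_true], host_model(x))
    grad = tape.gradient(loss, x)
    # Step in the direction that increases the loss; keep pixels in [0, 1].
    return tf.clip_by_value(x + eps * tf.sign(grad), 0.0, 1.0)[0].numpy()

def tflite_predict(interpreter, x):
    """Classify one input with the (possibly int8-quantized) on-device model."""
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    batch = x.astype(np.float32)[None, ...]
    if inp["dtype"] == np.int8:  # quantize the input if the model expects int8
        scale, zero_point = inp["quantization"]
        batch = np.round(batch / scale + zero_point).astype(np.int8)
    interpreter.set_tensor(inp["index"], batch)
    interpreter.invoke()
    return int(np.argmax(interpreter.get_tensor(out["index"])))

# Hypothetical usage; file names and test arrays are placeholders.
# host_model  = tf.keras.models.load_model("host_cnn.keras")
# interpreter = tf.lite.Interpreter(model_path="tiny_cnn_int8.tflite")
# interpreter.allocate_tensors()
# x_adv = fgsm_example(host_model, x_test[0], y_test[0])
# transferred = tflite_predict(interpreter, x_adv) != int(y_test[0])  # True if the attack transfers
```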
Related papers
- Countering Autonomous Cyber Threats [40.00865970939829]
Foundation Models present dual-use concerns broadly and within the cyber domain specifically.
Recent research has shown the potential for these advanced models to inform or independently execute offensive cyberspace operations.
This work evaluates several state-of-the-art FMs on their ability to compromise machines in an isolated network and investigates defensive mechanisms to defeat such AI-powered attacks.
arXiv Detail & Related papers (2024-10-23T22:46:44Z)
- Mitigating Backdoor Threats to Large Language Models: Advancement and Challenges [46.032173498399885]
Large Language Models (LLMs) have significantly impacted various domains, including Web search, healthcare, and software development.
As these models scale, they become more vulnerable to cybersecurity risks, particularly backdoor attacks.
arXiv Detail & Related papers (2024-09-30T06:31:36Z)
- Development of an Edge Resilient ML Ensemble to Tolerate ICS Adversarial Attacks [0.9437165725355702]
We build a resilient edge machine learning architecture that is designed to withstand adversarial attacks.
The resilient edge ML (reML) architecture is based on the Resilient DDDAS paradigm, Moving Target Defense (MTD) theory, and TinyML; a rough sketch of the MTD idea follows this entry.
The proposed approach is power-efficient and privacy-preserving and, therefore, can be deployed on power-constrained devices to enhance ICS security.
arXiv Detail & Related papers (2024-09-26T19:37:37Z)
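As a rough illustration of the Moving Target Defense idea mentioned in the entry above (not the reML design itself, whose details are not given here), the sketch below rotates inference across an ensemble of TFLite models so that an attacker cannot reliably craft inputs against a single fixed target; the model file names are placeholders.

```python
import random
import numpy as np
import tensorflow as tf

class MovingTargetEnsemble:
    """Toy MTD-style wrapper: each query is served by a randomly chosen ensemble member."""

    def __init__(self, model_paths):
        self.interpreters = []
        for path in model_paths:
            interp = tf.lite.Interpreter(model_path=path)
            interp.allocate_tensors()
            self.interpreters.append(interp)

    def predict(self, x):
        # Random selection per query is the "moving target": the attacked surface keeps changing.
        interp = random.choice(self.interpreters)
        inp = interp.get_input_details()[0]
        out = interp.get_output_details()[0]
        # Assumes float32-input models; int8 models would additionally need input quantization.
        interp.set_tensor(inp["index"], x.astype(inp["dtype"])[None, ...])
        interp.invoke()
        return int(np.argmax(interp.get_tensor(out["index"])))

# Hypothetical usage with placeholder models trained with different seeds or architectures:
# ensemble = MovingTargetEnsemble(["tiny_a.tflite", "tiny_b.tflite", "tiny_c.tflite"])
# label = ensemble.predict(x_sample)
```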
- FedMADE: Robust Federated Learning for Intrusion Detection in IoT Networks Using a Dynamic Aggregation Method [7.842334649864372]
The proliferation of Internet of Things (IoT) devices across multiple sectors has escalated serious network security concerns.
Traditional Machine Learning (ML)-based Intrusion Detection Systems (IDSs) for cyber-attack classification require data transmission from IoT devices to a centralized server for traffic analysis, raising severe privacy concerns.
We introduce FedMADE, a novel dynamic aggregation method that clusters devices by their traffic patterns and aggregates local models based on their contributions to overall performance; a sketch of contribution-weighted aggregation follows this entry.
arXiv Detail & Related papers (2024-08-13T18:42:34Z)
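The FedMADE entry above describes contribution-weighted aggregation of local models; the exact FedMADE algorithm is not reproduced here. The sketch below shows only the generic idea, with hypothetical per-client contribution scores standing in for whatever metric the paper actually uses.

```python
import numpy as np

def weighted_aggregate(client_weights, contribution_scores):
    """Average per-layer client parameters, weighted by each client's contribution score.

    client_weights: list of clients, each a list of numpy arrays (one array per layer).
    contribution_scores: one non-negative score per client (hypothetical metric,
    e.g. local validation accuracy), normalized below into a convex combination.
    """
    scores = np.asarray(contribution_scores, dtype=np.float64)
    scores = scores / scores.sum()
    aggregated = []
    for layer_idx in range(len(client_weights[0])):
        # Stack the same layer from every client and take the score-weighted sum.
        layer_stack = np.stack([cw[layer_idx] for cw in client_weights], axis=0)
        aggregated.append(np.tensordot(scores, layer_stack, axes=1))
    return aggregated

# Hypothetical usage: three clients whose scores might come from their traffic-pattern
# cluster and local performance (placeholders, not FedMADE's actual weighting rule).
# global_weights = weighted_aggregate([w_client1, w_client2, w_client3], [0.9, 0.6, 0.3])
```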
- Detecting and Understanding Vulnerabilities in Language Models via Mechanistic Interpretability [44.99833362998488]
Large Language Models (LLMs) have shown impressive performance across a wide range of tasks.
LLMs in particular are known to be vulnerable to adversarial attacks, where an imperceptible change to the input can mislead the output of the model.
We propose a method based on Mechanistic Interpretability (MI) techniques to guide the detection and understanding of such vulnerabilities.
arXiv Detail & Related papers (2024-07-29T09:55:34Z)
- On TinyML and Cybersecurity: Electric Vehicle Charging Infrastructure Use Case [17.1066653907873]
This paper reviews challenges of TinyML techniques, such as power consumption, limited memory, and computational constraints.
It also explores potential solutions to these challenges, such as energy harvesting, computational optimization techniques, and transfer learning for privacy preservation.
It presents an experimental case study that enhances cybersecurity in EVCI using TinyML, evaluated against traditional ML in terms of reduced delay and memory usage.
arXiv Detail & Related papers (2024-04-25T01:57:11Z)
- Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review [51.31851488650698]
Machine learning (ML) is increasingly used in the internet-of-things (IoT)-based smart grid.
Adversarial distortion injected into the power signal can greatly affect the system's normal control and operation.
It is imperative to conduct vulnerability assessments for ML-based smart grid applications (MLsgAPPs) in the context of safety-critical power systems.
arXiv Detail & Related papers (2023-08-30T03:29:26Z)
- Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection [64.67495502772866]
Large Language Models (LLMs) are increasingly being integrated into various applications.
We show how attackers can use Prompt Injection attacks to override original instructions and circumvent employed controls.
We derive a comprehensive taxonomy from a computer security perspective to systematically investigate impacts and vulnerabilities.
arXiv Detail & Related papers (2023-02-23T17:14:38Z)
- Adversarial Machine Learning Threat Analysis in Open Radio Access Networks [37.23982660941893]
The Open Radio Access Network (O-RAN) is a new, open, adaptive, and intelligent RAN architecture.
In this paper, we present a systematic adversarial machine learning threat analysis for the O-RAN.
arXiv Detail & Related papers (2022-01-16T17:01:38Z)
- Robust Machine Learning Systems: Challenges, Current Trends, Perspectives, and the Road Ahead [24.60052335548398]
Machine Learning (ML) techniques have been rapidly adopted by smart Cyber-Physical Systems (CPS) and the Internet of Things (IoT).
These systems are vulnerable to various security and reliability threats, at both the hardware and software levels, that compromise their accuracy.
This paper summarizes the prominent vulnerabilities of modern ML systems and highlights successful defenses and mitigation techniques against them.
arXiv Detail & Related papers (2021-01-04T20:06:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.