Understanding the Security Landscape of Embedded Non-Volatile Memories: A Comprehensive Survey
- URL: http://arxiv.org/abs/2505.17253v1
- Date: Thu, 22 May 2025 20:02:48 GMT
- Title: Understanding the Security Landscape of Embedded Non-Volatile Memories: A Comprehensive Survey
- Authors: Zakia Tamanna Tisha, Ujjwal Guin
- Abstract summary: This paper examines the security aspects of eNVMs by discussing the reasons for vulnerabilities in specific memories from an architectural point of view. The paper also explores a broad spectrum of security threats to eNVMs, including physical attacks and logical threats like information leakage, denial-of-service, and thermal attacks.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The modern semiconductor industry requires memory solutions that can keep pace with the high-speed demands of high-performance computing. Embedded non-volatile memories (eNVMs) address these requirements by offering faster access to stored data at an improved computational throughput and efficiency. Furthermore, these technologies offer numerous appealing features, including limited area-energy-runtime budget and data retention capabilities. Among these, the data retention feature of eNVMs has garnered particular interest within the semiconductor community. Although this property allows eNVMs to retain data even in the absence of a continuous power supply, it also introduces some vulnerabilities, prompting security concerns. These concerns have sparked increased interest in examining the broader security implications associated with eNVM technologies. This paper examines the security aspects of eNVMs by discussing the reasons for vulnerabilities in specific memories from an architectural point of view. Additionally, this paper extensively reviews eNVM-based security primitives, such as physically unclonable functions and true random number generators, as well as techniques like logic obfuscation. The paper also explores a broad spectrum of security threats to eNVMs, including physical attacks such as side-channel attacks, fault injection, and probing, as well as logical threats like information leakage, denial-of-service, and thermal attacks. Finally, the paper presents a study of publication trends in the eNVM domain since the early 2000s, reflecting the rising momentum and research activity in this field.
Related papers
- AI-Powered Anomaly Detection with Blockchain for Real-Time Security and Reliability in Autonomous Vehicles [1.1797787239802762]
We develop a new framework that combines the power of Artificial Intelligence (AI) for real-time anomaly detection with blockchain technology to detect and prevent any malicious activity. This framework employs a decentralized platform for securely storing sensor data and anomaly alerts in a blockchain ledger for data incorruptibility and authenticity. This makes the AV system more resilient to attacks from both cyberspace and hardware component failure.
arXiv Detail & Related papers (2025-05-10T12:53:28Z) - A Survey of Model Extraction Attacks and Defenses in Distributed Computing Environments [55.60375624503877]
Model Extraction Attacks (MEAs) threaten modern machine learning systems by enabling adversaries to steal models, exposing intellectual property and training data. This survey is motivated by the urgent need to understand how the unique characteristics of cloud, edge, and federated deployments shape attack vectors and defense requirements. We systematically examine the evolution of attack methodologies and defense mechanisms across these environments, demonstrating how environmental factors influence security strategies in critical sectors such as autonomous vehicles, healthcare, and financial services.
arXiv Detail & Related papers (2025-02-22T03:46:50Z) - Open Problems in Machine Unlearning for AI Safety [61.43515658834902]
Machine unlearning -- the ability to selectively forget or suppress specific types of knowledge -- has shown promise for privacy and data removal tasks. In this paper, we identify key limitations that prevent unlearning from serving as a comprehensive solution for AI safety.
arXiv Detail & Related papers (2025-01-09T03:59:10Z) - VMGuard: Reputation-Based Incentive Mechanism for Poisoning Attack Detection in Vehicular Metaverse [52.57251742991769]
The vehicular Metaverse guard (VMGuard) protects vehicular Metaverse systems from data poisoning attacks. VMGuard implements a reputation-based incentive mechanism to assess the trustworthiness of participating SIoT devices. Our system ensures that reliable SIoT devices, previously misclassified, are not barred from participating in future rounds of the market.
arXiv Detail & Related papers (2024-12-05T17:08:20Z) - A Survey of Unikernel Security: Insights and Trends from a Quantitative Analysis [0.0]
This research presents a quantitative methodology using TF-IDF to analyze the focus of security discussions within unikernel research literature.
Memory Protection Extensions and Data Execution Prevention were the least frequently occurring topics, while SGX was the most frequent topic.
arXiv Detail & Related papers (2024-06-04T00:51:12Z) - HW-V2W-Map: Hardware Vulnerability to Weakness Mapping Framework for Root Cause Analysis with GPT-assisted Mitigation Suggestion [3.847218857469107]
We present the HW-V2W-Map Framework, a Machine Learning (ML) framework focusing on hardware vulnerabilities and Internet of Things (IoT) security.
The architecture that we have proposed incorporates an Ontology-driven Storytelling framework, which automates the process of updating the Ontology.
Our proposed framework utilizes Generative Pre-trained Transformer (GPT) Large Language Models (LLMs) to provide mitigation suggestions.
arXiv Detail & Related papers (2023-12-21T02:14:41Z) - Scaling #DNN-Verification Tools with Efficient Bound Propagation and Parallel Computing [57.49021927832259]
Deep Neural Networks (DNNs) are powerful tools that have shown extraordinary results in many scenarios.
However, their intricate designs and lack of transparency raise safety concerns when applied in real-world applications.
Formal Verification (FV) of DNNs has emerged as a valuable solution to provide provable guarantees on the safety aspect.
arXiv Detail & Related papers (2023-12-10T13:51:25Z) - Edge Intelligence Over the Air: Two Faces of Interference in Federated Learning [95.31679010587473]
Federated edge learning is envisioned as the bedrock of enabling intelligence in next-generation wireless networks.
This article provides a comprehensive overview of the positive and negative effects of interference on over-the-air-based edge learning systems.
arXiv Detail & Related papers (2023-06-17T09:04:48Z) - Physical Side-Channel Attacks on Embedded Neural Networks: A Survey [0.32634122554913997]
Neural Networks (NN) are expected to become ubiquitous in IoT systems by transforming all sorts of real-world applications.
However, embedded NN implementations are vulnerable to Side-Channel Analysis (SCA) attacks.
This paper surveys state-of-the-art physical SCA attacks relative to the implementation of embedded NNs on micro-controllers and FPGAs.
arXiv Detail & Related papers (2021-10-21T17:18:52Z) - Towards Energy-Efficient and Secure Edge AI: A Cross-Layer Framework [13.573645522781712]
Deep neural networks (DNNs) and spiking neural networks (SNNs) offer state-of-the-art results on resource-constrained edge devices.
These systems are required to maintain correct functionality under diverse security and reliability threats.
This paper first discusses existing approaches to address energy efficiency, reliability, and security issues at different system layers.
arXiv Detail & Related papers (2021-09-20T20:22:56Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.