Do Not Trust Power Management: A Survey on Internal Energy-based Attacks Circumventing Trusted Execution Environments Security Properties
- URL: http://arxiv.org/abs/2405.15537v3
- Date: Fri, 04 Oct 2024 12:47:26 GMT
- Title: Do Not Trust Power Management: A Survey on Internal Energy-based Attacks Circumventing Trusted Execution Environments Security Properties
- Authors: Gwenn Le Gonidec, Maria Méndez Real, Guillaume Bouffard, Jean-Christophe Prévotet
- Abstract summary: Since 2015, a new class of software-enabled hardware attacks leveraging energy management mechanisms has emerged.
Their aim is to bypass TEE security guarantees and expose sensitive information such as cryptographic keys.
This article presents the first comprehensive knowledge survey of these attacks, along with an evaluation of literature countermeasures.
- Abstract: Over the past few years, several research groups have introduced innovative hardware designs for Trusted Execution Environments (TEEs), aiming to secure applications against potentially compromised privileged software, including the kernel. Since 2015, a new class of software-enabled hardware attacks leveraging energy management mechanisms has emerged. These internal energy-based attacks comprise fault, side-channel, and covert-channel attacks. Their aim is to bypass TEE security guarantees and expose sensitive information such as cryptographic keys. They have increased in prevalence in the past few years. Popular TEE implementations, such as ARM TrustZone and Intel SGX, incorporate countermeasures against these attacks. However, these countermeasures either hinder the capabilities of the power management mechanisms or have been shown to provide insufficient system protection. This article presents the first comprehensive knowledge survey of these attacks, along with an evaluation of literature countermeasures. We believe that this study will spur further community efforts toward this increasingly important class of attacks.
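To make the attack surface concrete: one well-studied internal energy channel is the cumulative energy counter that Intel RAPL exposes through the Linux powercap sysfs interface (`energy_uj`). The sketch below shows how software could sample such a counter around a victim operation; the default wraparound range is an illustrative assumption (the real range is platform-specific and published in `max_energy_range_uj`), and on patched kernels the file is no longer readable by unprivileged users.

```python
# Hedged sketch of sampling a RAPL-style cumulative energy counter.
# The wraparound range below is an assumption; real platforms report
# theirs in max_energy_range_uj.

RAPL_COUNTER_PATH = "/sys/class/powercap/intel-rapl:0/energy_uj"


def read_energy_uj(path: str = RAPL_COUNTER_PATH) -> int:
    """Read the current cumulative energy value, in microjoules."""
    with open(path) as f:
        return int(f.read().strip())


def energy_delta_uj(before: int, after: int,
                    max_range_uj: int = 2**32) -> int:
    """Energy consumed between two samples, handling counter wraparound.

    RAPL-style counters are free-running and wrap at a platform-specific
    maximum, so a naive `after - before` can go negative.
    """
    if after >= before:
        return after - before
    return (max_range_uj - before) + after
```

An attacker would sample before and after a victim computation and correlate the deltas with secret-dependent behavior; this is the measurement primitive behind attacks such as PLATYPUS, and the reason recent kernels restrict `energy_uj` to privileged readers.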
Related papers
- Smart Grid Security: A Verified Deep Reinforcement Learning Framework to Counter Cyber-Physical Attacks [2.159496955301211]
Smart grids are vulnerable to strategically crafted cyber-physical attacks.
Malicious attacks can manipulate power demands using high-wattage Internet of Things (IoT) botnet devices.
Grid operators overlook potential scenarios of cyber-physical attacks during their design phase.
We propose a safe Deep Reinforcement Learning (DRL)-based framework for mitigating attacks on smart grids.
arXiv Detail & Related papers (2024-09-24T05:26:20Z) - Rethinking the Vulnerabilities of Face Recognition Systems:From a Practical Perspective [53.24281798458074]
Face Recognition Systems (FRS) have increasingly integrated into critical applications, including surveillance and user authentication.
Recent studies have revealed vulnerabilities in FRS to adversarial (e.g., adversarial patch attacks) and backdoor attacks (e.g., training data poisoning)
arXiv Detail & Related papers (2024-05-21T13:34:23Z) - Evaluation Framework for Quantum Security Risk Assessment: A Comprehensive Study for Quantum-Safe Migration [0.03749861135832072]
The rise of large-scale quantum computing poses a significant threat to traditional cryptographic security measures.
Quantum attacks undermine current asymmetric cryptographic algorithms, rendering them ineffective.
This study explores the challenges of migrating to quantum-safe cryptographic states.
arXiv Detail & Related papers (2024-04-12T04:18:58Z) - A Zero Trust Framework for Realization and Defense Against Generative AI Attacks in Power Grid [62.91192307098067]
This paper proposes a novel zero trust framework for a power grid supply chain (PGSC)
It facilitates early detection of potential GenAI-driven attack vectors, assessment of tail risk-based stability measures, and mitigation of such threats.
Experimental results show that the proposed zero trust framework achieves an accuracy of 95.7% on attack vector generation, a risk measure of 9.61% for a 95% stable PGSC, and a 99% confidence in defense against GenAI-driven attacks.
arXiv Detail & Related papers (2024-03-11T02:47:21Z) - A Survey and Comparative Analysis of Security Properties of CAN Authentication Protocols [92.81385447582882]
The design of the Controller Area Network (CAN) bus leaves in-vehicle communications inherently insecure.
This paper reviews and compares the 15 most prominent authentication protocols for the CAN bus.
We evaluate protocols based on essential operational criteria that contribute to ease of implementation.
arXiv Detail & Related papers (2024-01-19T14:52:04Z) - Classification of cyber attacks on IoT and ubiquitous computing devices [49.1574468325115]
This paper provides a classification of IoT malware.
Major attack targets and the exploits used against them are identified and linked to the specific malware families involved.
The majority of current IoT attacks continue to be of comparably low effort and level of sophistication and could be mitigated by existing technical measures.
arXiv Detail & Related papers (2023-12-01T16:10:43Z) - Trust-based Approaches Towards Enhancing IoT Security: A Systematic Literature Review [3.0969632359049473]
This research paper presents a systematic literature review of trust-based cybersecurity approaches for IoT.
We highlight the common trust-based mitigation techniques in existence for dealing with IoT security threats.
Several open issues are highlighted, and future research directions are presented.
arXiv Detail & Related papers (2023-11-20T12:21:35Z) - Generalized Power Attacks against Crypto Hardware using Long-Range Deep Learning [6.409047279789011]
GPAM is a deep-learning system for power side-channel analysis.
It generalizes across multiple cryptographic algorithms, implementations, and side-channel countermeasures.
We demonstrate GPAM's capability by successfully attacking four hardened hardware-accelerated elliptic-curve digital-signature implementations.
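For readers unfamiliar with power side-channel analysis, the classical correlation power analysis (CPA) baseline that learning-based systems such as GPAM aim to surpass can be sketched in a few lines. Everything below is an illustrative assumption, not GPAM itself: a Hamming-weight leakage model, an identity S-box standing in for a real cipher's S-box, and synthetic noise-free traces.

```python
# Hedged sketch of classical correlation power analysis (CPA):
# rank candidate key bytes by how well a Hamming-weight leakage
# model of sbox(plaintext XOR guess) correlates with measured power.


def hamming_weight(x: int) -> int:
    """Number of set bits; a common proxy for data-dependent power draw."""
    return bin(x).count("1")


def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)


def rank_key_guesses(plaintexts, traces, sbox):
    """Score all 256 key-byte guesses against the measured traces.

    Assumes leakage positively correlated with Hamming weight; returns
    (correlation, guess) pairs, best guess first.
    """
    scores = []
    for guess in range(256):
        model = [hamming_weight(sbox[p ^ guess]) for p in plaintexts]
        scores.append((pearson(model, traces), guess))
    return sorted(scores, reverse=True)
```

Systems like GPAM replace this hand-crafted leakage model with a learned one, which is what lets them transfer across algorithms, implementations, and countermeasures that defeat fixed-model CPA.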
arXiv Detail & Related papers (2023-06-12T17:16:26Z) - SHARKS: Smart Hacking Approaches for RisK Scanning in Internet-of-Things and Cyber-Physical Systems based on Machine Learning [5.265938973293016]
Cyber-physical systems (CPS) and Internet-of-Things (IoT) devices are increasingly being deployed across multiple functionalities.
These devices are inherently not secure across their comprehensive software, hardware, and network stacks.
We present an innovative technique for detecting unknown system vulnerabilities, managing these vulnerabilities, and improving incident response.
arXiv Detail & Related papers (2021-01-07T22:01:30Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z) - Challenges and Countermeasures for Adversarial Attacks on Deep Reinforcement Learning [48.49658986576776]
Deep Reinforcement Learning (DRL) has numerous applications in the real world thanks to its outstanding ability in adapting to the surrounding environments.
Despite its great advantages, DRL is susceptible to adversarial attacks, which precludes its use in real-life critical systems and applications.
This paper presents emerging attacks in DRL-based systems and the potential countermeasures to defend against these attacks.
arXiv Detail & Related papers (2020-01-27T10:53:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.