Execution at RISC: Stealth JOP Attacks on RISC-V Applications
- URL: http://arxiv.org/abs/2307.12648v1
- Date: Mon, 24 Jul 2023 09:39:21 GMT
- Title: Execution at RISC: Stealth JOP Attacks on RISC-V Applications
- Authors: Loïc Buckwell, Olivier Gilles, Daniel Gracia Pérez and Nikolai Kosmatov
- Abstract summary: This paper demonstrates that RISC-V is susceptible to Jump-Oriented Programming (JOP), a class of complex code-reuse attacks.
A proof-of-concept attack is implemented on an embedded web server compiled for RISC-V.
- Score: 0.5543867614999909
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: RISC-V is a recently developed open instruction set architecture
that is gaining a lot of attention. To achieve lasting security on these
systems and to design efficient countermeasures, a better understanding of
their vulnerability to novel and potential future attacks is mandatory. This
paper demonstrates that RISC-V is susceptible to Jump-Oriented Programming
(JOP), a class of complex code-reuse attacks. We provide an analysis of new
dispatcher gadgets we discovered, and show how they can be combined to build
a stealthy attack that bypasses existing protections. A proof-of-concept
attack is implemented on an embedded web server compiled for RISC-V, into
which we introduced a vulnerability that allows an attacker to remotely read
an arbitrary file from the host machine.
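To make the attack pattern concrete, here is a minimal C analogue of a JOP dispatcher, under stated assumptions: in a real exploit the table entries point at instruction sequences (gadgets) already present in the RISC-V binary and the table sits in attacker-controlled memory; the `gadget_*` functions below are invented stand-ins, not the dispatcher gadgets analyzed in the paper.

```c
#include <stdio.h>

/* Illustrative sketch only: a JOP "dispatcher" walks a table of code
 * addresses and transfers control to each entry in turn. In a real
 * attack the table lives in attacker-controlled memory and each entry
 * points into existing code; here ordinary functions stand in for
 * gadgets, and an indirect call stands in for the dispatcher's
 * indirect jump. */

typedef void (*gadget_t)(void);

static void gadget_open(void) { puts("gadget: open target file"); }
static void gadget_read(void) { puts("gadget: read its contents"); }
static void gadget_send(void) { puts("gadget: write them to the socket"); }

int main(void) {
    /* Attacker-supplied "payload": a sequence of gadget addresses,
     * not injected code. */
    gadget_t table[] = { gadget_open, gadget_read, gadget_send, NULL };

    /* Dispatcher loop: load the next entry, advance, jump. Control
     * flow never goes through a return address, so checks aimed at
     * the return path (as in ROP defenses) see nothing unusual. */
    for (gadget_t *pc = table; *pc != NULL; ++pc)
        (*pc)();
    return 0;
}
```

Because control advances through indirect jumps rather than returns, defenses that validate return addresses (shadow stacks, ROP-oriented CFI checks) do not necessarily trigger, which is what makes the JOP variant attractive for stealth.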
Related papers
- The Hidden Dangers of Browsing AI Agents [0.0]
This paper presents a comprehensive security evaluation of such agents, focusing on systemic vulnerabilities across multiple architectural layers. Our work outlines the first end-to-end threat model for browsing agents and provides actionable guidance for securing their deployment in real-world environments.
arXiv Detail & Related papers (2025-05-19T13:10:29Z) - LlamaFirewall: An open source guardrail system for building secure AI agents [0.5603362829699733]
Large language models (LLMs) have evolved from simple chatbots into autonomous agents capable of performing complex tasks. Given the higher stakes and the absence of deterministic solutions to mitigate these risks, there is a critical need for a real-time guardrail monitor. We introduce LlamaFirewall, an open-source, security-focused guardrail framework.
arXiv Detail & Related papers (2025-05-06T14:34:21Z) - DoomArena: A framework for Testing AI Agents Against Evolving Security Threats [84.94654617852322]
We present DoomArena, a security evaluation framework for AI agents.
It is a plug-in framework and integrates easily into realistic agentic frameworks.
It is modular and decouples the development of attacks from details of the environment in which the agent is deployed.
arXiv Detail & Related papers (2025-04-18T20:36:10Z) - Tit-for-Tat: Safeguarding Large Vision-Language Models Against Jailbreak Attacks via Adversarial Defense [90.71884758066042]
Large vision-language models (LVLMs) introduce a unique vulnerability: susceptibility to malicious attacks via visual inputs.
We propose ESIII (Embedding Security Instructions Into Images), a novel methodology for transforming the visual space from a source of vulnerability into an active defense mechanism.
arXiv Detail & Related papers (2025-03-14T17:39:45Z) - Commercial LLM Agents Are Already Vulnerable to Simple Yet Dangerous Attacks [88.84977282952602]
A high volume of recent ML security literature focuses on attacks against aligned large language models (LLMs).
In this paper, we analyze security and privacy vulnerabilities that are unique to LLM agents.
We conduct a series of illustrative attacks on popular open-source and commercial agents, demonstrating the immediate practical implications of their vulnerabilities.
arXiv Detail & Related papers (2025-02-12T17:19:36Z) - HijackRAG: Hijacking Attacks against Retrieval-Augmented Large Language Models [18.301965456681764]
We reveal a novel vulnerability, the retrieval prompt hijack attack (HijackRAG).
HijackRAG enables attackers to manipulate the retrieval mechanisms of RAG systems by injecting malicious texts into the knowledge database.
We propose both black-box and white-box attack strategies tailored to different levels of the attacker's knowledge.
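As a toy illustration of why injection works, the C sketch below ranks a small corpus by cosine similarity to a query embedding; all vectors here are made up, not from the paper. An injected document whose embedding mirrors the anticipated query wins the ranking and is handed to the generator.

```c
#include <math.h>
#include <stdio.h>

#define DIM 3

/* Cosine similarity between two toy embedding vectors. */
static double cosine(const double *a, const double *b) {
    double dot = 0, na = 0, nb = 0;
    for (int i = 0; i < DIM; ++i) {
        dot += a[i] * b[i];
        na  += a[i] * a[i];
        nb  += b[i] * b[i];
    }
    return dot / (sqrt(na) * sqrt(nb));
}

int main(void) {
    double query[DIM] = { 0.9, 0.1, 0.2 };   /* anticipated target query */
    double corpus[3][DIM] = {
        { 0.7, 0.5, 0.1 },                    /* legitimate document      */
        { 0.2, 0.8, 0.3 },                    /* legitimate document      */
        { 0.9, 0.1, 0.2 },                    /* injected: crafted to     */
    };                                        /* mirror the query         */
    int best = 0;
    for (int i = 1; i < 3; ++i)
        if (cosine(query, corpus[i]) > cosine(query, corpus[best]))
            best = i;
    printf("retrieved doc index: %d (2 = attacker's)\n", best);
    return 0;
}
```

Compile with `-lm`. In a real RAG system the embeddings come from a learned encoder, so the attacker crafts text rather than vectors, but the subverted ranking step is the same.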
arXiv Detail & Related papers (2024-10-30T09:15:51Z) - Rethinking the Vulnerabilities of Face Recognition Systems: From a Practical Perspective [53.24281798458074]
Face Recognition Systems (FRS) have been increasingly integrated into critical applications, including surveillance and user authentication.
Recent studies have revealed vulnerabilities in FRS to adversarial attacks (e.g., adversarial patch attacks) and backdoor attacks (e.g., training data poisoning).
arXiv Detail & Related papers (2024-05-21T13:34:23Z) - Securing the Open RAN Infrastructure: Exploring Vulnerabilities in Kubernetes Deployments [60.51751612363882]
We investigate the security implications of software-based Open Radio Access Network (RAN) systems.
We highlight the presence of potential vulnerabilities and misconfigurations in the infrastructure supporting the Near Real-Time RAN Intelligent Controller (RIC) cluster.
arXiv Detail & Related papers (2024-05-03T07:18:45Z) - Not what you've signed up for: Compromising Real-World LLM-Integrated
Applications with Indirect Prompt Injection [64.67495502772866]
Large Language Models (LLMs) are increasingly being integrated into various applications.
We show how attackers can use Prompt Injection attacks to override original instructions and bypass deployed controls.
We derive a comprehensive taxonomy from a computer security perspective to systematically investigate impacts and vulnerabilities.
arXiv Detail & Related papers (2023-02-23T17:14:38Z) - Looking Beyond IoCs: Automatically Extracting Attack Patterns from
External CTI [3.871148938060281]
LADDER is a framework that can extract text-based attack patterns from cyberthreat intelligence reports at scale.
We present several use cases to demonstrate the application of LADDER in real-world scenarios.
arXiv Detail & Related papers (2022-11-01T12:16:30Z) - Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the
Age of AI-NIDS [70.60975663021952]
We study black-box adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z) - The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z) - GRAVITAS: Graphical Reticulated Attack Vectors for Internet-of-Things
Aggregate Security [5.918387680589584]
Internet-of-Things (IoT) and cyber-physical systems (CPSs) may consist of thousands of devices connected in a complex network topology.
We describe a comprehensive risk management system, called GRAVITAS, for IoT/CPS that can identify undiscovered attack vectors.
arXiv Detail & Related papers (2021-05-31T19:35:23Z) - SHARKS: Smart Hacking Approaches for RisK Scanning in Internet-of-Things
and Cyber-Physical Systems based on Machine Learning [5.265938973293016]
Cyber-physical systems (CPS) and Internet-of-Things (IoT) devices are increasingly being deployed across multiple functionalities.
These devices are inherently not secure across their comprehensive software, hardware, and network stacks.
We present an innovative technique for detecting unknown system vulnerabilities, managing these vulnerabilities, and improving incident response.
arXiv Detail & Related papers (2021-01-07T22:01:30Z) - Unleashing the Tiger: Inference Attacks on Split Learning [2.492607582091531]
We introduce general attack strategies targeting the reconstruction of clients' private training sets.
A malicious server can actively hijack the learning process of the distributed model.
We demonstrate our attack is able to overcome recently proposed defensive techniques.
arXiv Detail & Related papers (2020-12-04T15:41:00Z) - Adversarial EXEmples: A Survey and Experimental Evaluation of Practical
Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine-learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
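As a rough sketch of the Full DOS idea: the Windows loader only consults the 'MZ' magic and the four-byte e_lfanew field at offset 0x3c of the DOS header (which locates the PE header), so the remaining header bytes are writable slack for an adversarial perturbation. Offsets below follow the documented IMAGE_DOS_HEADER layout; the payload and helper are hypothetical, not the paper's implementation.

```c
#include <stdint.h>
#include <stdio.h>

enum { E_MAGIC_LEN = 2, E_LFANEW_OFF = 0x3c, DOS_HEADER_LEN = 0x40 };

/* Overwrite the loader-ignored bytes of a DOS header with payload
 * bytes, preserving the 'MZ' magic and the e_lfanew pointer. */
static void fill_dos_slack(uint8_t *exe, const uint8_t *payload, size_t n) {
    size_t p = 0;
    for (size_t off = E_MAGIC_LEN; off < DOS_HEADER_LEN && p < n; ++off) {
        if (off >= E_LFANEW_OFF && off < E_LFANEW_OFF + 4)
            continue;                 /* keep the PE header pointer intact */
        exe[off] = payload[p++];      /* loader ignores these bytes */
    }
}

int main(void) {
    uint8_t exe[DOS_HEADER_LEN] = { 'M', 'Z' };   /* toy header only */
    const uint8_t payload[] = "ADV";              /* stand-in perturbation */
    fill_dos_slack(exe, payload, sizeof payload - 1);
    printf("byte at 0x02: 0x%02x\n", exe[2]);
    return 0;
}
```

Extend and Shift enlarge that slack instead, by growing the header region or displacing the content of the first section.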
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information (including all content) and is not responsible for any consequences of its use.