The Cost of Performance: Breaking ThreadX with Kernel Object Masquerading Attacks
- URL: http://arxiv.org/abs/2504.19486v1
- Date: Mon, 28 Apr 2025 05:01:35 GMT
- Title: The Cost of Performance: Breaking ThreadX with Kernel Object Masquerading Attacks
- Authors: Xinhui Shao, Zhen Ling, Yue Zhang, Huaiyu Yan, Yumeng Wei, Lan Luo, Zixia Liu, Junzhou Luo, Xinwen Fu
- Abstract summary: We show that popular real-time operating systems (RTOSs) lack essential security protections. We identify a performance optimization practice in ThreadX that introduces security vulnerabilities, allowing for the circumvention of parameter sanitization processes. We introduce an automated approach involving under-constrained symbolic execution to identify the Kernel Object Masquerading (KOM) Attack.
- Score: 16.54210795506388
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Microcontroller-based IoT devices often use embedded real-time operating systems (RTOSs). Vulnerabilities in these embedded RTOSs can lead to compromise of the devices that run them. Despite the significance of security protections, the absence of standardized security guidelines results in varying levels of security risk across RTOS implementations. Our initial analysis reveals that popular RTOSs such as FreeRTOS lack essential security protections. While Zephyr OS and ThreadX are designed and implemented with essential security protections, our closer examination uncovers significant differences in their implementations of system call parameter sanitization. We identify a performance optimization practice in ThreadX that introduces security vulnerabilities, allowing for the circumvention of parameter sanitization processes. Leveraging this insight, we introduce a novel attack named the Kernel Object Masquerading (KOM) Attack, so named because the attacker must manipulate one or more kernel objects through carefully selected system calls to launch it. We demonstrate how attackers can exploit these vulnerabilities to access sensitive fields within kernel objects, potentially leading to unauthorized data manipulation, privilege escalation, or system compromise. We introduce an automated approach involving under-constrained symbolic execution to identify KOM attacks and understand their implications. Experimental results demonstrate the feasibility of KOM attacks on ThreadX-powered platforms. We reported our findings to the vendors, who recognized the vulnerabilities, with Amazon and Microsoft acknowledging our contribution on their websites.
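For intuition, the minimal C sketch below illustrates the class of flaw the abstract describes: a system call path that, for performance, verifies only that an object pointer falls within an accepted address range and skips the object-type check, so a kernel object of one type can masquerade as another and a write intended for a benign field lands on a sensitive one. All structure names, function names, and magic values here are hypothetical; this is not ThreadX source code, only a sketch of the bug pattern.

```c
#include <stdint.h>
#include <stdio.h>

#define KOBJ_THREAD_MAGIC 0x54485244u   /* "THRD", hypothetical type tag */
#define KOBJ_QUEUE_MAGIC  0x51554555u   /* "QUEU", hypothetical type tag */

typedef struct {                /* hypothetical thread control block      */
    uint32_t magic;
    uint32_t privilege;         /* sensitive: 0 = user, 1 = privileged    */
} ko_thread_t;

typedef struct {                /* hypothetical message-queue object      */
    uint32_t magic;
    uint32_t capacity;          /* benign field at the same offset        */
} ko_queue_t;

/* Stand-in for the check the kernel keeps even on the fast path:
 * is the pointer inside memory the caller may pass to the kernel?        */
static int addr_range_ok(const void *p) { (void)p; return 1; }

/* Fully sanitized system call: address range AND object type checked.    */
static int queue_set_capacity_checked(void *obj, uint32_t cap) {
    ko_queue_t *q = (ko_queue_t *)obj;
    if (!addr_range_ok(q) || q->magic != KOBJ_QUEUE_MAGIC)
        return -1;                       /* masquerading object rejected   */
    q->capacity = cap;
    return 0;
}

/* "Optimized" system call: the type check is skipped to save cycles,
 * so anything that passes the range check is treated as a queue.         */
static int queue_set_capacity_fast(void *obj, uint32_t cap) {
    ko_queue_t *q = (ko_queue_t *)obj;
    if (!addr_range_ok(q))
        return -1;
    q->capacity = cap;                   /* may alias a sensitive field    */
    return 0;
}

int main(void) {
    ko_thread_t victim = { KOBJ_THREAD_MAGIC, 0 };   /* unprivileged thread */

    /* Attacker hands the thread object to a queue system call.           */
    int r_checked = queue_set_capacity_checked(&victim, 1);
    int r_fast    = queue_set_capacity_fast(&victim, 1);

    printf("checked path: %d, fast path: %d, privilege now: %u\n",
           r_checked, r_fast, (unsigned)victim.privilege);   /* -1, 0, 1  */
    return 0;
}
```

In the checked path the mismatched type tag is rejected; in the optimized path the same pointer is accepted and the write aliases the thread object's privilege field, mirroring the unauthorized data manipulation and privilege escalation the abstract mentions. The real ThreadX optimization, object layouts, and sanitization logic differ; the sketch only conveys the shape of the problem.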
Related papers
- Output Constraints as Attack Surface: Exploiting Structured Generation to Bypass LLM Safety Mechanisms [0.9091225937132784]
We reveal a critical control-plane attack surface, distinct from traditional data-plane vulnerabilities.
We introduce Constrained Decoding Attack, a novel jailbreak class that weaponizes structured output constraints to bypass safety mechanisms.
Our findings identify a critical security blind spot in current LLM architectures and urge a paradigm shift in LLM safety to address control-plane vulnerabilities.
arXiv Detail & Related papers (2025-03-31T15:08:06Z) - CRAFT: Characterizing and Root-Causing Fault Injection Threats at Pre-Silicon [4.83186491286234]
This work presents a comprehensive methodology for conducting controlled fault injection attacks at the pre-silicon level. As the driving application, we use clock glitch attacks in AI/ML applications for critical misclassification.
arXiv Detail & Related papers (2025-03-05T20:17:46Z) - VMGuard: Reputation-Based Incentive Mechanism for Poisoning Attack Detection in Vehicular Metaverse [52.57251742991769]
The vehicular Metaverse guard (VMGuard) protects vehicular Metaverse systems from data poisoning attacks. VMGuard implements a reputation-based incentive mechanism to assess the trustworthiness of participating SIoT devices. Our system ensures that reliable SIoT devices, previously misclassified, are not barred from participating in future rounds of the market.
arXiv Detail & Related papers (2024-12-05T17:08:20Z) - Lost and Found in Speculation: Hybrid Speculative Vulnerability Detection [15.258238125090667]
We introduce Specure, a novel pre-silicon verification method that combines hardware fuzzing with Information Flow Tracking (IFT) to address speculative execution leakages.
Specure identifies previously overlooked speculative execution vulnerabilities on the RISC-V BOOM processor and explores the vulnerability search space 6.45x faster than existing fuzzing techniques.
arXiv Detail & Related papers (2024-10-29T21:42:06Z) - Security Testbed for Preempting Attacks against Supercomputing Infrastructure [1.9097277955963794]
This paper describes a security testbed embedded in live traffic of a supercomputer at the National Center for Supercomputing Applications.
The objective is to demonstrate attack preemption, i.e., stopping system compromise and data breaches at petascale supercomputers.
arXiv Detail & Related papers (2024-09-15T03:42:47Z) - Securing the Open RAN Infrastructure: Exploring Vulnerabilities in Kubernetes Deployments [60.51751612363882]
We investigate the security implications of software-based Open Radio Access Network (RAN) systems.
We highlight the presence of potential vulnerabilities and misconfigurations in the infrastructure supporting the Near Real-Time RAN Intelligent Controller (RIC) cluster.
arXiv Detail & Related papers (2024-05-03T07:18:45Z) - Fight Hardware with Hardware: System-wide Detection and Mitigation of Side-Channel Attacks using Performance Counters [45.493130647468675]
We present a kernel-level infrastructure that allows system-wide detection of malicious applications attempting to exploit cache-based side-channel attacks.
This infrastructure relies on hardware performance counters to collect information at runtime from all applications running on the machine.
High-level detection metrics are derived from these measurements to maximize the likelihood of promptly detecting a malicious application.
arXiv Detail & Related papers (2024-02-18T15:45:38Z) - Is Your Kettle Smarter Than a Hacker? A Scalable Tool for Assessing Replay Attack Vulnerabilities on Consumer IoT Devices [1.5612101323427952]
ENISA and NIST security guidelines emphasize the importance of enabling default local communication for safety and reliability.
We propose a tool, named REPLIOT, able to test whether a replay attack is successful or not, without prior knowledge of the target devices.
We find that 75% of the remaining devices are vulnerable to replay attacks with REPLIOT having a detection accuracy of 0.98-1.
arXiv Detail & Related papers (2024-01-22T18:24:41Z) - Benchmarking and Defending Against Indirect Prompt Injection Attacks on Large Language Models [79.0183835295533]
We introduce the first benchmark for indirect prompt injection attacks, named BIPIA, to assess the risk of such vulnerabilities. Our analysis identifies two key factors contributing to their success: LLMs' inability to distinguish between informational context and actionable instructions, and their lack of awareness in avoiding the execution of instructions within external content. We propose two novel defense mechanisms, boundary awareness and explicit reminder, to address these vulnerabilities in both black-box and white-box settings.
arXiv Detail & Related papers (2023-12-21T01:08:39Z) - DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
arXiv Detail & Related papers (2023-03-20T17:25:22Z) - Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection [64.67495502772866]
Large Language Models (LLMs) are increasingly being integrated into various applications.
We show how attackers can override an application's original instructions and controls using Prompt Injection attacks.
We derive a comprehensive taxonomy from a computer security perspective to systematically investigate impacts and vulnerabilities.
arXiv Detail & Related papers (2023-02-23T17:14:38Z) - Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.