SAFARI: a Scalable Air-gapped Framework for Automated Ransomware Investigation
- URL: http://arxiv.org/abs/2504.07868v1
- Date: Thu, 10 Apr 2025 15:44:13 GMT
- Title: SAFARI: a Scalable Air-gapped Framework for Automated Ransomware Investigation
- Authors: Tommaso Compagnucci, Franco Callegati, Saverio Giallorenzo, Andrea Melis, Simone Melloni, Alessandro Vannini
- Abstract summary: SAFARI is an open-source framework designed for safe and efficient ransomware analysis. We demonstrate SAFARI's capabilities by building a proof-of-concept implementation and using it to run two case studies.
- Score: 37.762832978020896
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ransomware poses a significant threat to individuals and organisations, creating a pressing need for tools to investigate its behaviour and the effectiveness of mitigations. To answer this need, we present SAFARI, an open-source framework designed for safe and efficient ransomware analysis. SAFARI's design emphasises scalability, air-gapped security, and automation, democratising access to safe ransomware investigation tools and fostering collaborative efforts. SAFARI leverages virtualisation, Infrastructure-as-Code, and OS-agnostic task automation to create isolated environments for controlled ransomware execution and analysis. The framework enables researchers to profile ransomware behaviour and evaluate mitigation strategies through automated, reproducible experiments. We demonstrate SAFARI's capabilities by building a proof-of-concept implementation and using it to run two case studies. The first analyses five renowned ransomware strains (including WannaCry and LockBit) to identify their encryption patterns and file-targeting strategies. The second evaluates Ranflood, a countermeasure tool, which we use against three dangerous strains. Our results provide insights into ransomware behaviour and the effectiveness of countermeasures, showcasing SAFARI's potential to advance ransomware research and defence development.
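The abstract describes automated, reproducible detonations inside isolated virtual machines, driven by virtualisation, Infrastructure-as-Code, and task automation. As a rough illustration of that control flow only, here is a minimal Python sketch; the Experiment record and the restore/detonate/collect helpers are hypothetical placeholders, not SAFARI's actual API or tooling.

```python
"""Illustrative sketch of an automated ransomware-detonation loop.

All names below (Experiment, restore_snapshot, detonate, collect_artifacts)
are hypothetical placeholders; SAFARI's real interface may differ entirely.
"""
from dataclasses import dataclass
from pathlib import Path
import json


@dataclass
class Experiment:
    sample: Path          # ransomware binary to detonate
    vm_snapshot: str      # clean-state snapshot to restore before each run
    duration_s: int       # how long the sample is allowed to run
    output_dir: Path      # where collected artifacts are stored


def restore_snapshot(snapshot: str) -> None:
    # Placeholder: would ask the hypervisor / IaC tooling to roll the
    # air-gapped VM back to a known-clean state.
    print(f"[+] restoring snapshot {snapshot!r}")


def detonate(sample: Path, duration_s: int) -> None:
    # Placeholder: would copy the sample into the isolated VM, execute it,
    # and let it run for a fixed observation window.
    print(f"[+] detonating {sample.name} for {duration_s}s")


def collect_artifacts(output_dir: Path) -> dict:
    # Placeholder: would diff the VM filesystem against the clean snapshot
    # and export encryption/renaming statistics for later analysis.
    return {"files_encrypted": None, "extensions_targeted": None}


def run(experiments: list[Experiment]) -> None:
    for exp in experiments:
        restore_snapshot(exp.vm_snapshot)
        detonate(exp.sample, exp.duration_s)
        report = collect_artifacts(exp.output_dir)
        exp.output_dir.mkdir(parents=True, exist_ok=True)
        (exp.output_dir / f"{exp.sample.stem}.json").write_text(json.dumps(report))


if __name__ == "__main__":
    run([Experiment(Path("samples/wannacry.exe"), "clean-win10", 300, Path("results"))])
```

A real deployment would back the placeholders with hypervisor and provisioning calls; the reproducibility-relevant part is the fixed loop of restoring a clean snapshot, detonating, and collecting artifacts for each sample.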
Related papers
- Towards Trustworthy GUI Agents: A Survey [64.6445117343499]
This survey examines the trustworthiness of GUI agents along five critical dimensions.
We identify major challenges, such as vulnerability to adversarial attacks and cascading failure modes in sequential decision-making.
As GUI agents become more widespread, establishing robust safety standards and responsible development practices is essential.
arXiv Detail & Related papers (2025-03-30T13:26:00Z)
- Leveraging Reinforcement Learning in Red Teaming for Advanced Ransomware Attack Simulations [7.361316528368866]
This paper proposes a novel approach utilizing reinforcement learning (RL) to simulate ransomware attacks.
By training an RL agent in a simulated environment mirroring real-world networks, effective attack strategies can be learned quickly.
Experimental results on a 152-host example network confirm the effectiveness of the proposed approach.
arXiv Detail & Related papers (2024-06-25T14:16:40Z)
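The reinforcement-learning entry above hinges on an agent learning attack strategies by trial and error in a simulated network. The toy tabular Q-learning loop below illustrates that idea on an invented four-host topology; the environment, rewards, and hyper-parameters are assumptions, not the paper's 152-host setup.

```python
"""Toy tabular Q-learning loop in the spirit of RL-driven attack simulation.

Everything here (the topology, rewards, hyper-parameters) is an invented
illustration, not the environment or agent from the cited paper.
"""
import random
from collections import defaultdict

# Tiny network: which hosts are reachable from which (adjacency list).
TOPOLOGY = {
    "workstation": ["print-server", "dev-box"],
    "print-server": ["workstation", "dev-box"],
    "dev-box": ["workstation", "print-server", "file-server"],
    "file-server": [],  # goal: reaching it ends the episode
}
START, GOAL = "workstation", "file-server"
ALPHA, GAMMA, EPSILON, EPISODES = 0.1, 0.9, 0.2, 2000

Q = defaultdict(float)  # Q[(state, action)] -> expected return


def choose_action(state: str) -> str:
    # Epsilon-greedy over the lateral movements available from this host.
    actions = TOPOLOGY[state]
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])


for _ in range(EPISODES):
    state = START
    while state != GOAL:
        action = choose_action(state)   # try to pivot to a neighbouring host
        next_state = action             # deterministic toy dynamics
        reward = 1.0 if next_state == GOAL else -0.01
        best_next = max((Q[(next_state, a)] for a in TOPOLOGY[next_state]), default=0.0)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# Greedy policy after training: the learned pivot chain toward the target.
state, path = START, [START]
while state != GOAL:
    state = max(TOPOLOGY[state], key=lambda a: Q[(state, a)])
    path.append(state)
print(" -> ".join(path))
```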
- EGAN: Evolutional GAN for Ransomware Evasion [0.0]
Adversarial Training is a proven defense strategy against adversarial malware.
This work proposes an attack framework, EGAN, to address this limitation.
arXiv Detail & Related papers (2024-05-20T17:52:40Z)
- Ransomware Detection Dynamics: Insights and Implications [0.0]
This research investigates the utilization of a feature selection algorithm for distinguishing ransomware-related and benign transactions in Bitcoin (BTC) and United States Dollar (USD).
We propose a set of novel features designed to capture the distinct characteristics of ransomware activity within the cryptocurrency ecosystem.
Through rigorous experimentation and evaluation, we demonstrate the effectiveness of our feature set in accurately distinguishing ransomware-related BTC and USD transactions.
arXiv Detail & Related papers (2024-02-07T05:36:06Z)
- Ransomware Detection and Classification using Machine Learning [7.573297026523597]
This study uses the XGBoost and Random Forest (RF) algorithms to detect and classify ransomware attacks.
The models are evaluated on a dataset of ransomware attacks and demonstrate their effectiveness in accurately detecting and classifying ransomware.
arXiv Detail & Related papers (2023-11-05T18:16:53Z)
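The detection entry above trains XGBoost and Random Forest models on ransomware data. The sketch below shows the Random Forest half of that recipe on synthetic behavioural features; the feature names, data, and labels are invented for illustration, and XGBoost is omitted to keep the example dependency-light.

```python
"""Minimal Random Forest detection sketch on synthetic behavioural features.

The features and data below are assumptions for illustration only; the cited
study uses its own ransomware dataset and also evaluates XGBoost.
"""
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 1000

# Hypothetical per-process features: mean write entropy, files renamed per
# second, and number of distinct file extensions touched.
benign = np.column_stack([rng.normal(4.5, 1.0, n), rng.poisson(0.2, n), rng.poisson(2, n)])
ransom = np.column_stack([rng.normal(7.5, 0.5, n), rng.poisson(20, n), rng.poisson(40, n)])

X = np.vstack([benign, ransom])
y = np.array([0] * n + [1] * n)  # 0 = benign, 1 = ransomware

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), target_names=["benign", "ransomware"]))
```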
- Crypto-Ransomware and Their Defenses: In-depth Behavioral Characterization, Discussion of Deployability, and New Insights [5.994215456058968]
We review 117 published ransomware defense works, categorize them by the level at which they are implemented, and discuss their deployability.
To provide more insights, we quantitatively characterize the runtime behaviors of real-world ransomware samples.
Our findings help the field understand the deployability of ransomware defenses and create more effective, practical solutions.
arXiv Detail & Related papers (2023-06-04T06:27:17Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Mate! Are You Really Aware? An Explainability-Guided Testing Framework for Robustness of Malware Detectors [49.34155921877441]
We propose an explainability-guided and model-agnostic testing framework for robustness of malware detectors.
We then use this framework to test several state-of-the-art malware detectors' abilities to detect manipulated malware.
Our findings shed light on the limitations of current malware detectors, as well as how they can be improved.
arXiv Detail & Related papers (2021-11-19T08:02:38Z)
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
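The "Full DOS" attack in the last entry relies on the fact that, for PE executables, the Windows loader ignores most of the legacy DOS header. The sketch below only locates that slack region and fills it with a dummy byte pattern; it is not the paper's implementation, which optimises the injected payload against a target classifier.

```python
"""Sketch of the byte region exploited by DOS-header manipulation attacks.

Illustration only: the payload here is a dummy pattern, not an optimised
adversarial payload, and this is not the code used in the cited paper.
"""
from pathlib import Path

MZ_MAGIC = b"MZ"
E_LFANEW_OFFSET = 0x3C  # 4-byte pointer to the PE header; must stay intact


def fill_dos_header_slack(pe_path: Path, out_path: Path, payload_byte: int = 0x41) -> None:
    data = bytearray(pe_path.read_bytes())
    if data[:2] != MZ_MAGIC:
        raise ValueError("not a PE/MZ file")
    # Bytes 2 .. 0x3B are ignored by the Windows loader, which only reads the
    # magic number and the e_lfanew field; attacks in the "Full DOS" style
    # place their adversarial payload in exactly this slack region.
    for i in range(2, E_LFANEW_OFFSET):
        data[i] = payload_byte
    out_path.write_bytes(bytes(data))


if __name__ == "__main__":
    fill_dos_header_slack(Path("sample.exe"), Path("sample_fulldos.exe"))
```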