Machine Learning for Offensive Security: Sandbox Classification Using
Decision Trees and Artificial Neural Networks
- URL: http://arxiv.org/abs/2007.06763v1
- Date: Tue, 14 Jul 2020 01:45:40 GMT
- Title: Machine Learning for Offensive Security: Sandbox Classification Using
Decision Trees and Artificial Neural Networks
- Authors: Will Pearce, Nick Landers, and Nancy Fulda
- Abstract summary: Machine learning techniques are not reserved for organizations with deep pockets and massive data repositories.
This paper aims to give unique insight into how a real offensive team is using machine learning to support offensive operations.
- Score: 1.758684872705242
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The merits of machine learning in information security have primarily focused
on bolstering defenses. However, machine learning (ML) techniques are not
reserved for organizations with deep pockets and massive data repositories; the
democratization of ML has led to a rise in the number of security teams using
ML to support offensive operations. The research presented here will explore
two models that our team has used to solve a single offensive task, detecting a
sandbox. Using process list data gathered with phishing emails, we will
demonstrate the use of Decision Trees and Artificial Neural Networks to
successfully classify sandboxes, thereby avoiding unsafe execution. This paper
aims to give unique insight into how a real offensive team is using machine
learning to support offensive operations.
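To make the workflow concrete, the sketch below trains both a Decision Tree and a small feed-forward ANN to separate sandboxes from real targets using features derived from a process list. This is a minimal illustration, not the authors' code: the feature set (process count, unique process names, analysis-tool hits, browser presence), the synthetic data, and the model sizes are all assumptions; the paper only states that process list data gathered via phishing emails was used.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 200

# Hypothetical per-host features derived from a process list:
# [process_count, unique_process_names, analysis_tool_hits, browser_running]
# Sandboxes are assumed here to show fewer, more uniform processes and to run
# known analysis tooling; none of these values come from the paper.
sandbox = np.column_stack([
    rng.normal(35, 5, n),          # low total process count
    rng.normal(30, 4, n),          # few unique process names
    rng.integers(1, 4, n),         # one or more analysis tools present
    rng.integers(0, 2, n),         # browser sometimes running
])
target = np.column_stack([
    rng.normal(120, 20, n),        # busy real workstation
    rng.normal(90, 10, n),
    np.zeros(n),                   # no analysis tools
    np.ones(n),                    # browser running
])

X = np.vstack([sandbox, target]).astype(float)
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = sandbox, 0 = real target

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# Model 1: a shallow decision tree, cheap to evaluate and easy to translate
# into simple if/else checks inside an implant.
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)

# Model 2: a small feed-forward artificial neural network on the same features.
ann = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000,
                    random_state=0).fit(X_train, y_train)

print("decision tree accuracy:", accuracy_score(y_test, tree.predict(X_test)))
print("neural network accuracy:", accuracy_score(y_test, ann.predict(X_test)))

# Operationally, the same features would be extracted from the live process
# list and the payload skipped whenever the model predicts "sandbox".
```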
Related papers
- Functional Encryption in Secure Neural Network Training: Data Leakage and Practical Mitigations [45.88028371034407]
We present an attack on neural network training that uses Functional Encryption (FE) for secure training over encrypted data.
One approach ensures security without relying on encryption, while the other uses function-hiding inner-product techniques.
arXiv Detail & Related papers (2025-09-25T19:56:05Z) - Unlearning Sensitive Information in Multimodal LLMs: Benchmark and Attack-Defense Evaluation [88.78166077081912]
We introduce a multimodal unlearning benchmark, UnLOK-VQA, and an attack-and-defense framework to evaluate methods for deleting specific multimodal knowledge from MLLMs.
Our results show multimodal attacks outperform text- or image-only ones, and that the most effective defense removes answer information from internal model states.
arXiv Detail & Related papers (2025-05-01T01:54:00Z) - How Secure is Forgetting? Linking Machine Unlearning to Machine Learning Attacks [1.6874375111244329]
We provide a structured analysis of security threats in Machine Learning (ML) and their implications for Machine Unlearning (MU).
We investigate four major attack classes, namely, Backdoor Attacks, Membership Inference Attacks (MIA), Adversarial Attacks, and Inversion Attacks.
We identify open challenges, including ethical considerations, and explore promising future research directions.
arXiv Detail & Related papers (2025-03-26T05:49:34Z) - Cryptanalysis via Machine Learning Based Information Theoretic Metrics [58.96805474751668]
We propose two novel applications of machine learning (ML) algorithms to perform cryptanalysis on any cryptosystem.
These algorithms can be readily applied in an audit setting to evaluate the robustness of a cryptosystem.
We show that our classification model correctly identifies the encryption schemes that are not IND-CPA secure, such as DES, RSA, and AES ECB, with high accuracy.
arXiv Detail & Related papers (2025-01-25T04:53:36Z) - Machine Unlearning using Forgetting Neural Networks [0.0]
This paper presents a new approach to machine unlearning using forgetting neural networks (FNNs).
FNNs are neural networks with specific forgetting layers, that take inspiration from the processes involved when a human brain forgets.
We report our results on the MNIST handwritten digit recognition and Fashion-MNIST datasets.
arXiv Detail & Related papers (2024-10-29T02:52:26Z) - Verification of Machine Unlearning is Fragile [48.71651033308842]
We introduce two novel adversarial unlearning processes capable of circumventing both types of verification strategies.
This study highlights the vulnerabilities and limitations in machine unlearning verification, paving the way for further research into the safety of machine unlearning.
arXiv Detail & Related papers (2024-08-01T21:37:10Z) - The Frontier of Data Erasure: Machine Unlearning for Large Language Models [56.26002631481726]
Large Language Models (LLMs) are foundational to AI advancements.
LLMs pose risks by potentially memorizing and disseminating sensitive, biased, or copyrighted information.
Machine unlearning emerges as a cutting-edge solution to mitigate these concerns.
arXiv Detail & Related papers (2024-03-23T09:26:15Z) - Do You Trust Your Model? Emerging Malware Threats in the Deep Learning
Ecosystem [37.650342256199096]
We introduce MaleficNet 2.0, a technique to embed self-extracting, self-executing malware in neural networks.
The MaleficNet 2.0 injection technique is stealthy, does not degrade model performance, and is robust against removal techniques.
We implement a proof-of-concept self-extracting neural network malware using MaleficNet 2.0, demonstrating the practicality of the attack against a widely adopted machine learning framework.
arXiv Detail & Related papers (2024-03-06T10:27:08Z) - Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection
Capability [70.72426887518517]
Out-of-distribution (OOD) detection is an indispensable aspect of secure AI when deploying machine learning models in real-world applications.
We propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capabilities of the well-trained model with ID data.
Our method utilizes a mask to identify the memorized atypical samples, then finetunes the model or prunes it with the introduced mask to forget them.
arXiv Detail & Related papers (2023-06-06T14:23:34Z) - A Survey of Machine Unlearning [56.017968863854186]
Recent regulations now require that, on request, private information about a user must be removed from computer systems.
ML models often 'remember' the old data.
Recent works on machine unlearning have not been able to completely solve the problem.
arXiv Detail & Related papers (2022-09-06T08:51:53Z) - An integrated Auto Encoder-Block Switching defense approach to prevent
adversarial attacks [0.0]
The vulnerability of state-of-the-art Neural Networks to adversarial input samples has increased drastically.
This article proposes a defense algorithm that utilizes the combination of an auto-encoder and block-switching architecture.
arXiv Detail & Related papers (2022-03-11T10:58:24Z) - Inspect, Understand, Overcome: A Survey of Practical Methods for AI
Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z) - Robust and Verifiable Information Embedding Attacks to Deep Neural
Networks via Error-Correcting Codes [81.85509264573948]
In the era of deep learning, a user often leverages a third-party machine learning tool to train a deep neural network (DNN) classifier.
In an information embedding attack, an attacker is the provider of a malicious third-party machine learning tool.
In this work, we aim to design information embedding attacks that are verifiable and robust against popular post-processing methods.
arXiv Detail & Related papers (2020-10-26T17:42:42Z) - Security of Distributed Machine Learning: A Game-Theoretic Approach to
Design Secure DSVM [31.480769801354413]
This work aims to develop secure distributed algorithms to protect the learning from data poisoning and network attacks.
We establish a game-theoretic framework to capture the conflicting goals of a learner who uses distributed support vector machines (SVMs) and an attacker who is capable of modifying training data and labels.
The numerical results show that distributed SVMs are prone to failure under different types of attacks, and that attack impact depends strongly on the network structure and attacker capabilities.
arXiv Detail & Related papers (2020-03-08T18:54:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences arising from its use.