Security Testbed for Preempting Attacks against Supercomputing Infrastructure
- URL: http://arxiv.org/abs/2409.09602v2
- Date: Sat, 5 Oct 2024 23:54:13 GMT
- Title: Security Testbed for Preempting Attacks against Supercomputing Infrastructure
- Authors: Phuong Cao, Zbigniew Kalbarczyk, Ravishankar Iyer
- Abstract summary: This paper describes a security testbed embedded in live traffic of a supercomputer at the National Center for Supercomputing Applications.
The objective is to demonstrate attack preemption, i.e., stopping system compromise and data breaches at petascale supercomputers.
- Score: 1.9097277955963794
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Securing HPC has a unique threat model. Untrusted, malicious code exploiting the concentrated computing power may exert an outsized impact on the shared, open-networked environment in HPC, unlike well-isolated VM tenants in public clouds. Therefore, preempting attacks targeting supercomputing systems before they cause damage remains the top security priority. The main challenge is that noisy attack attempts and unreliable alerts often mask real attacks, causing permanent damage such as system integrity violations and data breaches. This paper describes a security testbed embedded in the live traffic of a supercomputer at the National Center for Supercomputing Applications (NCSA). The objective is to demonstrate attack preemption, i.e., stopping system compromise and data breaches at petascale supercomputers. Deployment of our testbed at NCSA enables the following key contributions: 1) Insights from characterizing unique attack patterns found in real security logs of more than 200 security incidents curated over the past two decades at NCSA. 2) Deployment of an attack visualization tool to illustrate the challenges of identifying real attacks in HPC environments and to support security operators in interactive attack analyses. 3) Demonstration of the testbed's utility by running novel models, such as Factor-Graph-based models, to preempt a real-world ransomware family.
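Since the abstract highlights Factor-Graph-based models, a toy worked example may help. The sketch below is illustrative only, not the paper's implementation: two hypothetical binary variables (host compromised, exfiltration underway) are coupled to noisy alert observations through hand-picked factor tables, and the posterior probability of compromise is computed by brute-force enumeration.

```python
# Toy factor-graph inference for attack preemption (illustrative only).
# Variables: C = host compromised?  E = exfiltration underway?  (both 0/1)
# Observed evidence: a noisy IDS alert (A=1) and an outbound traffic spike (T=1).
# All factor tables below are hypothetical, not taken from the paper.

phi_C = {0: 0.95, 1: 0.05}                    # prior: compromises are rare
phi_CE = {(0, 0): 0.99, (0, 1): 0.01,         # exfiltration is far more likely
          (1, 0): 0.40, (1, 1): 0.60}         # on a compromised host
phi_A1 = {0: 0.30, 1: 0.90}                   # P(alert | C): false positives common
phi_T1 = {0: 0.20, 1: 0.85}                   # P(traffic spike | E)

def joint(c: int, e: int) -> float:
    """Unnormalized joint of (C=c, E=e) given the observed evidence A=1, T=1."""
    return phi_C[c] * phi_CE[(c, e)] * phi_A1[c] * phi_T1[e]

# Marginalize out E by enumeration, then normalize to get P(C | evidence).
unnorm = {c: sum(joint(c, e) for e in (0, 1)) for c in (0, 1)}
z = sum(unnorm.values())
print(f"P(compromised | alert, traffic spike) = {unnorm[1] / z:.3f}")

# A preemption policy would act (e.g., suspend the session or block the flow)
# once this posterior crosses an operator-chosen threshold, before data leaves.
```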
Related papers
- Cabin: Confining Untrusted Programs within Confidential VMs [13.022056111810599]
Confidential computing safeguards sensitive computations from untrusted clouds.
Confidential VMs (CVMs) often come with large and vulnerable operating system kernels, making them susceptible to attacks that exploit kernel weaknesses.
This study proposes Cabin, an isolated execution framework within the guest VM that utilizes the latest AMD SEV-SNP technology.
arXiv Detail & Related papers (2024-07-17T06:23:28Z)
- EmInspector: Combating Backdoor Attacks in Federated Self-Supervised Learning Through Embedding Inspection [53.25863925815954]
Federated self-supervised learning (FSSL) has emerged as a promising paradigm that enables the exploitation of clients' vast amounts of unlabeled data.
While FSSL offers advantages, its susceptibility to backdoor attacks has not been investigated.
We propose the Embedding Inspector (EmInspector) that detects malicious clients by inspecting the embedding space of local models.
arXiv Detail & Related papers (2024-05-21T06:14:49Z)
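A minimal sketch of EmInspector's embedding-inspection idea, under strong assumptions: each client's local encoder embeds a shared probe batch, and clients whose mean embeddings sit far from the cross-client median are flagged. The function, threshold, and synthetic data below are invented for illustration.

```python
# Illustrative embedding-space inspection (an assumed simplification of EmInspector).
import numpy as np

def flag_suspicious_clients(embeddings: dict[str, np.ndarray],
                            threshold: float = 3.0) -> list[str]:
    """embeddings maps client_id -> (n_probes, dim) embeddings of a shared
    probe batch. Clients whose mean embedding lies far from the cross-client
    median are flagged as potentially backdoored."""
    means = {cid: emb.mean(axis=0) for cid, emb in embeddings.items()}
    stacked = np.stack(list(means.values()))        # (n_clients, dim)
    center = np.median(stacked, axis=0)             # robust per-dimension center
    dists = {cid: float(np.linalg.norm(m - center)) for cid, m in means.items()}
    cutoff = threshold * float(np.median(list(dists.values())))
    return [cid for cid, d in dists.items() if d > cutoff]

# Synthetic check: nine benign clients plus one whose embeddings have drifted.
rng = np.random.default_rng(0)
clients = {f"client{i}": rng.normal(0, 1, (32, 16)) for i in range(9)}
clients["client9"] = rng.normal(3, 1, (32, 16))     # backdoor-induced drift
print(flag_suspicious_clients(clients))             # -> ['client9']
```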
- Got Root? A Linux Priv-Esc Benchmark [3.11537581064266]
Linux systems are integral to the infrastructure of modern computing environments.
A benchmark set of vulnerable systems is of high importance to evaluate the effectiveness of privilege-escalation techniques.
arXiv Detail & Related papers (2024-05-03T14:04:51Z)
- Towards a Near-real-time Protocol Tunneling Detector based on Machine Learning Techniques [0.0]
We present a protocol tunneling detector prototype which inspects, in near real time, a company's network traffic using machine learning techniques.
The detector monitors unencrypted network flows and extracts features to detect possible ongoing attacks and anomalies.
Results show 97.1% overall accuracy and an F1-score of 95.6%.
arXiv Detail & Related papers (2023-09-22T09:08:43Z)
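As a rough illustration of the flow-feature approach in the tunneling-detector paper (the actual feature set and model may differ), the sketch below summarizes each flow into simple statistics and trains an off-the-shelf classifier on synthetic benign and tunneled flows; all names and data are assumptions.

```python
# Illustrative flow-feature extraction + classifier for tunneling detection.
# Feature choices here are assumptions, not the paper's exact feature set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def flow_features(pkt_sizes: list[int], inter_arrivals: list[float]) -> list[float]:
    """Summarize one network flow into a fixed-length feature vector."""
    sizes = np.array(pkt_sizes, dtype=float)
    gaps = np.array(inter_arrivals, dtype=float)
    return [sizes.mean(), sizes.std(), len(sizes),
            gaps.mean(), gaps.std(), sizes.sum()]

rng = np.random.default_rng(1)
X, y = [], []
for _ in range(200):   # benign flows: bursty, varied packet sizes
    n = rng.integers(5, 50)
    X.append(flow_features(rng.integers(60, 1500, n).tolist(),
                           rng.exponential(0.05, n).tolist()))
    y.append(0)
for _ in range(200):   # tunneled flows: uniform small packets, steady timing
    n = rng.integers(5, 50)
    X.append(flow_features(rng.integers(90, 130, n).tolist(),
                           rng.normal(0.5, 0.01, n).clip(0).tolist()))
    y.append(1)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print("train accuracy:", clf.score(X, y))   # near-perfect on separable toy data
```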
- The Best Defense is a Good Offense: Adversarial Augmentation against Adversarial Attacks [91.56314751983133]
A5 is a framework to craft a defensive perturbation that guarantees any attack against the input at hand will fail.
We show effective on-the-fly defensive augmentation with a robustifier network that ignores the ground-truth label.
We also show how to apply A5 to create certifiably robust physical objects.
arXiv Detail & Related papers (2023-05-23T16:07:58Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
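The window-ablation scheme in DRSM can be made concrete with a small sketch (an assumed simplification, not the authors' code): classify each fixed-size byte window independently, then take a majority vote, so an adversarial payload confined to k windows can flip at most k votes.

```python
# Illustrative de-randomized smoothing by window ablation (assumed simplification).
from collections import Counter
from typing import Callable

def smoothed_classify(data: bytes, window: int,
                      base_clf: Callable[[bytes], int]) -> tuple[int, int]:
    """Classify each fixed window independently; majority vote decides.
    Returns (label, margin). A payload touching k windows changes at most
    k votes, so margin > 2*k certifies robustness to that payload."""
    votes = Counter(base_clf(data[i:i + window])
                    for i in range(0, len(data), window))
    (top, n1), *rest = votes.most_common()
    n2 = rest[0][1] if rest else 0
    return top, n1 - n2

# Hypothetical base classifier: flags windows containing a known byte marker.
marker = b"\xde\xad"
base = lambda chunk: 1 if marker in chunk else 0
sample = bytes(1000) + marker + bytes(1000)     # one malicious window
print(smoothed_classify(sample, window=256, base_clf=base))  # -> (0, 6)
```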
- Physical Adversarial Attack meets Computer Vision: A Decade Survey [57.46379460600939]
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
arXiv Detail & Related papers (2022-09-30T01:59:53Z)
- Security Orchestration, Automation, and Response Engine for Deployment of Behavioural Honeypots [0.0]
The Security Orchestration, Automation, and Response (SOAR) engine dynamically deploys custom honeypots inside the internal network infrastructure based on the attacker's behavior.
The engine detects botnet traffic and DDoS attacks against the honeypots and includes a malware collection system.
arXiv Detail & Related papers (2022-01-14T07:57:12Z)
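A toy sketch of behavior-driven honeypot dispatch in the spirit of the SOAR engine; the catalog, behavior labels, and selection rule are invented for illustration.

```python
# Illustrative behavior-based honeypot selection (rules are hypothetical).
HONEYPOT_CATALOG = {
    "ssh_bruteforce": "cowrie",       # e.g., a medium-interaction SSH honeypot
    "http_scanning": "web_decoy",
    "smb_probing":   "smb_trap",
}

def choose_honeypot(events: list[dict]) -> str | None:
    """Map observed attacker behavior to a honeypot image to deploy."""
    counts: dict[str, int] = {}
    for ev in events:
        counts[ev["behavior"]] = counts.get(ev["behavior"], 0) + 1
    if not counts:
        return None
    dominant = max(counts, key=counts.get)   # most frequent behavior wins
    return HONEYPOT_CATALOG.get(dominant)

events = [{"behavior": "ssh_bruteforce"}] * 5 + [{"behavior": "http_scanning"}]
print(choose_honeypot(events))   # -> cowrie
```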
- Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks [5.231607386266116]
We study the realistic threat of deployment-stage backdoor attacks on deep learning models.
We propose the first gray-box and physically realizable weights attack algorithm for backdoor injection.
Our results suggest the effectiveness and practicality of the proposed attack algorithm.
arXiv Detail & Related papers (2021-11-25T08:25:27Z)
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine-learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
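The Full DOS manipulation can be sketched concretely: in a PE file the Windows loader ignores every DOS-header byte except the MZ magic (offsets 0x00-0x01) and the e_lfanew pointer (offsets 0x3C-0x3F), so those bytes can host an adversarial payload without breaking execution. A minimal illustration, with the payload-optimization step omitted:

```python
# Illustrative "Full DOS" byte manipulation on a PE header (payload search omitted).
def inject_dos_payload(pe_bytes: bytes, payload: bytes) -> bytes:
    """Overwrite loader-ignored DOS-header bytes with an adversarial payload.
    Preserves the MZ magic (0x00-0x01) and e_lfanew (0x3C-0x3F)."""
    editable = list(range(2, 0x3C))          # bytes the loader never reads
    assert len(payload) <= len(editable), "payload too large for DOS header"
    out = bytearray(pe_bytes)
    for off, b in zip(editable, payload):
        out[off] = b
    return bytes(out)

# Toy example on a fake header; a real attack would optimize `payload`
# against the target classifier (e.g., MalConv) by gradient or black-box search.
fake_pe = b"MZ" + bytes(0x3A) + (0x80).to_bytes(4, "little") + bytes(64)
patched = inject_dos_payload(fake_pe, b"\x41" * 16)
assert patched[:2] == b"MZ" and patched[0x3C:0x40] == fake_pe[0x3C:0x40]
print(patched[2:18])    # payload bytes now occupy loader-ignored space
```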
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
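A short PyTorch sketch of the self-supervised idea, under assumptions: a label-free perturbation is crafted to maximize feature distortion, and the feature extractor is trained to minimize that distortion. The architecture, loop, and hyperparameters are invented, not the paper's.

```python
# Illustrative self-supervised adversarial training in the input space
# (an assumed simplification: no labels are used anywhere).
import torch
import torch.nn as nn

feat = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(),
                     nn.Linear(64, 32))                 # toy feature extractor

def fgsm_feature_attack(x: torch.Tensor, eps: float = 0.1) -> torch.Tensor:
    """Craft a label-free perturbation that maximally distorts features."""
    delta = torch.zeros_like(x, requires_grad=True)
    loss = (feat(x + delta) - feat(x).detach()).pow(2).mean()
    loss.backward()
    return (x + eps * delta.grad.sign()).detach()

opt = torch.optim.Adam(feat.parameters(), lr=1e-3)
for _ in range(10):                                     # toy training loop
    x = torch.rand(16, 1, 28, 28)
    x_adv = fgsm_feature_attack(x)
    loss = (feat(x_adv) - feat(x).detach()).pow(2).mean()  # feature consistency
    opt.zero_grad(); loss.backward(); opt.step()
print("final feature-distortion loss:", loss.item())
```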