Crypto Miner Attack: GPU Remote Code Execution Attacks
- URL: http://arxiv.org/abs/2502.10439v1
- Date: Sun, 09 Feb 2025 19:26:47 GMT
- Title: Crypto Miner Attack: GPU Remote Code Execution Attacks
- Authors: Ariel Szabo, Uzy Hadad
- Abstract summary: Remote Code Execution (RCE) exploits pose a significant threat to AI and ML systems. This paper focuses on RCE attacks leveraging deserialization vulnerabilities and custom layers, such as Lambda layers. We demonstrate an attack that utilizes these vulnerabilities to deploy a crypto miner on a GPU.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Remote Code Execution (RCE) exploits pose a significant threat to AI and ML systems, particularly in GPU-accelerated environments where the computational power of GPUs can be misused for malicious purposes. This paper focuses on RCE attacks leveraging deserialization vulnerabilities and custom layers, such as TensorFlow Lambda layers, which are often overlooked due to the complexity of monitoring GPU workloads. These vulnerabilities enable attackers to execute arbitrary code, blending malicious activity seamlessly into expected model behavior and exploiting GPUs for unauthorized tasks such as cryptocurrency mining. Unlike traditional CPU-based attacks, the parallel processing nature of GPUs and their high resource utilization make runtime detection exceptionally challenging. In this work, we provide a comprehensive examination of RCE exploits targeting GPUs, demonstrating an attack that utilizes these vulnerabilities to deploy a crypto miner on a GPU. We highlight the technical intricacies of such attacks, emphasize their potential for significant financial and computational costs, and propose strategies for mitigation. By shedding light on this underexplored attack vector, we aim to raise awareness and encourage the adoption of robust security measures in GPU-driven AI and ML systems, with an emphasis on static and model scanning as an easier way to detect exploits.
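As a concrete illustration of the static and model scanning the abstract recommends, the sketch below flags Lambda layers inside a Keras HDF5 model without deserializing it. This is a minimal sketch, not the authors' tool: the file path, the `SUSPECT_CLASSES` set, and the recursive scanning heuristic are illustrative assumptions.

```python
import json
import h5py

# Minimal sketch: flag Keras HDF5 models whose serialized architecture
# contains Lambda layers, without deserializing (and thus executing)
# any embedded code. "model.h5" is a placeholder path.
SUSPECT_CLASSES = {"Lambda"}  # custom layer types of interest (assumption)

def scan_h5_model(path):
    """Return the names of suspicious layers found in the model config."""
    with h5py.File(path, "r") as f:
        raw = f.attrs.get("model_config")
        if raw is None:
            return []  # not a Keras HDF5 model
        if isinstance(raw, bytes):
            raw = raw.decode("utf-8")
        config = json.loads(raw)

    hits = []
    def walk(node):
        # Layer configs can nest (models inside models), so recurse.
        if isinstance(node, dict):
            if node.get("class_name") in SUSPECT_CLASSES:
                hits.append(node.get("config", {}).get("name", "<unnamed>"))
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(config)
    return hits

if __name__ == "__main__":
    flagged = scan_h5_model("model.h5")
    if flagged:
        print(f"WARNING: Lambda layers found: {flagged} -- review before loading")
```

Because the scan parses the serialized JSON config directly, it never touches the deserialization path that executes a marshalled Lambda payload; the same idea extends to SavedModel and pickle-based formats.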
Related papers
- Commercial LLM Agents Are Already Vulnerable to Simple Yet Dangerous Attacks [88.84977282952602]
A high volume of recent ML security literature focuses on attacks against aligned large language models (LLMs). In this paper, we analyze security and privacy vulnerabilities that are unique to LLM agents. We conduct a series of illustrative attacks on popular open-source and commercial agents, demonstrating the immediate practical implications of their vulnerabilities.
arXiv Detail & Related papers (2025-02-12T17:19:36Z)
- Underload: Defending against Latency Attacks for Object Detectors on Edge Devices [21.359326502877487]
A new class of latency attacks targeting the real-time processing capability of object detectors has recently been reported. We take an initial attempt to defend against this attack via background-attentive adversarial training. Experiments demonstrate the effectiveness of the defense, restoring real-time processing capability from 13 FPS to 43 FPS.
arXiv Detail & Related papers (2024-12-03T05:00:26Z)
- Poison-splat: Computation Cost Attack on 3D Gaussian Splatting [90.88713193520917]
We reveal a significant security vulnerability that has been largely overlooked in 3DGS.
The adversary can poison the input images to drastically increase the computational memory and time needed for 3DGS training.
Such a computation cost attack is achieved by addressing a bi-level optimization problem.
arXiv Detail & Related papers (2024-10-10T17:57:29Z)
- MASKDROID: Robust Android Malware Detection with Masked Graph Representations [56.09270390096083]
We propose MASKDROID, a powerful detector with a strong discriminative ability to identify malware.
We introduce a masking mechanism into the Graph Neural Network based framework, forcing MASKDROID to recover the whole input graph.
This strategy enables the model to understand the malicious semantics and learn more stable representations, enhancing its robustness against adversarial attacks (a minimal sketch of the masking idea follows this entry).
arXiv Detail & Related papers (2024-09-29T07:22:47Z)
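The masking idea can be sketched in a few lines: hide a random subset of node features and train an encoder/decoder pair to reconstruct them. The linear layers below stand in for MASKDROID's actual Graph Neural Network, and the shapes and 20% mask ratio are illustrative assumptions, not the paper's settings.

```python
import torch

# Minimal sketch of masked-graph training: hide a fraction of node
# features and learn to reconstruct the full feature matrix.
x = torch.randn(100, 64)               # node features (100 nodes, 64 dims)
mask = torch.rand(100) < 0.2           # mask ~20% of nodes (assumption)
x_masked = x.clone()
x_masked[mask] = 0.0                   # zero out masked node features

encoder = torch.nn.Linear(64, 32)      # stand-in for a GNN encoder
decoder = torch.nn.Linear(32, 64)
x_hat = decoder(torch.relu(encoder(x_masked)))

# Reconstruction loss on the masked nodes only, forcing the model to
# infer them from the surrounding (unmasked) structure.
loss = ((x_hat[mask] - x[mask]) ** 2).mean()
loss.backward()
```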
- Behavior-Based Detection of GPU Cryptojacking [0.0]
This article considers the question of GPU cryptojacking detection.
We propose a combined detection mechanism based on an application's GPU load and graphics card RAM consumption.
It was tested in a controlled virtual machine environment, achieving an 80% detection rate against a selected set of GPU cryptojacking samples and a 20% false positive rate against a selected set of legitimate GPU-heavy applications (a minimal NVML-based monitoring sketch follows this entry).
arXiv Detail & Related papers (2024-08-26T18:11:53Z)
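In the spirit of the behavior-based detector above, per-process GPU load and memory can be polled via NVML. The sketch below uses the `pynvml` bindings; the utilization threshold, memory threshold, and polling interval are illustrative assumptions rather than the paper's parameters.

```python
import time
import pynvml

# Illustrative thresholds -- not taken from the paper.
GPU_UTIL_THRESHOLD = 80      # sustained GPU utilization (%)
MEM_THRESHOLD_MB = 1024      # per-process GPU memory (MiB)

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

try:
    while True:
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        procs = pynvml.nvmlDeviceGetComputeRunningProcesses(handle)
        if util.gpu >= GPU_UTIL_THRESHOLD:
            for p in procs:
                # usedGpuMemory can be None when the driver cannot report it.
                used_mb = (p.usedGpuMemory or 0) // (1024 * 1024)
                if used_mb >= MEM_THRESHOLD_MB:
                    print(f"suspect pid={p.pid} gpu_mem={used_mb} MiB "
                          f"util={util.gpu}%")
        time.sleep(5)
finally:
    pynvml.nvmlShutdown()
```

Flagged PIDs would then be correlated over time, since miners typically sustain high utilization indefinitely while legitimate workloads fluctuate.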
- Confidential Computing on Heterogeneous CPU-GPU Systems: Survey and Future Directions [21.66522545303459]
In recent years, widespread informatization and rapid data growth have increased the demand for high-performance heterogeneous systems.
The combination of CPU and GPU is particularly popular due to its versatility.
Advances in privacy-preserving techniques, especially hardware-based Trusted Execution Environments (TEEs), offer effective protection for GPU applications.
arXiv Detail & Related papers (2024-08-21T13:14:45Z)
- Whispering Pixels: Exploiting Uninitialized Register Accesses in Modern GPUs [6.1255640691846285]
We showcase the existence of a vulnerability in products of three major vendors: Apple, NVIDIA, and Qualcomm.
This vulnerability poses unique challenges to an adversary due to opaque scheduling and register remapping algorithms.
We implement information leakage attacks on intermediate data of Convolutional Neural Networks (CNNs) and present the attack's capability to leak and reconstruct the output of Large Language Models (LLMs).
arXiv Detail & Related papers (2024-01-16T23:36:48Z)
- WebGPU-SPY: Finding Fingerprints in the Sandbox through GPU Cache Attacks [0.7400926717561453]
We present a new attack vector for microarchitectural attacks in web browsers.
We develop a cache side channel attack on the compute stack of the GPU that spies on victim activities.
We demonstrate that GPU-based cache attacks can achieve a precision of 90% for website fingerprinting across the top 100 websites.
arXiv Detail & Related papers (2024-01-09T04:21:43Z)
- Pre-trained Trojan Attacks for Visual Recognition [106.13792185398863]
Pre-trained vision models (PVMs) have become a dominant component due to their exceptional performance when fine-tuned for downstream tasks.
We propose the Pre-trained Trojan attack, which embeds backdoors into a PVM, enabling attacks across various downstream vision tasks.
We highlight the challenges posed by cross-task activation and shortcut connections in successful backdoor attacks.
arXiv Detail & Related papers (2023-12-23T05:51:40Z)
- FusionAI: Decentralized Training and Deploying LLMs with Massive Consumer-Level GPUs [57.12856172329322]
We envision a decentralized system unlocking the potential of vast, untapped consumer-level GPUs.
This system faces critical challenges, including limited CPU and GPU memory, low network bandwidth, and the variability of peers and device heterogeneity.
arXiv Detail & Related papers (2023-09-03T13:27:56Z)
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models but also includes three novel attacks.
These attacks, named Full DOS, Extend, and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section (a byte-level sketch of the DOS-header manipulation follows this entry).
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
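For intuition, the Full DOS attack relies on the Windows loader ignoring everything in the DOS header except the "MZ" magic and the 4-byte e_lfanew pointer at offset 0x3C. The sketch below locates that attacker-editable region; it is an illustration of the underlying idea under that assumption, not the paper's framework, and the file name is a placeholder.

```python
import struct

# Sketch of the byte region the "Full DOS" attack manipulates.
# The Windows loader only requires the 'MZ' magic (offsets 0x00-0x01)
# and the e_lfanew field (offsets 0x3C-0x3F) pointing at the PE header;
# the bytes in between can be perturbed without breaking execution.
def dos_editable_region(pe_bytes):
    assert pe_bytes[:2] == b"MZ", "not a PE/DOS executable"
    (e_lfanew,) = struct.unpack_from("<I", pe_bytes, 0x3C)
    assert pe_bytes[e_lfanew:e_lfanew + 4] == b"PE\x00\x00", "PE header not found"
    # Bytes 0x02..0x3B are loader-ignored and free for adversarial payload.
    return 0x02, 0x3C

if __name__ == "__main__":
    with open("sample.exe", "rb") as f:  # placeholder path
        data = bytearray(f.read())
    start, end = dos_editable_region(data)
    print(f"adversarial payload can occupy bytes [{start:#x}, {end:#x})")
```

The Extend and Shift attacks generalize the same principle by enlarging the loader-ignored region rather than merely reusing it.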