A Classification-by-Retrieval Framework for Few-Shot Anomaly Detection to Detect API Injection Attacks
- URL: http://arxiv.org/abs/2405.11247v2
- Date: Sun, 15 Sep 2024 15:31:13 GMT
- Title: A Classification-by-Retrieval Framework for Few-Shot Anomaly Detection to Detect API Injection Attacks
- Authors: Udi Aharon, Ran Dubin, Amit Dvir, Chen Hajaj
- Abstract summary: We propose a novel unsupervised few-shot anomaly detection framework composed of two main parts.
First, we train a dedicated generic language model for APIs based on FastText embeddings.
Next, we use Approximate Nearest Neighbor search in a classification-by-retrieval approach.
- Score: 9.693391036125908
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Application Programming Interface (API) Injection attacks refer to the unauthorized or malicious use of APIs, which are often exploited to gain access to sensitive data or manipulate online systems for illicit purposes. Identifying actors that deceitfully utilize an API poses a demanding problem. Although there have been notable advancements and contributions in the field of API security, there remains a significant challenge when dealing with attackers who use novel approaches that don't match the well-known payloads commonly seen in attacks. Also, attackers may exploit standard functionalities unconventionally and with objectives surpassing their intended boundaries. Thus, API security needs to be more sophisticated and dynamic than ever, with advanced computational intelligence methods, such as machine learning models that can quickly identify and respond to abnormal behavior. In response to these challenges, we propose a novel unsupervised few-shot anomaly detection framework composed of two main parts: First, we train a dedicated generic language model for API based on FastText embedding. Next, we use Approximate Nearest Neighbor search in a classification-by-retrieval approach. Our framework allows for training a fast, lightweight classification model using only a few examples of normal API requests. We evaluated the performance of our framework using the CSIC 2010 and ATRDF 2023 datasets. The results demonstrate that our framework improves API attack detection accuracy compared to the state-of-the-art (SOTA) unsupervised anomaly detection baselines.
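The two-stage design described in the abstract can be illustrated with off-the-shelf components. The sketch below is a minimal approximation, assuming gensim's FastText for the API language model and scikit-learn's exact NearestNeighbors as a stand-in for the Approximate Nearest Neighbor index; the tokenizer, hyperparameters, and anomaly threshold are illustrative choices, not the authors' implementation.

```python
# Minimal sketch of FastText embeddings + nearest-neighbor retrieval for
# unsupervised, few-shot API anomaly detection. Not the authors' code:
# tokenizer, hyperparameters, and threshold are illustrative assumptions.
import numpy as np
from gensim.models import FastText
from sklearn.neighbors import NearestNeighbors

def tokenize(request: str):
    # Crude tokenizer: split an API request line on common delimiters.
    for sep in "/?&=":
        request = request.replace(sep, " ")
    return request.split()

# Few-shot "training" set: a handful of normal API requests.
normal_requests = [
    "GET /api/v1/users?id=42",
    "GET /api/v1/orders?user=42&page=1",
    "POST /api/v1/login user=alice",
]

corpus = [tokenize(r) for r in normal_requests]
ft = FastText(sentences=corpus, vector_size=64, window=3, min_count=1, epochs=50)

def embed(request: str) -> np.ndarray:
    # Request embedding = mean of subword-aware token vectors.
    return np.mean([ft.wv[t] for t in tokenize(request)], axis=0)

# Index the normal requests (exact k-NN here; an ANN index such as FAISS or
# ScaNN would play this role at scale, matching the retrieval step above).
index = NearestNeighbors(n_neighbors=1, metric="cosine").fit(
    np.stack([embed(r) for r in normal_requests]))

def anomaly_score(request: str) -> float:
    dist, _ = index.kneighbors(embed(request).reshape(1, -1))
    return float(dist[0, 0])  # cosine distance to the nearest normal request

THRESHOLD = 0.3  # illustrative; would be calibrated on held-out normal traffic
score = anomaly_score("GET /api/v1/users?id=1 OR 1=1--")
print("anomalous" if score > THRESHOLD else "normal", score)
```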
Related papers
- PARIS: A Practical, Adaptive Trace-Fetching and Real-Time Malicious Behavior Detection System [6.068607290592521]
We propose an adaptive trace-fetching, lightweight, real-time malicious behavior detection system.
Specifically, we monitor malicious behavior with Event Tracing for Windows (ETW) and learn to selectively collect maliciousness-related APIs or call stacks.
As a result, we can monitor a wider range of APIs and detect more intricate attack behavior.
arXiv Detail & Related papers (2024-11-02T14:52:04Z)
- AdvQDet: Detecting Query-Based Adversarial Attacks with Adversarial Contrastive Prompt Tuning [93.77763753231338]
Adversarial Contrastive Prompt Tuning (ACPT) is proposed to fine-tune the CLIP image encoder to extract similar embeddings for any two intermediate adversarial queries.
We show that ACPT can detect 7 state-of-the-art query-based attacks with >99% detection rate within 5 shots.
We also show that ACPT is robust to 3 types of adaptive attacks.
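The detection rule behind ACPT can be sketched independently of the CLIP fine-tuning: queries generated by a query-based attack are near-duplicates of each other, so a new query whose embedding is highly similar to one of the last few queries from the same client is flagged. The snippet below is a hedged illustration of that check only; the encoder, window size, and threshold are placeholders rather than values from the paper.

```python
# Sketch of the "detect by similarity to recent queries" idea behind ACPT.
# The encoder is assumed given (ACPT uses a contrastively prompt-tuned CLIP
# image encoder); here it is just an abstract callable.
import numpy as np
from collections import deque

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

class QueryAttackDetector:
    def __init__(self, encoder, k=5, threshold=0.95):
        self.encoder = encoder          # maps an input image to an embedding
        self.k = k                      # few-shot memory of recent queries
        self.threshold = threshold      # illustrative similarity threshold
        self.history = deque(maxlen=k)

    def is_attack_query(self, image) -> bool:
        emb = self.encoder(image)
        suspicious = any(cosine(emb, past) > self.threshold for past in self.history)
        self.history.append(emb)
        return suspicious
```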
arXiv Detail & Related papers (2024-08-04T09:53:50Z)
- Few-Shot API Attack Detection: Overcoming Data Scarcity with GAN-Inspired Learning [9.035212370386846]
This paper proposes a novel few-shot detection approach motivated by Natural Language Processing (NLP) and advanced Generative Adversarial Network (GAN)-inspired techniques.
Our method enhances the contextual understanding of API requests, leading to improved anomaly detection compared to traditional methods.
arXiv Detail & Related papers (2024-05-18T11:10:45Z)
- Prompt Engineering-assisted Malware Dynamic Analysis Using GPT-4 [45.935748395725206]
We introduce a prompt engineering-assisted malware dynamic analysis using GPT-4.
In this method, GPT-4 is employed to create explanatory text for each API call within the API sequence.
BERT is used to obtain the representation of the text, from which we derive the representation of the API sequence.
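This pipeline (a GPT-4 explanation per API call, encoded with BERT, then pooled into a sequence representation) can be approximated with the Hugging Face transformers library. The sketch below assumes the explanation strings already exist and uses mean pooling over [CLS] vectors, which is one plausible aggregation rather than the paper's exact method.

```python
# Sketch: turn per-API-call explanation texts into one sequence representation.
# Assumes the GPT-4 explanations are already available as strings.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def sequence_representation(api_call_explanations: list[str]) -> torch.Tensor:
    # One [CLS] embedding per explanatory text, then mean-pool over the sequence.
    cls_vectors = []
    with torch.no_grad():
        for text in api_call_explanations:
            inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
            outputs = bert(**inputs)
            cls_vectors.append(outputs.last_hidden_state[:, 0, :])  # [CLS] token
    return torch.cat(cls_vectors, dim=0).mean(dim=0)  # (hidden_size,)

# Example: a representation like this would feed a downstream malware classifier.
rep = sequence_representation([
    "CreateFileW opens a file handle, here with write access to a system path.",
    "WriteFile writes attacker-controlled bytes to the opened handle.",
])
print(rep.shape)
```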
arXiv Detail & Related papers (2023-12-13T17:39:44Z)
- Kairos: Practical Intrusion Detection and Investigation using Whole-system Provenance [4.101641763092759]
Provenance graphs are structured audit logs that describe the history of a system's execution.
We identify four common dimensions that drive the development of provenance-based intrusion detection systems (PIDSes).
We present KAIROS, the first PIDS that simultaneously satisfies the desiderata in all four dimensions.
arXiv Detail & Related papers (2023-08-09T16:04:55Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Using Anomaly Detection to Detect Poisoning Attacks in Federated Learning Applications [2.978389704820221]
Adversarial attacks such as poisoning attacks have attracted the attention of many machine learning researchers.
Traditionally, poisoning attacks attempt to inject adversarial training data in order to manipulate the trained model.
In federated learning (FL), data poisoning attacks can be generalized to model poisoning attacks, which cannot be detected by simpler methods due to the lack of access to local training data by the detector.
We propose a novel framework for detecting poisoning attacks in FL, which employs a reference model based on a public dataset and an auditor model to detect malicious updates.
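A simple way to realize this reference-model idea is to compare each client's submitted update with the update produced by a reference model trained on the public dataset, and pass low-similarity clients to the auditor. The sketch below uses cosine similarity on flattened update vectors with an illustrative threshold; it is an approximation, not the paper's exact detector.

```python
# Sketch: flag federated-learning client updates that diverge from a reference
# update computed on a public dataset. Threshold and metric are illustrative.
import numpy as np

def flatten(update: dict) -> np.ndarray:
    # Concatenate all parameter arrays of one model update into a vector.
    return np.concatenate([np.ravel(v) for v in update.values()])

def suspicious_clients(client_updates: dict, reference_update: dict, threshold=0.2):
    ref = flatten(reference_update)
    flagged = []
    for client_id, update in client_updates.items():
        vec = flatten(update)
        sim = np.dot(vec, ref) / (np.linalg.norm(vec) * np.linalg.norm(ref) + 1e-12)
        if sim < threshold:          # dissimilar to the reference: send to auditor
            flagged.append(client_id)
    return flagged
```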
arXiv Detail & Related papers (2022-07-18T10:10:45Z)
- ReAct: Temporal Action Detection with Relational Queries [84.76646044604055]
This work aims at advancing temporal action detection (TAD) using an encoder-decoder framework with action queries.
We first propose a relational attention mechanism in the decoder, which guides the attention among queries based on their relations.
Lastly, we propose to predict the localization quality of each action query at inference in order to distinguish high-quality queries.
arXiv Detail & Related papers (2022-07-14T17:46:37Z)
- Detection of Adversarial Supports in Few-shot Classifiers Using Feature Preserving Autoencoders and Self-Similarity [89.26308254637702]
We propose a detection strategy to highlight adversarial support sets.
We make use of feature-preserving autoencoder filtering and the concept of self-similarity of a support set to perform this detection.
Our method is attack-agnostic and, to the best of our knowledge, the first to explore detection for few-shot classifiers.
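The self-similarity signal can be sketched as the average pairwise feature similarity within a support set: a clean few-shot support set for a class is internally consistent, while an adversarially perturbed one tends to score lower. The snippet below is an illustrative version of that statistic (the autoencoder filtering stage is omitted), with an arbitrary threshold.

```python
# Sketch: self-similarity score of a few-shot support set from its features.
# A low score suggests the support set may contain adversarial examples.
import numpy as np

def self_similarity(support_features: np.ndarray) -> float:
    # support_features: (n_shots, feature_dim) array from a frozen feature extractor.
    norms = np.linalg.norm(support_features, axis=1, keepdims=True)
    normalized = support_features / (norms + 1e-12)
    sims = normalized @ normalized.T                 # pairwise cosine similarities
    n = sims.shape[0]
    off_diag = sims[~np.eye(n, dtype=bool)]          # exclude self-pairs
    return float(off_diag.mean())

# Flag a support set if its self-similarity falls below an illustrative threshold.
def is_adversarial_support(support_features, threshold=0.5) -> bool:
    return self_similarity(support_features) < threshold
```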
arXiv Detail & Related papers (2020-12-09T14:13:41Z)
- MixNet for Generalized Face Presentation Attack Detection [63.35297510471997]
We have proposed a deep learning-based network termed MixNet to detect presentation attacks.
The proposed algorithm utilizes state-of-the-art convolutional neural network architectures and learns the feature mapping for each attack category.
arXiv Detail & Related papers (2020-10-25T23:01:13Z)
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
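These attacks rely on the fact that bytes in the DOS header and stub of a Windows PE file, other than the MZ magic and the e_lfanew pointer at offset 0x3C, are ignored by the loader and can carry an adversarial payload. The sketch below only locates those editable byte ranges; the actual attacks additionally optimize the injected bytes against a target detector.

```python
# Sketch: locate PE bytes that DOS-header-style attacks can safely modify.
# Only parses the DOS header; payload optimization against a detector is omitted.
import struct

def editable_dos_ranges(pe_bytes: bytes):
    assert pe_bytes[:2] == b"MZ", "not a PE file"
    # e_lfanew: 4-byte little-endian offset of the PE header, stored at 0x3C.
    e_lfanew = struct.unpack_from("<I", pe_bytes, 0x3C)[0]
    return [
        (0x02, 0x3C),        # DOS header fields after the MZ magic
        (0x40, e_lfanew),    # DOS stub up to the PE header
    ]
```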
arXiv Detail & Related papers (2020-08-17T07:16:57Z)