Adaptive Exploit Generation against Security Devices and Security APIs
- URL: http://arxiv.org/abs/2410.01568v1
- Date: Wed, 2 Oct 2024 14:05:44 GMT
- Title: Adaptive Exploit Generation against Security Devices and Security APIs
- Authors: Robert Künnemann, Julian Biehl,
- Abstract summary: We show how to automatically derive proof-of-concept exploits against Security APIs using formal methods.
We extend the popular protocol verifier ProVerif with a language-agnostic template mechanism.
- Score: 3.706222947143855
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Proof-of-concept exploits help demonstrate software vulnerability beyond doubt and communicate attacks to non-experts. But exploits can be configuration-specific, for example when in Security APIs, where keys are set up specifically for the application and enterprise the API serves. In this work, we show how to automatically derive proof-of-concept exploits against Security APIs using formal methods. We extend the popular protocol verifier ProVerif with a language-agnostic template mechanism. Employing program snippets attached to steps in the model, we can transform attack traces (which ProVerif typically finds automatically) into programs. Our method is general, flexible and convenient. We demonstrate its use for the W3C Web Cryptography API, for PKCS#11 and for the YubiHSM2, providing the first formal model of the latter.
Related papers
- EmInspector: Combating Backdoor Attacks in Federated Self-Supervised Learning Through Embedding Inspection [53.25863925815954]
Federated self-supervised learning (FSSL) has emerged as a promising paradigm that enables the exploitation of clients' vast amounts of unlabeled data.
While FSSL offers advantages, its susceptibility to backdoor attacks has not been investigated.
We propose the Embedding Inspector (EmInspector) that detects malicious clients by inspecting the embedding space of local models.
arXiv Detail & Related papers (2024-05-21T06:14:49Z) - A Classification-by-Retrieval Framework for Few-Shot Anomaly Detection to Detect API Injection Attacks [9.693391036125908]
We propose a novel unsupervised few-shot anomaly detection framework composed of two main parts.
First, we train a dedicated generic language model for API based on FastText embedding.
Next, we use Approximate Nearest Neighbor search in a classification-by-retrieval approach.
arXiv Detail & Related papers (2024-05-18T10:15:31Z) - An Investigation into Misuse of Java Security APIs by Large Language Models [9.453671056356837]
This paper systematically assesses ChatGPT's trustworthiness in code generation for security API use cases in Java.
Around 70% of the code instances across 30 attempts per task contain security API misuse, with 20 distinct misuse types identified.
For roughly half of the tasks, this rate reaches 100%, indicating that there is a long way to go before developers can rely on ChatGPT to securely implement security API code.
arXiv Detail & Related papers (2024-04-04T22:52:41Z) - CodeChameleon: Personalized Encryption Framework for Jailbreaking Large
Language Models [49.60006012946767]
We propose CodeChameleon, a novel jailbreak framework based on personalized encryption tactics.
We conduct extensive experiments on 7 Large Language Models, achieving state-of-the-art average Attack Success Rate (ASR)
Remarkably, our method achieves an 86.6% ASR on GPT-4-1106.
arXiv Detail & Related papers (2024-02-26T16:35:59Z) - Mitigating Fine-tuning based Jailbreak Attack with Backdoor Enhanced Safety Alignment [56.2017039028998]
Fine-tuning of Language-Model-as-a-Service (LM) introduces new threats, particularly against the Fine-tuning based Jailbreak Attack (FJAttack)
We propose the Backdoor Enhanced Safety Alignment method inspired by an analogy with the concept of backdoor attacks.
Our comprehensive experiments demonstrate that through the Backdoor Enhanced Safety Alignment with adding as few as 11 safety examples, the maliciously finetuned LLMs will achieve similar safety performance as the original aligned models without harming the benign performance.
arXiv Detail & Related papers (2024-02-22T21:05:18Z) - HasTEE+ : Confidential Cloud Computing and Analytics with Haskell [50.994023665559496]
Confidential computing enables the protection of confidential code and data in a co-tenanted cloud deployment using specialized hardware isolation units called Trusted Execution Environments (TEEs)
TEEs offer low-level C/C++-based toolchains that are susceptible to inherent memory safety vulnerabilities and lack language constructs to monitor explicit and implicit information-flow leaks.
We address the above with HasTEE+, a domain-specific language (cla) embedded in Haskell that enables programming TEEs in a high-level language with strong type-safety.
arXiv Detail & Related papers (2024-01-17T00:56:23Z) - Exploiting Novel GPT-4 APIs [3.8710514689921296]
We explore three new functionalities exposed in the GPT-4 APIs: fine-tuning, function calling and knowledge retrieval.
We find that fine-tuning a model on as few as 15 harmful examples or 100 benign examples can remove core safeguards from GPT-4.
We find that GPT-4 Assistants readily divulge the function call schema and can be made to execute arbitrary function calls.
arXiv Detail & Related papers (2023-12-21T21:22:41Z) - Not what you've signed up for: Compromising Real-World LLM-Integrated
Applications with Indirect Prompt Injection [64.67495502772866]
Large Language Models (LLMs) are increasingly being integrated into various applications.
We show how attackers can override original instructions and employed controls using Prompt Injection attacks.
We derive a comprehensive taxonomy from a computer security perspective to systematically investigate impacts and vulnerabilities.
arXiv Detail & Related papers (2023-02-23T17:14:38Z) - Conditional Generative Adversarial Network for keystroke presentation
attack [0.0]
We propose to study a new approach aiming to deploy a presentation attack towards a keystroke authentication system.
Our idea is to use Conditional Generative Adversarial Networks (cGAN) for generating synthetic keystroke data that can be used for impersonating an authorized user.
Results indicate that the cGAN can effectively generate keystroke dynamics patterns that can be used for deceiving keystroke authentication systems.
arXiv Detail & Related papers (2022-12-16T12:45:16Z) - OpenAPI Specification Extended Security Scheme: A method to reduce the prevalence of Broken Object Level Authorization [0.0]
API Security is a topic for concern given the absence of standardized authorization in the OpenAPI standard.
This paper examines the number one vulnerability in API Security: Broken Object Level Authorization(BOLA)
arXiv Detail & Related papers (2022-12-13T14:28:06Z) - Mind the GAP: Security & Privacy Risks of Contact Tracing Apps [75.7995398006171]
Google and Apple have jointly provided an API for exposure notification in order to implement decentralized contract tracing apps using Bluetooth Low Energy.
We demonstrate that in real-world scenarios the GAP design is vulnerable to (i) profiling and possibly de-anonymizing persons, and (ii) relay-based wormhole attacks that basically can generate fake contacts.
arXiv Detail & Related papers (2020-06-10T16:05:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.