Formally Discovering and Reproducing Network Protocols Vulnerabilities
- URL: http://arxiv.org/abs/2503.01538v1
- Date: Mon, 03 Mar 2025 13:50:20 GMT
- Title: Formally Discovering and Reproducing Network Protocols Vulnerabilities
- Authors: Christophe Crochet, John Aoga, Axel Legay
- Abstract summary: This paper introduces Network Attack-centric Compositional Testing (NACT), a novel methodology to discover new vulnerabilities in network protocols. NACT integrates composable attacker specifications, formal specification mutations, and randomized constraint-solving techniques to generate sophisticated attack scenarios and test cases. By supporting cross-protocol testing within a black-box testing framework, NACT provides a versatile approach to improve the security of network protocols.
- Score: 1.7965226171103972
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid evolution of cyber threats has increased the need for robust methods to discover vulnerabilities in increasingly complex and diverse network protocols. This paper introduces Network Attack-centric Compositional Testing (NACT), a novel methodology designed to discover new vulnerabilities in network protocols and create scenarios to reproduce these vulnerabilities through attacker models. NACT integrates composable attacker specifications, formal specification mutations, and randomized constraint-solving techniques to generate sophisticated attack scenarios and test cases. The methodology enables comprehensive testing of both single-protocol and multi-protocol interactions. Through case studies involving a custom minimalist protocol (MiniP) and five widely used QUIC implementations, NACT is shown to effectively identify, reproduce, and find new real-world vulnerabilities such as version negotiation abuse. Additionally, by comparing the current and older versions of these QUIC implementations, NACT demonstrates its ability to detect both persistent vulnerabilities and regressions. Finally, by supporting cross-protocol testing within a black-box testing framework, NACT provides a versatile approach to improve the security of network protocols.
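The abstract describes NACT's core loop: compose an attacker specification, mutate the formal protocol specification, and use randomized constraint solving to derive concrete attack test cases. The fragment below is a minimal, hypothetical sketch of that idea only; the class and function names are illustrative assumptions, not the authors' tool or the MiniP specification.

```python
# Hypothetical sketch of NACT-style test generation: mutate a formal
# specification of allowed packets, then randomly search for a concrete
# packet that satisfies the mutated (attacker-centric) constraints.
import random
from dataclasses import dataclass
from typing import Callable, Dict, List

Packet = Dict[str, int]
Constraint = Callable[[Packet], bool]

@dataclass
class Spec:
    name: str
    constraints: List[Constraint]

def mutate(spec: Spec, drop_prob: float = 0.5) -> Spec:
    """Mutation operator: randomly drop constraints so the attacker may
    emit packets the honest specification would forbid."""
    kept = [c for c in spec.constraints if random.random() > drop_prob]
    return Spec(name=spec.name + "+mutated", constraints=kept)

def randomized_solve(spec: Spec, fields: Dict[str, range], tries: int = 10_000) -> Packet | None:
    """Randomized constraint solving: sample field values until a packet
    satisfying every remaining constraint is found."""
    for _ in range(tries):
        pkt = {f: random.choice(dom) for f, dom in fields.items()}
        if all(c(pkt) for c in spec.constraints):
            return pkt
    return None

# Toy specification loosely inspired by a minimalist protocol: version must
# be 1 and the payload length must stay within a bound.
toy_spec = Spec("ToyP", [lambda p: p["version"] == 1, lambda p: p["length"] <= 1200])
attack_spec = mutate(toy_spec)  # attacker-centric mutation of the specification
testcase = randomized_solve(attack_spec, {"version": range(0, 4), "length": range(0, 65536)})
print(attack_spec.name, testcase)  # candidate packet to replay against the target implementation
```

In an actual black-box campaign, the generated packet would be sent to the implementation under test and its responses checked against the unmutated specification; divergences indicate potential vulnerabilities.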
Related papers
- Multichannel Steganography: A Provably Secure Hybrid Steganographic Model for Secure Communication [0.0]
This study introduces a novel steganographic model that synthesizes Steganography by Cover Modification (CMO) and Steganography by Cover Synthesis (CSY). Building upon this model, a refined Steganographic Communication Protocol is proposed, enhancing resilience against sophisticated threats. This study explores the practicality and adaptability of the model to both constrained environments like SMS banking and resource-rich settings such as blockchain transactions.
arXiv Detail & Related papers (2025-01-08T13:58:07Z)
- Learning in Multiple Spaces: Few-Shot Network Attack Detection with Metric-Fused Prototypical Networks [47.18575262588692]
We propose a novel Multi-Space Prototypical Learning (MSPL) framework tailored for few-shot attack detection. By leveraging Polyak-averaged prototype generation, the framework stabilizes the learning process and effectively adapts to rare and zero-day attacks. Experimental results on benchmark datasets demonstrate that MSPL outperforms traditional approaches in detecting low-profile and novel attack types.
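As a rough illustration of the Polyak-averaged prototype generation mentioned above (a sketch under assumed details; the update rule and names are not taken from the paper), class prototypes can be maintained as exponential moving averages of per-episode support-set embeddings:

```python
# Hypothetical sketch: Polyak (exponential moving) averaging of class
# prototypes across few-shot episodes to stabilize prototype estimates.
import numpy as np

def episode_prototypes(embeddings: np.ndarray, labels: np.ndarray) -> dict:
    """Mean embedding per class within one episode's support set."""
    return {int(c): embeddings[labels == c].mean(axis=0) for c in np.unique(labels)}

def polyak_update(avg: dict, new: dict, tau: float = 0.99) -> dict:
    """avg <- tau * avg + (1 - tau) * new, per class; unseen classes are initialized."""
    out = dict(avg)
    for c, proto in new.items():
        out[c] = tau * out[c] + (1.0 - tau) * proto if c in out else proto
    return out

# Usage: after each training episode, fold its prototypes into the running average.
running: dict = {}
emb = np.random.randn(20, 64)
lab = np.random.randint(0, 4, size=20)
running = polyak_update(running, episode_prototypes(emb, lab))
```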
arXiv Detail & Related papers (2024-12-28T00:09:46Z)
- Securing Legacy Communication Networks via Authenticated Cyclic Redundancy Integrity Check [98.34702864029796]
We propose Authenticated Cyclic Redundancy Integrity Check (ACRIC).
ACRIC preserves backward compatibility without requiring additional hardware and is protocol agnostic.
We show that ACRIC offers robust security with minimal transmission overhead (≤ 1 ms).
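To illustrate the general idea of an authenticated integrity check retrofitted into a legacy frame's checksum field (a generic keyed-MAC sketch, not the actual ACRIC construction), a sender could place a truncated HMAC over the payload and a frame counter where the plain CRC used to go:

```python
# Generic sketch (not the ACRIC specification): replace a legacy CRC field
# with a truncated keyed MAC so integrity checks become unforgeable,
# while keeping the field width and frame layout unchanged.
import hmac
import hashlib

KEY = b"pre-shared-key-from-provisioning"  # assumed out-of-band key distribution
TAG_LEN = 4                                # same width as a CRC-32 field

def auth_tag(payload: bytes, counter: int) -> bytes:
    """Truncated HMAC over payload || counter; the counter blocks replay."""
    msg = payload + counter.to_bytes(8, "big")
    return hmac.new(KEY, msg, hashlib.sha256).digest()[:TAG_LEN]

def verify(payload: bytes, counter: int, tag: bytes) -> bool:
    return hmac.compare_digest(auth_tag(payload, counter), tag)

frame_payload = b"\x01\x02\x03\x04"
tag = auth_tag(frame_payload, counter=42)           # sender side
assert verify(frame_payload, counter=42, tag=tag)   # receiver side
```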
arXiv Detail & Related papers (2024-11-21T18:26:05Z)
- CryptoFormalEval: Integrating LLMs and Formal Verification for Automated Cryptographic Protocol Vulnerability Detection [41.94295877935867]
We introduce a benchmark to assess the ability of Large Language Models to autonomously identify vulnerabilities in new cryptographic protocols.
We created a dataset of novel, flawed communication protocols and designed a method to automatically verify the vulnerabilities found by the AI agents.
arXiv Detail & Related papers (2024-11-20T14:16:55Z)
- PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning [49.916365792036636]
Federated learning (FL) has recently gained significant momentum due to its potential to leverage large-scale distributed user data.
The transmitted model updates can potentially leak sensitive user information, and the lack of central control over the local training process leaves the global model susceptible to malicious manipulation of model updates.
We develop a general framework PriRoAgg, utilizing Lagrange coded computing and distributed zero-knowledge proof, to execute a wide range of robust aggregation algorithms while satisfying aggregated privacy.
arXiv Detail & Related papers (2024-07-12T03:18:08Z)
- A Survey and Comparative Analysis of Security Properties of CAN Authentication Protocols [92.81385447582882]
The Controller Area Network (CAN) bus lacks built-in security mechanisms, leaving in-vehicle communications inherently insecure.
This paper reviews and compares the 15 most prominent authentication protocols for the CAN bus.
We evaluate protocols based on essential operational criteria that contribute to ease of implementation.
arXiv Detail & Related papers (2024-01-19T14:52:04Z)
- SoK: Realistic Adversarial Attacks and Defenses for Intelligent Network Intrusion Detection [0.0]
This paper consolidates and summarizes the state-of-the-art adversarial learning approaches that can generate realistic examples.
It defines the fundamental properties that are required for an adversarial example to be realistic.
It provides guidelines for researchers to ensure that their future experiments are adequate for a real communication network.
arXiv Detail & Related papers (2023-08-13T17:23:36Z)
- CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks [58.29502185344086]
In safety-critical machine learning applications, it is crucial to defend models against adversarial attacks.
It is important to provide provable guarantees for deep learning models against semantically meaningful input transformations.
We propose a new universal probabilistic certification approach based on Chernoff-Cramer bounds.
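For intuition only, and not the exact certificate proved in the paper, a Chernoff-Cramer style argument bounds the probability that a random semantically meaningful transformation drops the classifier's margin below zero; here $M(x,\varepsilon)$ denotes the margin of the correct class under a transformation with random parameter $\varepsilon$:

```latex
% Illustrative Chernoff-Cramer style tail bound (an assumed form, not the
% paper's exact certificate): for any lambda > 0, by Markov's inequality,
\Pr\!\left[ M(x,\varepsilon) \le 0 \right]
  \;\le\; \inf_{\lambda > 0} \mathbb{E}\!\left[ e^{-\lambda\, M(x,\varepsilon)} \right].
```

If the right-hand side, estimated from sampled transformations, stays below a target risk level, the prediction can be certified robust with the corresponding confidence.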
arXiv Detail & Related papers (2021-09-22T12:46:04Z)
- Certifiers Make Neural Networks Vulnerable to Availability Attacks [70.69104148250614]
We show for the first time that fallback strategies can be deliberately triggered by an adversary.
In addition to naturally occurring abstains for some inputs and perturbations, the adversary can use training-time attacks to deliberately trigger the fallback.
We design two novel availability attacks, which show the practical relevance of these threats.
arXiv Detail & Related papers (2021-08-25T15:49:10Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to harden the model against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
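A minimal, hypothetical sketch of a coverage-style runtime monitor in the spirit described above (the thresholds and layer choice are assumptions, not the paper's architecture): activations of a hidden layer are compared against ranges observed on trusted data, and inputs that fall outside those ranges too often are flagged.

```python
# Hypothetical sketch of a coverage-based runtime monitor: record per-neuron
# activation ranges on trusted data, then flag inputs whose activations fall
# outside those ranges more often than a tolerated fraction.
import numpy as np

class RangeCoverageMonitor:
    def __init__(self, tolerance: float = 0.05):
        self.tolerance = tolerance  # allowed fraction of out-of-range neurons
        self.lo = None
        self.hi = None

    def fit(self, trusted_activations: np.ndarray) -> None:
        """trusted_activations: (num_samples, num_neurons) from in-distribution data."""
        self.lo = trusted_activations.min(axis=0)
        self.hi = trusted_activations.max(axis=0)

    def is_suspicious(self, activation: np.ndarray) -> bool:
        """Flag an input if too many neurons leave the trusted activation range."""
        outside = (activation < self.lo) | (activation > self.hi)
        return bool(outside.mean() > self.tolerance)

monitor = RangeCoverageMonitor()
monitor.fit(np.random.rand(1000, 128))                   # stand-in for trusted-layer activations
print(monitor.is_suspicious(np.random.rand(128) * 3.0))  # likely flagged: values exceed seen range
```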
arXiv Detail & Related papers (2021-01-28T16:38:26Z)