An Automated Blackbox Noncompliance Checker for QUIC Server Implementations
- URL: http://arxiv.org/abs/2505.12690v1
- Date: Mon, 19 May 2025 04:28:49 GMT
- Title: An Automated Blackbox Noncompliance Checker for QUIC Server Implementations
- Authors: Kian Kai Ang, Guy Farrelly, Cheryl Pope, Damith C. Ranasinghe
- Abstract summary: QUICtester is an automated approach for uncovering non-compliant behaviors in implementations of the ratified QUIC protocol (RFC 9000/9001). We used QUICtester to analyze 186 learned models from 19 QUIC implementations under five security settings and discovered 55 implementation errors.
- Score: 2.9248916859490173
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We develop QUICtester, an automated approach for uncovering non-compliant behaviors in implementations of the ratified QUIC protocol (RFC 9000/9001). QUICtester leverages active automata learning to abstract the behavior of a QUIC implementation into a finite state machine (FSM) representation. Unlike prior noncompliance checking methods, QUICtester introduces the idea of state learning with event timing variations to help uncover state dependencies on event timing, adopting both valid and invalid input configurations and combinations of security and transport layer parameters during learning. We use pairwise differential analysis of the learned behaviour models of tested QUIC implementations to identify non-compliance instances as behaviour deviations in a property-agnostic way. This exploits the existence of many different QUIC implementations, removing the need for validated, formal models: the diverse implementations act as cross-checking test oracles to discover non-compliance. We used QUICtester to analyze 186 learned models from 19 QUIC implementations under five security settings and discovered 55 implementation errors. Significantly, the tool uncovered a QUIC specification ambiguity resulting in an easily exploitable DoS vulnerability, and has so far led to 5 CVE assignments from developers and two bug bounties.
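To make the pairwise differential analysis concrete, here is a minimal sketch in Python, assuming each learned behaviour model is a deterministic Mealy machine stored as a transition table; the model contents, symbol names, and function names are illustrative placeholders, not QUICtester's actual data structures or API.

```python
from itertools import product

# Hypothetical learned models: deterministic Mealy machines as
# {(state, input_symbol): (next_state, output_symbol)} transition tables.
MODEL_A = {
    ("init", "InitialClientHello"): ("handshake", "InitialServerHello"),
    ("handshake", "HandshakeFinished"): ("connected", "HandshakeDone"),
    ("connected", "ConnectionClose"): ("closed", "ConnectionClose"),
}
MODEL_B = {
    ("init", "InitialClientHello"): ("handshake", "InitialServerHello"),
    ("handshake", "HandshakeFinished"): ("connected", "HandshakeDone"),
    ("connected", "ConnectionClose"): ("closed", "-"),  # deviating behavior
}

def run(model, inputs, start="init"):
    """Replay an input sequence on a Mealy machine and return the output trace."""
    state, outputs = start, []
    for symbol in inputs:
        state, out = model.get((state, symbol), ("sink", "no-response"))
        outputs.append(out)
    return outputs

def differential_check(model_a, model_b, alphabet, depth=3):
    """Report input sequences on which the two learned models disagree."""
    deviations = []
    for length in range(1, depth + 1):
        for seq in product(alphabet, repeat=length):
            out_a, out_b = run(model_a, seq), run(model_b, seq)
            if out_a != out_b:
                deviations.append((seq, out_a, out_b))
    return deviations

alphabet = ["InitialClientHello", "HandshakeFinished", "ConnectionClose"]
for seq, out_a, out_b in differential_check(MODEL_A, MODEL_B, alphabet):
    print(f"deviation on {seq}: A -> {out_a}, B -> {out_b}")
```

In the paper's setting, every pair of learned models is compared in this property-agnostic way, so an observed deviation flags at least one of the two implementations as potentially non-compliant without requiring a validated formal reference model.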
Related papers
- AI5GTest: AI-Driven Specification-Aware Automated Testing and Validation of 5G O-RAN Components [1.1879716317856948]
We present AI5GTest -- an AI-powered, specification-aware testing framework.
It is designed to automate the validation of O-RAN components.
It demonstrates a significant reduction in overall test execution time compared to traditional manual methods.
arXiv Detail & Related papers (2025-06-11T18:49:57Z) - Training Language Models to Generate Quality Code with Program Analysis Feedback [66.0854002147103]
Code generation with large language models (LLMs) is increasingly adopted in production but fails to ensure code quality.
We propose REAL, a reinforcement learning framework that incentivizes LLMs to generate production-quality code.
arXiv Detail & Related papers (2025-05-28T17:57:47Z) - CANTXSec: A Deterministic Intrusion Detection and Prevention System for CAN Bus Monitoring ECU Activations [53.036288487863786]
We propose CANTXSec, the first deterministic Intrusion Detection and Prevention system based on physical ECU activations.
It detects and prevents classical attacks on the CAN bus, while detecting advanced attacks that have been less investigated in the literature.
We prove the effectiveness of our solution on a physical testbed, where we achieve 100% detection accuracy in both classes of attacks while preventing 100% of FIAs.
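As a rough illustration of what a deterministic, activation-based rule can look like (a sketch under simplifying assumptions, not CANTXSec's actual design), one can keep a whitelist mapping each CAN ID to the single ECU allowed to transmit it and flag any frame whose observed transmitter deviates; all names and IDs below are hypothetical.

```python
from dataclasses import dataclass

# Illustrative whitelist: CAN arbitration ID -> ECU allowed to transmit it.
ALLOWED_SENDER = {
    0x0C0: "engine_ecu",
    0x1A0: "brake_ecu",
    0x2B0: "body_ecu",
}

@dataclass
class Frame:
    can_id: int
    sender: str      # ECU identified at the physical layer (assumed available)
    payload: bytes

def check_frame(frame: Frame) -> str:
    """Deterministic rule: a frame is an intrusion iff the observed
    transmitter differs from the whitelisted ECU for that CAN ID."""
    expected = ALLOWED_SENDER.get(frame.can_id)
    if expected is None:
        return "alert: unknown CAN ID"
    if frame.sender != expected:
        return f"alert: {frame.sender} spoofing ID {frame.can_id:#x}"
    return "ok"

print(check_frame(Frame(0x0C0, "engine_ecu", b"\x01\x02")))   # ok
print(check_frame(Frame(0x0C0, "body_ecu", b"\xff\xff")))     # spoofing alert
```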
arXiv Detail & Related papers (2025-05-14T13:37:07Z) - AI-Based Vulnerability Analysis of NFT Smart Contracts [6.378351117969227]
This study proposes an AI-driven approach to detect vulnerabilities in NFT smart contracts.
We collected 16,527 public smart contract codes, classifying them into five vulnerability categories: Risky Mutable Proxy, ERC-721 Reentrancy, Unlimited Minting, Missing Requirements, and Public Burn.
A random forest model was implemented to improve robustness through random data/feature sampling and multitree integration.
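A minimal sketch of the described classification setup, assuming contracts are represented by simple token features and labeled with one of the five vulnerability categories; the snippets, features, and pipeline below are placeholders rather than the study's actual implementation.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Placeholder training data: contract snippets paired with vulnerability labels.
contracts = [
    "function mint(address to) public { _mint(to, nextId++); }",
    "function burn(uint256 id) public { _burn(id); }",
    "delegatecall upgradeTo newImplementation",
    "onERC721Received callback before state update",
]
labels = ["Unlimited Minting", "Public Burn", "Risky Mutable Proxy", "ERC-721 Reentrancy"]

# Random forest over TF-IDF token features; bootstrap sampling of rows and
# random feature subsets per split provide the "random data/feature sampling"
# and "multitree integration" mentioned in the summary.
model = make_pipeline(
    TfidfVectorizer(token_pattern=r"[A-Za-z_]+"),
    RandomForestClassifier(n_estimators=100, random_state=0),
)
model.fit(contracts, labels)

print(model.predict(["function mint(address to) public { _mint(to, nextId++); }"]))
```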
arXiv Detail & Related papers (2025-04-18T08:55:31Z) - On the Mistaken Assumption of Interchangeable Deep Reinforcement Learning Implementations [53.0667196725616]
Deep Reinforcement Learning (DRL) is a paradigm of artificial intelligence where an agent uses a neural network to learn which actions to take in a given environment.
DRL has recently gained traction for its ability to solve complex environments like driving simulators, 3D robotic control, and multiplayer-online-battle-arena video games.
Numerous implementations of the state-of-the-art algorithms responsible for training these agents, like the Deep Q-Network (DQN) and Proximal Policy Optimization (PPO) algorithms, currently exist.
arXiv Detail & Related papers (2025-03-28T16:25:06Z) - Interactive Agents to Overcome Ambiguity in Software Engineering [61.40183840499932]
AI agents are increasingly being deployed to automate tasks, often based on ambiguous and underspecified user instructions.
Making unwarranted assumptions and failing to ask clarifying questions can lead to suboptimal outcomes.
We study the ability of LLM agents to handle ambiguous instructions in interactive code generation settings by evaluating the performance of proprietary and open-weight models.
arXiv Detail & Related papers (2025-02-18T17:12:26Z) - A Match Made in Heaven? Matching Test Cases and Vulnerabilities With the VUTECO Approach [4.8556535196652195]
This paper introduces VUTECO, a deep learning-based approach for collecting instances of vulnerability-witnessing tests from Java repositories.
VUTECO successfully addresses the Finding task, achieving perfect precision and 0.83 F0.5 score on validated test cases in VUL4J.
Despite showing sufficiently good performance for the Matching task, VUTECO failed to retrieve any valid match in the wild.
arXiv Detail & Related papers (2025-02-05T17:02:42Z) - Automatically Adaptive Conformal Risk Control [49.95190019041905]
We propose a methodology for achieving approximate conditional control of statistical risks by adapting to the difficulty of test samples.
Our framework goes beyond traditional conditional risk control based on user-provided conditioning events to the algorithmic, data-driven determination of appropriate function classes for conditioning.
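For background only, the sketch below shows the basic split-conformal calibration step that conformal risk control methods build on; it provides marginal (not conditional) coverage and is not the paper's adaptive, data-driven conditioning method. Scores and numbers are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder calibration set: nonconformity scores, e.g. |y - model(x)|.
calibration_scores = np.abs(rng.normal(size=500))

alpha = 0.1  # target miscoverage level
n = len(calibration_scores)

# Split-conformal quantile: the ceil((n+1)(1-alpha))/n empirical quantile of
# the calibration scores gives a threshold with marginal coverage >= 1-alpha.
level = np.ceil((n + 1) * (1 - alpha)) / n
q_hat = np.quantile(calibration_scores, min(level, 1.0), method="higher")

# Prediction set for a new point: all y with |y - prediction| <= q_hat.
prediction = 2.3  # placeholder model output
interval = (prediction - q_hat, prediction + q_hat)
print(f"threshold={q_hat:.3f}, 90% prediction interval={interval}")
```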
arXiv Detail & Related papers (2024-06-25T08:29:32Z) - MOTIF: A tool for Mutation Testing with Fuzzing [3.4742750855568763]
Mutation testing is a desired practice for embedded software running in safety-critical cyber-physical systems.
MOTIF overcomes the limitations of existing approaches by leveraging grey-box fuzzing tools to generate unit test cases in C that detect injected faults in mutants.
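The underlying idea can be sketched as a differential harness: a fuzzer searches for an input on which a mutant's output diverges from the original function, and that input becomes a fault-detecting unit test. MOTIF itself drives grey-box fuzzers to generate C test cases; the random-input Python sketch below, with hypothetical functions, is only an illustration.

```python
import random

def original(x: int, y: int) -> int:
    """Function under test: clamps x to the range [0, y]."""
    return max(0, min(x, y))

def mutant(x: int, y: int) -> int:
    """Injected fault: lower clamp boundary mutated from 0 to 1."""
    return max(1, min(x, y))

def fuzz_for_killing_input(trials: int = 10_000):
    """Stand-in for a grey-box fuzzer: search for an input where outputs diverge."""
    for _ in range(trials):
        x, y = random.randint(-10, 10), random.randint(-10, 10)
        if original(x, y) != mutant(x, y):
            return (x, y)  # this input becomes the generated unit test
    return None  # mutant survived the fuzzing budget

killing = fuzz_for_killing_input()
print(f"mutant killed by input {killing}" if killing else "mutant not killed")
```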
arXiv Detail & Related papers (2024-06-04T15:12:01Z) - Unsupervised Continual Anomaly Detection with Contrastively-learned Prompt [80.43623986759691]
We introduce a novel Unsupervised Continual Anomaly Detection framework called UCAD.
The framework equips the UAD with continual learning capability through contrastively-learned prompts.
We conduct comprehensive experiments and set the benchmark on unsupervised continual anomaly detection and segmentation.
arXiv Detail & Related papers (2024-01-02T03:37:11Z) - Test Generation Strategies for Building Failure Models and Explaining Spurious Failures [4.995172162560306]
Test inputs fail not only when the system under test is faulty but also when the inputs are invalid or unrealistic.
We propose to build failure models for inferring interpretable rules on test inputs that cause spurious failures.
We show that our proposed surrogate-assisted approach generates failure models with an average accuracy of 83%.
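A minimal sketch of the failure-model idea, assuming each failing test input is described by a few numeric features and labeled as either a spurious failure (invalid or unrealistic input) or a genuine one, so that a shallow decision tree yields interpretable rules; the features and data are invented for illustration, and this is not the paper's surrogate-assisted pipeline.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Placeholder failing test inputs for a hypothetical driving-simulator system:
# [road_curvature, fog_density, ego_speed_kmh]
X = [
    [0.9, 0.1, 40],   # extreme curvature -> unrealistic road, spurious failure
    [0.8, 0.2, 60],
    [0.2, 0.95, 50],  # near-opaque fog -> invalid scenario, spurious failure
    [0.3, 0.9, 30],
    [0.1, 0.1, 120],  # realistic input, genuine failure of the system
    [0.2, 0.2, 110],
]
y = ["spurious", "spurious", "spurious", "spurious", "genuine", "genuine"]

# A shallow tree keeps the learned failure model interpretable.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Human-readable rules describing which input regions cause spurious failures.
print(export_text(tree, feature_names=["road_curvature", "fog_density", "ego_speed_kmh"]))
```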
arXiv Detail & Related papers (2023-12-09T18:36:15Z) - DARTH: Holistic Test-time Adaptation for Multiple Object Tracking [87.72019733473562]
Multiple object tracking (MOT) is a fundamental component of perception systems for autonomous driving.
Despite the importance of safety in driving systems, no solution to the problem of adapting MOT to domain shift under test-time conditions has previously been proposed.
We introduce DARTH, a holistic test-time adaptation framework for MOT.
arXiv Detail & Related papers (2023-10-03T10:10:42Z) - Verifying Quantized Neural Networks using SMT-Based Model Checking [2.38142799291692]
We develop and evaluate a symbolic verification framework using incremental model checking (IMC) and satisfiability modulo theories (SMT).
We can provide guarantees on the safe behavior of ANNs implemented both in floating-point and fixed-point arithmetic.
For small- to medium-sized ANNs, our approach completes most of its verification runs in minutes.
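As a toy illustration of the SMT encoding step (not the paper's incremental model checking framework), the sketch below uses the Z3 Python bindings to check an output-bound property of a tiny fixed-weight ReLU network over real arithmetic; a quantized network would instead be encoded with fixed-point or bit-vector arithmetic, and all weights and bounds here are made up.

```python
from z3 import Reals, If, Solver, And, sat

def relu(v):
    return If(v > 0, v, 0)

x1, x2 = Reals("x1 x2")

# Tiny fixed-weight network: two ReLU hidden units, one linear output.
h1 = relu(0.5 * x1 - 0.2 * x2 + 0.1)
h2 = relu(-0.3 * x1 + 0.8 * x2)
out = 1.0 * h1 + 1.0 * h2

s = Solver()
s.add(And(0 <= x1, x1 <= 1, 0 <= x2, x2 <= 1))  # input domain
s.add(out > 1.5)                                 # negation of the property out <= 1.5

if s.check() == sat:
    print("counterexample:", s.model())          # property violated
else:
    print("verified: output stays <= 1.5 on [0,1]^2")
```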
arXiv Detail & Related papers (2021-06-10T18:27:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.