Towards Auto-Modeling of Formal Verification for NextG Protocols: A
Multimodal cross- and self-attention Large Language Model Approach
- URL: http://arxiv.org/abs/2312.17353v2
- Date: Tue, 2 Jan 2024 00:47:59 GMT
- Authors: Jingda Yang and Ying Wang
- Abstract summary: This paper introduces Auto-modeling of Formal Verification with Real-world Prompting for 5G and NextG protocols (AVRE).
AVRE is a novel system designed for the formal verification of Next Generation (NextG) communication protocols.
- Score: 3.9155346446573502
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces Auto-modeling of Formal Verification with Real-world
Prompting for 5G and NextG protocols (AVRE), a novel system designed for the
formal verification of Next Generation (NextG) communication protocols,
addressing the increasing complexity and scalability challenges in network
protocol design and verification. Utilizing Large Language Models (LLMs), AVRE
transforms protocol descriptions into dependency graphs and formal models,
efficiently resolving ambiguities and capturing design intent. The system
integrates a transformer model with LLMs to autonomously establish quantifiable
dependency relationships through cross- and self-attention mechanisms. Enhanced
by iterative feedback from the HyFuzz experimental platform, AVRE significantly
advances the accuracy and relevance of formal verification in complex
communication protocols, offering a groundbreaking approach to validating
sophisticated communication systems. We compare the performance of CAL, the
system's cross- and self-attention transformer model, with state-of-the-art
LLM-based models and traditional time-sequence models, demonstrating its
superiority in accuracy and robustness: it achieves an accuracy of 95.94% and
an AUC of 0.98. This NLP-based approach enables, for the first
time, the creation of exploits directly from design documents, making
remarkable progress in scalable system verification and validation.
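The cross- and self-attention mechanism the abstract refers to can be sketched as scaled dot-product attention over token embeddings. This is an illustrative NumPy sketch, not AVRE's actual implementation; the passage embeddings and dimensions are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(queries, keys, values):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)   # pairwise relevance scores
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ values, weights

rng = np.random.default_rng(0)
proto_a = rng.normal(size=(4, 8))  # embeddings of one protocol passage
proto_b = rng.normal(size=(6, 8))  # embeddings of another passage

# Self-attention: a passage attends to its own tokens.
_, self_w = attention(proto_a, proto_a, proto_a)
# Cross-attention: passage A attends to passage B; the weight matrix
# can be read as quantified dependency strengths between the two.
_, cross_w = attention(proto_a, proto_b, proto_b)
print(self_w.shape, cross_w.shape)  # (4, 4) (4, 6)
```

In a system like AVRE, such attention weight matrices would be the raw material from which quantifiable dependency relationships between protocol statements are extracted.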
Related papers
- Let's Fuse Step by Step: A Generative Fusion Decoding Algorithm with LLMs for Multi-modal Text Recognition [13.759053227199106]
"Generative Fusion Decoding" (GFD) is a novel shallow fusion framework utilized to integrate Large Language Models (LLMs) into multi-modal text recognition systems.
We derive the formulas necessary to enable GFD to operate across mismatched token spaces of different models.
GFD significantly improves performance in ASR and OCR tasks, with ASR reaching state-of-the-art in the NTUML 2021 benchmark.
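Shallow fusion, the family GFD belongs to, ranks recognition hypotheses by a weighted sum of the recognizer's and the LLM's log-probabilities. The sketch below shows the generic scheme only; GFD's contribution (operating across mismatched token spaces) is not reproduced here, and all scores are hypothetical.

```python
def shallow_fusion_score(asr_logprobs, lm_logprobs, lam=0.3):
    """Generic shallow fusion: score each hypothesis y as
    log p_asr(y|x) + lam * log p_lm(y)."""
    return [a + lam * l for a, l in zip(asr_logprobs, lm_logprobs)]

# Three candidate transcriptions with made-up log-probabilities.
asr = [-1.2, -1.5, -2.0]
lm  = [-3.0, -0.5, -1.0]
fused = shallow_fusion_score(asr, lm)
best = max(range(len(fused)), key=fused.__getitem__)
print(best)  # 1 — the LM's preference overturns the raw ASR ranking
```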
arXiv Detail & Related papers (2024-05-23T07:39:42Z) - Adaptive Semantic Token Selection for AI-native Goal-oriented Communications [11.92172357956248]
We propose a novel design for AI-native goal-oriented communications.
We exploit transformer neural networks under dynamic inference constraints on bandwidth and computation.
We show that our model improves over state-of-the-art token selection mechanisms.
arXiv Detail & Related papers (2024-04-25T13:49:50Z) - OSM: Leveraging Model Checking for Observing Dynamic Behaviors in Aspect-Oriented Applications [0.0]
The observe-based statistical model-checking (OSM) framework is devised to craft executable formal models directly from foundational system code.
This ensures reliable performance of electronic health record systems amid shifting preconditions.
arXiv Detail & Related papers (2024-03-03T00:03:34Z) - Entropy-Regularized Token-Level Policy Optimization for Language Agent Reinforcement [67.1393112206885]
Large Language Models (LLMs) have shown promise as intelligent agents in interactive decision-making tasks.
We introduce Entropy-Regularized Token-level Policy Optimization (ETPO), an entropy-augmented RL method tailored for optimizing LLMs at the token level.
We assess the effectiveness of ETPO within a simulated environment that models data science code generation as a series of multi-step interactive tasks.
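The entropy-augmented objective such methods optimize can be written schematically as the generic soft-RL form (not necessarily ETPO's exact per-token decomposition):

```latex
J(\pi) = \mathbb{E}_{\pi}\left[\sum_{t} r(s_t, a_t)
       + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big)\right]
```

where each action $a_t$ is a single generated token, $r$ is the task reward, and $\alpha$ weights the entropy bonus that discourages the LLM's policy from collapsing onto a narrow set of outputs.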
arXiv Detail & Related papers (2024-02-09T07:45:26Z) - Precision-Recall Divergence Optimization for Generative Modeling with
GANs and Normalizing Flows [54.050498411883495]
We develop a novel training method for generative models, such as Generative Adversarial Networks and Normalizing Flows.
We show that achieving a specified precision-recall trade-off corresponds to minimizing a unique $f$-divergence from a family we call the PR-divergences.
Our approach improves the performance of existing state-of-the-art models like BigGAN in terms of either precision or recall when tested on datasets such as ImageNet.
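For context, an $f$-divergence between distributions $P$ and $Q$ (with densities $p$, $q$) takes the standard form

```latex
D_f(P \,\|\, Q) = \mathbb{E}_{x \sim Q}\!\left[ f\!\left(\frac{p(x)}{q(x)}\right) \right],
\qquad f \text{ convex},\; f(1) = 0
```

The claim is that each point on the precision-recall trade-off curve corresponds to minimizing one member of this family; the specific generators $f$ defining the PR-divergences are given in the paper.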
arXiv Detail & Related papers (2023-05-30T10:07:17Z) - An Efficient Deep Learning Model for Automatic Modulation Recognition
Based on Parameter Estimation and Transformation [3.3941243094128035]
This letter proposes an efficient DL-AMR model based on phase parameter estimation and transformation.
Our model is more competitive in training time and test time than the benchmark models with similar recognition accuracy.
arXiv Detail & Related papers (2021-10-11T03:28:28Z) - Momentum Pseudo-Labeling for Semi-Supervised Speech Recognition [55.362258027878966]
We present momentum pseudo-labeling (MPL) as a simple yet effective strategy for semi-supervised speech recognition.
MPL consists of a pair of online and offline models that interact and learn from each other, inspired by the mean teacher method.
The experimental results demonstrate that MPL effectively improves over the base model and is scalable to different semi-supervised scenarios.
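The mean-teacher-style interaction between the online and offline models boils down to the offline model tracking an exponential moving average (EMA) of the online model's weights. A minimal sketch, with hypothetical weight lists in place of real network parameters:

```python
def momentum_update(offline_weights, online_weights, alpha=0.9):
    """Mean-teacher-style EMA: the offline (teacher) model drifts
    slowly toward the online (student) model's weights."""
    return [alpha * w_off + (1 - alpha) * w_on
            for w_off, w_on in zip(offline_weights, online_weights)]

off = [0.0, 1.0]   # offline model weights
on  = [1.0, 1.0]   # online model weights after a training step
off = momentum_update(off, on, alpha=0.9)
print(off)  # offline moved 10% of the way toward the online weights
```

The smoothed offline model then produces the pseudo-labels used to train the online model on unlabeled speech, which is what makes the loop stable.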
arXiv Detail & Related papers (2021-06-16T16:24:55Z) - Coarse-to-Fine Memory Matching for Joint Retrieval and Classification [0.7081604594416339]
We present a novel end-to-end language model for joint retrieval and classification.
We evaluate it on the standard blind test set of the FEVER fact verification dataset.
We extend exemplar auditing to this setting for analyzing and constraining the model.
arXiv Detail & Related papers (2020-11-29T05:06:03Z) - Automated and Formal Synthesis of Neural Barrier Certificates for
Dynamical Models [70.70479436076238]
We introduce an automated, formal, counterexample-based approach to synthesise Barrier Certificates (BC).
The approach is underpinned by an inductive framework, which manipulates a candidate BC structured as a neural network, and a sound verifier, which either certifies the candidate's validity or generates counter-examples.
The outcomes show that we can synthesise sound BCs up to two orders of magnitude faster, with in particular a stark speedup on the verification engine.
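A barrier certificate $B$ for a system $\dot{x} = f(x)$ with initial set $X_0$ and unsafe set $X_u$ typically has to satisfy conditions of the following shape (sign conventions vary across papers; this is one common formulation, not necessarily the paper's exact one):

```latex
B(x) \le 0 \;\; \forall x \in X_0, \qquad
B(x) > 0 \;\; \forall x \in X_u, \qquad
\nabla B(x) \cdot f(x) \le 0 \;\; \text{on } \{x : B(x) = 0\}
```

In the inductive loop described above, the neural network candidate plays the role of $B$: the verifier checks these conditions and, on failure, returns counterexample states that are fed back into training.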
arXiv Detail & Related papers (2020-07-07T07:39:42Z) - Few-shot Natural Language Generation for Task-Oriented Dialog [113.07438787659859]
We present FewShotWoz, the first NLG benchmark to simulate the few-shot learning setting in task-oriented dialog systems.
We develop the SC-GPT model, which is pre-trained on a large set of annotated NLG corpus to acquire the controllable generation ability.
Experiments on FewShotWoz and the large Multi-Domain-WOZ datasets show that the proposed SC-GPT significantly outperforms existing methods.
arXiv Detail & Related papers (2020-02-27T18:48:33Z) - Joint Contextual Modeling for ASR Correction and Language Understanding [60.230013453699975]
We propose multi-task neural approaches to perform contextual language correction on ASR outputs jointly with language understanding (LU).
We show that the error rates of off-the-shelf ASR and downstream LU systems can be reduced significantly, by 14% relative, with joint models trained on small amounts of in-domain data.
arXiv Detail & Related papers (2020-01-28T22:09:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.