Towards Auto-Modeling of Formal Verification for NextG Protocols: A
Multimodal cross- and self-attention Large Language Model Approach
- URL: http://arxiv.org/abs/2312.17353v2
- Date: Tue, 2 Jan 2024 00:47:59 GMT
- Title: Towards Auto-Modeling of Formal Verification for NextG Protocols: A
Multimodal cross- and self-attention Large Language Model Approach
- Authors: Jingda Yang and Ying Wang
- Abstract summary: This paper introduces Auto-modeling of Formal Verification with Real-world Prompting for 5G and NextG protocols (AVRE).
AVRE is a novel system designed for the formal verification of Next Generation (NextG) communication protocols.
- Score: 3.9155346446573502
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces Auto-modeling of Formal Verification with Real-world
Prompting for 5G and NextG protocols (AVRE), a novel system designed for the
formal verification of Next Generation (NextG) communication protocols,
addressing the increasing complexity and scalability challenges in network
protocol design and verification. Utilizing Large Language Models (LLMs), AVRE
transforms protocol descriptions into dependency graphs and formal models,
efficiently resolving ambiguities and capturing design intent. The system
integrates a transformer model with LLMs to autonomously establish quantifiable
dependency relationships through cross- and self-attention mechanisms. Enhanced
by iterative feedback from the HyFuzz experimental platform, AVRE significantly
advances the accuracy and relevance of formal verification in complex
communication protocols, offering a groundbreaking approach to validating
sophisticated communication systems. We compare CAL's performance with
state-of-the-art LLM-based models and traditional time sequence models,
demonstrating its superiority in accuracy and robustness, achieving an accuracy
of 95.94% and an AUC of 0.98. This NLP-based approach enables, for the first
time, the creation of exploits directly from design documents, making
remarkable progress in scalable system verification and validation.
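The abstract says the system derives quantifiable dependency relationships through cross- and self-attention, but does not specify the architecture. As a minimal sketch, assuming standard scaled dot-product attention and purely hypothetical toy embeddings (`text_emb` for protocol-description tokens, `field_emb` for candidate protocol fields), the cross-attention weight matrix can be read directly as a field-to-token dependency score:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = softmax(Q @ K.T / np.sqrt(d))  # one weight per query/key pair
    return scores @ V, scores

rng = np.random.default_rng(0)
d = 8
text_emb = rng.normal(size=(5, d))   # toy embeddings of 5 description tokens
field_emb = rng.normal(size=(3, d))  # toy embeddings of 3 protocol fields

# Self-attention: each description token attends to every other token.
self_out, self_scores = attention(text_emb, text_emb, text_emb)

# Cross-attention: fields attend to the description; the attention
# weights form a quantifiable field-to-token dependency matrix.
cross_out, dep_matrix = attention(field_emb, text_emb, text_emb)

print(dep_matrix.shape)        # (3, 5): one dependency weight per field/token pair
print(dep_matrix.sum(axis=1))  # each row is a probability distribution over tokens
```

This only illustrates how attention weights yield graded dependency scores; how AVRE trains, thresholds, or combines these weights into dependency graphs is described in the paper itself.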
Related papers
- Matchmaker: Self-Improving Large Language Model Programs for Schema Matching [60.23571456538149]
We propose a compositional language model program for schema matching, comprised of candidate generation, refinement and confidence scoring.
Matchmaker self-improves in a zero-shot manner without the need for labeled demonstrations.
Empirically, we demonstrate on real-world medical schema matching benchmarks that Matchmaker outperforms previous ML-based approaches.
arXiv Detail & Related papers (2024-10-31T16:34:03Z)
- Semantic Communication for Cooperative Perception using HARQ [51.148203799109304]
We leverage an importance map to distill critical semantic information, introducing a cooperative perception semantic communication framework.
To counter the challenges posed by time-varying multipath fading, our approach incorporates orthogonal frequency-division multiplexing (OFDM) along with channel estimation and equalization strategies.
We introduce a novel semantic error detection method that is integrated with our semantic communication framework in the spirit of hybrid automatic repeat request (HARQ).
arXiv Detail & Related papers (2024-08-29T08:53:26Z)
- Let's Fuse Step by Step: A Generative Fusion Decoding Algorithm with LLMs for Multi-modal Text Recognition [13.759053227199106]
"Generative Fusion Decoding" (GFD) is a novel shallow fusion framework utilized to integrate Large Language Models (LLMs) into multi-modal text recognition systems.
We derive the formulas necessary to enable GFD to operate across mismatched token spaces of different models.
GFD significantly improves performance in ASR and OCR tasks, with ASR reaching state-of-the-art performance on the NTUML 2021 benchmark.
arXiv Detail & Related papers (2024-05-23T07:39:42Z)
- Adaptive Semantic Token Selection for AI-native Goal-oriented Communications [11.92172357956248]
We propose a novel design for AI-native goal-oriented communications.
We exploit transformer neural networks under dynamic inference constraints on bandwidth and computation.
We show that our model improves over state-of-the-art token selection mechanisms.
arXiv Detail & Related papers (2024-04-25T13:49:50Z)
- OSM: Leveraging Model Checking for Observing Dynamic Behaviors in Aspect-Oriented Applications [0.0]
An observe-based statistical model-checking (OSM) framework is devised to craft executable formal models directly from foundational system code.
This ensures the unyielding performance of electronic health record systems amidst shifting preconditions.
arXiv Detail & Related papers (2024-03-03T00:03:34Z)
- FLIP: Fine-grained Alignment between ID-based Models and Pretrained Language Models for CTR Prediction [49.510163437116645]
Click-through rate (CTR) prediction serves as a core function module in personalized online services.
Traditional ID-based models for CTR prediction take as input the one-hot encoded ID features of tabular modality.
Pretrained Language Models (PLMs) have given rise to another paradigm, which takes as input the sentences of textual modality.
We propose to conduct Fine-grained feature-level ALignment between ID-based Models and Pretrained Language Models (FLIP) for CTR prediction.
arXiv Detail & Related papers (2023-10-30T11:25:03Z)
- Precision-Recall Divergence Optimization for Generative Modeling with GANs and Normalizing Flows [54.050498411883495]
We develop a novel training method for generative models, such as Generative Adversarial Networks and Normalizing Flows.
We show that achieving a specified precision-recall trade-off corresponds to minimizing a unique $f$-divergence from a family we call the PR-divergences.
Our approach improves the performance of existing state-of-the-art models like BigGAN in terms of either precision or recall when tested on datasets such as ImageNet.
arXiv Detail & Related papers (2023-05-30T10:07:17Z)
- An Efficient Deep Learning Model for Automatic Modulation Recognition Based on Parameter Estimation and Transformation [3.3941243094128035]
This letter proposes an efficient DL-AMR model based on phase parameter estimation and transformation.
Our model is more competitive in training time and test time than the benchmark models with similar recognition accuracy.
arXiv Detail & Related papers (2021-10-11T03:28:28Z)
- Coarse-to-Fine Memory Matching for Joint Retrieval and Classification [0.7081604594416339]
We present a novel end-to-end language model for joint retrieval and classification.
We evaluate it on the standard blind test set of the FEVER fact verification dataset.
We extend exemplar auditing to this setting for analyzing and constraining the model.
arXiv Detail & Related papers (2020-11-29T05:06:03Z)
- Automated and Formal Synthesis of Neural Barrier Certificates for Dynamical Models [70.70479436076238]
We introduce an automated, formal, counterexample-based approach to synthesise Barrier Certificates (BCs).
The approach is underpinned by an inductive framework, which manipulates a candidate BC structured as a neural network, and a sound verifier, which either certifies the candidate's validity or generates counter-examples.
The outcomes show that we can synthesise sound BCs up to two orders of magnitude faster, with a particularly stark speedup on the verification engine.
arXiv Detail & Related papers (2020-07-07T07:39:42Z)
- Joint Contextual Modeling for ASR Correction and Language Understanding [60.230013453699975]
We propose multi-task neural approaches to perform contextual language correction on ASR outputs jointly with language understanding (LU).
We show that the error rates of off-the-shelf ASR and downstream LU systems can be reduced significantly, by 14% relative, with joint models trained using small amounts of in-domain data.
arXiv Detail & Related papers (2020-01-28T22:09:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.