A SCADE Model Verification Method Based on B-Model Transformation
- URL: http://arxiv.org/abs/2505.00967v1
- Date: Fri, 02 May 2025 03:05:09 GMT
- Title: A SCADE Model Verification Method Based on B-Model Transformation
- Authors: Xili Hou, Keming Wang, Huibing Zhao, Ruiyin Shi
- Abstract summary: This study proposes a formal verification framework based on the B-Method. It successfully verifies abstract specifications that are difficult to model directly in SCADE. This study provides a cross-model verification paradigm for embedded control systems in avionics, rail transportation, and other domains.
- Score: 0.8437187555622164
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Due to the limitations of SCADE models in expressing and verifying abstract specifications in safety-critical systems, this study proposes a formal verification framework based on the B-Method. By establishing a semantic equivalence transformation mechanism from SCADE models to B models, a hierarchical mapping rule set is constructed, covering type systems, control flow structures, and state machines. This effectively addresses key technical challenges such as proving loop-equivalent transformations for higher-order operators and modeling temporal logic storage structures. The proposed method leverages the abstraction capabilities of the B-Method in set theory and first-order logic, overcoming the limitations of SCADE's native verification tools in describing complex specifications. It successfully verifies abstract specifications that are difficult to model directly in SCADE. Experimental results show that the transformed B models achieve a higher defect detection rate and improved verification efficiency in the ProB verification environment compared with SCADE's native verifier, significantly enhancing the formal verification capability of safety-critical systems. This study provides a cross-model verification paradigm for embedded control systems in avionics, rail transportation, and other domains, demonstrating substantial engineering application value.
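To make the mapping idea concrete, the sketch below shows one possible shape such a rule could take: a small Python fragment (hypothetical names and rules, not the paper's actual tool or rule set) that turns a SCADE counter node built from the `->` and `pre` operators into a B abstract machine whose state variable plays the role of the storage that `pre` requires. The emitted text is the kind of artifact that could then be model checked in ProB.

```python
# Minimal illustrative sketch (not the authors' tool): one hypothetical
# SCADE-to-B mapping rule, translating a SCADE counter node built from the
# temporal operators `->` (initialisation) and `pre` (previous value) into a
# B abstract machine whose state variable models the storage `pre` requires.

# Hypothetical type-mapping fragment of a hierarchical rule set.
SCADE_TO_B_TYPES = {
    "int": "INTEGER",  # SCADE int  -> B mathematical integers (INT is a bounded alternative)
    "bool": "BOOL",    # SCADE bool -> B BOOL
}


def counter_node_to_b(machine_name: str, out_var: str, scade_type: str) -> str:
    """Emit B AMN text for the SCADE flow `out = 0 -> pre(out) + 1`."""
    b_type = SCADE_TO_B_TYPES[scade_type]
    return (
        f"MACHINE {machine_name}\n"
        f"VARIABLES {out_var}\n"
        f"INVARIANT {out_var} : {b_type} & {out_var} >= 0\n"
        f"INITIALISATION {out_var} := 0\n"
        "OPERATIONS\n"
        f"  step = BEGIN {out_var} := {out_var} + 1 END\n"
        "END\n"
    )


if __name__ == "__main__":
    # The emitted machine text could be loaded into ProB for model checking
    # and invariant verification.
    print(counter_node_to_b("Counter", "cnt", "int"))
```

Under this reading, the B state variable stands in for a temporal storage structure, and the `INVARIANT` clause is where abstract specifications that are hard to state natively in SCADE could be expressed in set theory and first-order logic.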
Related papers
- CTRLS: Chain-of-Thought Reasoning via Latent State-Transition [57.51370433303236]
Chain-of-thought (CoT) reasoning enables large language models to break down complex problems into interpretable intermediate steps. We introduce CTRLS, a framework that formulates CoT reasoning as a Markov decision process (MDP) with latent state transitions. We show improvements in reasoning accuracy, diversity, and exploration efficiency across benchmark reasoning tasks.
arXiv Detail & Related papers (2025-07-10T21:32:18Z) - Lightweight Task-Oriented Semantic Communication Empowered by Large-Scale AI Models [66.57755931421285]
Large-scale artificial intelligence (LAI) models pose significant challenges for real-time communication scenarios. This paper proposes utilizing knowledge distillation (KD) techniques to extract and condense knowledge from LAI models. We propose a fast distillation method featuring a pre-stored compression mechanism that eliminates the need for repetitive inference.
arXiv Detail & Related papers (2025-06-16T08:42:16Z) - Score-based Generative Modeling for Conditional Independence Testing [35.0533359302886]
We propose a novel CI testing method via score-based generative modeling, which achieves precise Type I error control and strong testing power. We theoretically establish the error bound of conditional distributions modeled by score-based generative models and prove the validity of our CI tests.
arXiv Detail & Related papers (2025-05-29T10:10:46Z) - Model Hemorrhage and the Robustness Limits of Large Language Models [119.46442117681147]
Large language models (LLMs) demonstrate strong performance across natural language processing tasks, yet undergo significant performance degradation when modified for deployment. We define this phenomenon as model hemorrhage - performance decline caused by parameter alterations and architectural changes.
arXiv Detail & Related papers (2025-03-31T10:16:03Z) - Scoring Verifiers: Evaluating Synthetic Verification for Code and Reasoning [59.25951947621526]
We propose an approach which can transform existing coding benchmarks into scoring and ranking datasets to evaluate the effectiveness of synthetic verifiers. We release four new benchmarks (HE-R, HE-R+, MBPP-R, and MBPP-R+), and analyze synthetic verification methods with standard, reasoning-based, and reward-based LLMs. Our experiments show that reasoning can significantly improve test case generation and that scaling the number of test cases enhances verification accuracy.
arXiv Detail & Related papers (2025-02-19T15:32:11Z) - SMRS: advocating a unified reporting standard for surrogate models in the artificial intelligence era [1.4835379864550937]
We argue for the urgent need to establish a structured reporting standard for surrogate models. By promoting a standardised yet flexible framework, we aim to improve the reliability of surrogate modelling.
arXiv Detail & Related papers (2025-02-10T18:31:15Z) - A Framework for Strategic Discovery of Credible Neural Network Surrogate Models under Uncertainty [0.0]
This study presents the Occam Plausibility Algorithm for surrogate models (OPAL-surrogate).
OPAL-surrogate provides a systematic framework to uncover predictive neural network-based surrogate models.
It balances the trade-off between model complexity, accuracy, and prediction uncertainty.
arXiv Detail & Related papers (2024-03-13T18:45:51Z) - OSM: Leveraging Model Checking for Observing Dynamic Behaviors in
Aspect-Oriented Applications [0.0]
The observe-based statistical model-checking (OSM) framework is devised to craft executable formal models directly from foundational system code.
This ensures the unyielding performance of electronic health record systems amidst shifting preconditions.
arXiv Detail & Related papers (2024-03-03T00:03:34Z) - When to Update Your Model: Constrained Model-based Reinforcement
Learning [50.74369835934703]
We propose a novel and general theoretical scheme for a non-decreasing performance guarantee of model-based RL (MBRL).
Our follow-up derived bounds reveal the relationship between model shifts and performance improvement.
A further example demonstrates that learning models from a dynamically-varying number of explorations benefit the eventual returns.
arXiv Detail & Related papers (2022-10-15T17:57:43Z) - Joint Differentiable Optimization and Verification for Certified
Reinforcement Learning [91.93635157885055]
In model-based reinforcement learning for safety-critical control systems, it is important to formally certify system properties.
We propose a framework that jointly conducts reinforcement learning and formal verification.
arXiv Detail & Related papers (2022-01-28T16:53:56Z) - Lifted Model Checking for Relational MDPs [12.574454799055026]
pCTL-REBEL is a lifted model checking approach for verifying pCTL properties on relational MDPs.
We show that the pCTL model checking approach is decidable for relational MDPs even for possibly infinite domains.
arXiv Detail & Related papers (2021-06-22T13:12:36Z) - Model-Based Counterfactual Synthesizer for Interpretation [40.01787107375103]
We propose a Model-based Counterfactual Synthesizer (MCS) framework for interpreting machine learning models.
We first analyze the model-based counterfactual process and construct a base synthesizer using a conditional generative adversarial net (CGAN).
To better approximate the counterfactual universe for those rare queries, we employ the umbrella sampling technique to train the MCS framework.
arXiv Detail & Related papers (2021-06-16T17:09:57Z) - Control as Hybrid Inference [62.997667081978825]
We present an implementation of CHI which naturally mediates the balance between iterative and amortised inference.
We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines.
arXiv Detail & Related papers (2020-07-11T19:44:09Z)