A Scalable Approach to Probabilistic Neuro-Symbolic Robustness Verification
- URL: http://arxiv.org/abs/2502.03274v2
- Date: Tue, 29 Jul 2025 08:42:21 GMT
- Title: A Scalable Approach to Probabilistic Neuro-Symbolic Robustness Verification
- Authors: Vasileios Manginas, Nikolaos Manginas, Edward Stevinson, Sherwin Varghese, Nikos Katzouris, Georgios Paliouras, Alessio Lomuscio
- Abstract summary: We address the problem of formally verifying the robustness of NeSy probabilistic reasoning systems. We show that a decision version of the core computation is $\mathrm{NP}^{\mathrm{PP}}$-complete. We propose the first approach for approximate, relaxation-based verification of probabilistic NeSy systems.
- Score: 14.558484523699748
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neuro-Symbolic Artificial Intelligence (NeSy AI) has emerged as a promising direction for integrating neural learning with symbolic reasoning. Typically, in the probabilistic variant of such systems, a neural network first extracts a set of symbols from sub-symbolic input, which are then used by a symbolic component to reason in a probabilistic manner towards answering a query. In this work, we address the problem of formally verifying the robustness of such NeSy probabilistic reasoning systems, therefore paving the way for their safe deployment in critical domains. We analyze the complexity of solving this problem exactly, and show that a decision version of the core computation is $\mathrm{NP}^{\mathrm{PP}}$-complete. In the face of this result, we propose the first approach for approximate, relaxation-based verification of probabilistic NeSy systems. We demonstrate experimentally on a standard NeSy benchmark that the proposed method scales exponentially better than solver-based solutions and apply our technique to a real-world autonomous driving domain, where we verify a safety property under large input dimensionalities.
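To make the verified computation concrete, below is a minimal sketch (illustrative only, not the paper's implementation) of exact inference in such a pipeline: the query probability is a weighted model count over the symbols extracted by the perception network, and exact enumeration is exponential in the number of symbols, in line with the $\mathrm{NP}^{\mathrm{PP}}$-completeness result.

```python
# Minimal sketch of exact inference in a probabilistic NeSy system.
# The perception network supplies marginal probabilities for n Boolean
# symbols; the symbolic component answers a query phi by weighted model
# counting: P(phi) = sum over assignments w satisfying phi of prod_i p_i(w_i).
from itertools import product

def query_probability(symbol_probs, phi):
    """symbol_probs[i] = P(symbol i is true); phi maps an assignment to bool."""
    total = 0.0
    for assignment in product([False, True], repeat=len(symbol_probs)):
        if phi(assignment):
            weight = 1.0
            for p, value in zip(symbol_probs, assignment):
                weight *= p if value else (1.0 - p)
            total += weight
    return total

# Toy query: "at least one of the two extracted symbols is true".
p = [0.9, 0.2]  # e.g. softmax outputs of the perception network
print(query_probability(p, lambda w: w[0] or w[1]))  # 1 - 0.1*0.8 = 0.92
```

Robustness verification then asks, roughly, whether this query probability stays on the correct side of a decision threshold for every input in a perturbation region, which is the computation the proposed relaxation approximates at scale.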
Related papers
- Probabilistic Bisimulation for Parameterized Anonymity and Uniformity Verification [5.806034991979994]
Bisimulation is crucial for verifying process equivalence in probabilistic systems. This paper presents a novel framework for analyzing bisimulation in infinite families of finite-state probabilistic systems. We show that essential properties like anonymity and uniformity can be encoded and verified within this framework.
arXiv Detail & Related papers (2025-05-15T04:56:53Z) - On the Promise for Assurance of Differentiable Neurosymbolic Reasoning Paradigms [9.071347361654931]
We assess the assurance of end-to-end fully differentiable neurosymbolic systems, an emerging approach to building data-efficient models.
We find that end-to-end neurosymbolic methods present unique opportunities for assurance beyond their data efficiency.
arXiv Detail & Related papers (2025-02-13T03:29:42Z) - Compositional Generalization Across Distributional Shifts with Sparse Tree Operations [77.5742801509364]
We introduce a unified neurosymbolic architecture called the Differentiable Tree Machine. We significantly increase the model's efficiency through the use of sparse vector representations of symbolic structures. We enable its application beyond the restricted set of tree2tree problems to the more general class of seq2seq problems.
arXiv Detail & Related papers (2024-12-18T17:20:19Z) - NeSyA: Neurosymbolic Automata [8.461323070662774]
Neurosymbolic (NeSy) AI has emerged as a promising direction to integrate neural and symbolic reasoning. We show that symbolic automata can be integrated with neural-based perception. Our proposed hybrid model, NeSyA (Neurosymbolic Automata), is shown to scale better or perform more accurately than previous NeSy systems.
arXiv Detail & Related papers (2024-12-10T09:23:36Z) - A Complexity Map of Probabilistic Reasoning for Neurosymbolic Classification Techniques [6.775534755081169]
We develop a unified formalism for four probabilistic reasoning problems. Then, we compile several known and new tractability results into a single complexity map of probabilistic reasoning. We build on this complexity map to characterize the domains of scalability of several techniques.
arXiv Detail & Related papers (2024-04-12T11:31:37Z) - The Role of Foundation Models in Neuro-Symbolic Learning and Reasoning [54.56905063752427]
Neuro-Symbolic AI (NeSy) holds promise to ensure the safe deployment of AI systems.
Existing pipelines that train the neural and symbolic components sequentially require extensive labelling.
A new architecture, NeSyGPT, fine-tunes a vision-language foundation model to extract symbolic features from raw data.
arXiv Detail & Related papers (2024-02-02T20:33:14Z) - Semantic Strengthening of Neuro-Symbolic Learning [85.6195120593625]
Neuro-symbolic approaches typically resort to fuzzy approximations of a probabilistic objective.
We show how to compute this efficiently for tractable circuits.
We test our approach on three tasks: predicting a minimum-cost path in Warcraft, predicting a minimum-cost perfect matching, and solving Sudoku puzzles.
arXiv Detail & Related papers (2023-02-28T00:04:22Z) - A-NeSI: A Scalable Approximate Method for Probabilistic Neurosymbolic Inference [11.393328084369783]
Recently introduced frameworks for Probabilistic Neurosymbolic Learning (PNL), such as DeepProbLog, perform exponential-time exact inference.
We introduce Approximate Neurosymbolic Inference (A-NeSI), a new framework for PNL that uses scalable neural networks for approximate inference.
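As a rough illustration of the general idea (not A-NeSI's actual architecture, training scheme, or hyperparameters), one can train a neural surrogate that maps symbol marginals to a query probability, so that inference becomes a single forward pass instead of an exponential enumeration:

```python
# Illustrative sketch only: a learned surrogate for probabilistic inference.
# Targets come from Monte Carlo estimates of a toy query phi = "any symbol true".
import torch
import torch.nn as nn

N_SYMBOLS = 4
surrogate = nn.Sequential(
    nn.Linear(N_SYMBOLS, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

def sampled_query_prob(p, n=256):
    # Monte Carlo estimate of P(phi) by sampling symbol assignments from p.
    draws = torch.rand(n, N_SYMBOLS) < p
    return draws.any(dim=1).float().mean()

for step in range(2000):
    p = torch.rand(32, N_SYMBOLS)  # random symbol marginals as training inputs
    target = torch.stack([sampled_query_prob(pi) for pi in p]).unsqueeze(1)
    loss = nn.functional.mse_loss(surrogate(p), target)
    opt.zero_grad(); loss.backward(); opt.step()
```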
arXiv Detail & Related papers (2022-12-23T15:24:53Z) - Knowledge-based Analogical Reasoning in Neuro-symbolic Latent Spaces [20.260546238369205]
We propose a framework that combines the pattern recognition abilities of neural networks with symbolic reasoning and background knowledge.
We take inspiration from the 'neural algorithmic reasoning' approach [DeepMind 2020] and use problem-specific background knowledge.
We test this on visual analogy problems in RAVEN's Progressive Matrices, and achieve accuracy competitive with human performance.
arXiv Detail & Related papers (2022-09-19T04:03:20Z) - Neuro-Symbolic Learning of Answer Set Programs from Raw Data [54.56905063752427]
Neuro-Symbolic AI aims to combine the interpretability of symbolic techniques with the ability of deep learning to learn from raw data.
We introduce Neuro-Symbolic Inductive Learner (NSIL), an approach that trains a general neural network to extract latent concepts from raw data.
NSIL learns expressive knowledge, solves computationally complex problems, and achieves state-of-the-art performance in terms of accuracy and data efficiency.
arXiv Detail & Related papers (2022-05-25T12:41:59Z) - NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of a classifier's predictions, based on the Nadaraya-Watson nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
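A minimal sketch of the underlying estimator follows, with an illustrative Gaussian kernel and fixed bandwidth (the paper's kernel and bandwidth selection may differ) and entropy as one possible uncertainty score:

```python
# Hedged sketch of a Nadaraya-Watson estimate of the conditional label
# distribution: kernel-weighted class frequencies around the query point.
import numpy as np

def nw_label_distribution(x, train_X, train_y, n_classes, bandwidth=1.0):
    # Gaussian kernel weights between the query point and training points.
    d2 = np.sum((train_X - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    probs = np.zeros(n_classes)
    for c in range(n_classes):
        probs[c] = w[train_y == c].sum()
    return probs / probs.sum()

def predictive_uncertainty(probs):
    # One simple uncertainty score: entropy of the estimated distribution.
    return -np.sum(probs * np.log(probs + 1e-12))

# Example on 2-class toy data: a point deep inside class 1 has low entropy.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)); y = (X[:, 0] > 0).astype(int)
probs = nw_label_distribution(np.array([2.0, 0.0]), X, y, n_classes=2)
print(probs, predictive_uncertainty(probs))
```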
arXiv Detail & Related papers (2022-02-07T12:30:45Z) - A Simple and Efficient Sampling-based Algorithm for General Reachability Analysis [32.488975902387395]
General-purpose reachability analysis remains a notoriously challenging problem with applications ranging from neural network verification to safety analysis of dynamical systems.
By sampling inputs, evaluating their images in the true reachable set, and taking their $\epsilon$-padded convex hull as a set estimator, this algorithm applies to general problem settings and is simple to implement.
This analysis informs algorithmic design to obtain an $\epsilon$-close reachable set approximation with high probability.
On a neural network verification task, we show that this approach is more accurate and significantly faster than prior work.
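A small sketch of the estimator under illustrative assumptions (the map being analyzed, the input sampler, and the value of $\epsilon$ are placeholders); note the facet-offset membership test slightly over-approximates the true $\epsilon$-padding near vertices:

```python
# Sketch of the sampling-based reachable-set estimator: sample inputs,
# push them through the (black-box) map f, and take an epsilon-padded
# convex hull of the images as the set estimate.
import numpy as np
from scipy.spatial import ConvexHull

def estimate_reachable_set(f, sample_inputs, n_samples, epsilon):
    xs = sample_inputs(n_samples)          # draw inputs from the input set
    ys = np.array([f(x) for x in xs])      # their images in the output space
    return ConvexHull(ys), epsilon

def contains(hull, epsilon, y):
    # ConvexHull.equations stores unit outward normals n and offsets b with
    # n·y + b <= 0 inside; relaxing each facet by epsilon pads the hull.
    return np.all(hull.equations[:, :-1] @ y + hull.equations[:, -1] <= epsilon)

# Example: reachable set of y = tanh(Ax) over the unit box in R^2.
rng = np.random.default_rng(0)
A = np.array([[1.0, 0.5], [-0.3, 1.2]])
hull, eps = estimate_reachable_set(
    lambda x: np.tanh(A @ x),
    lambda n: rng.uniform(-1.0, 1.0, size=(n, 2)),
    n_samples=500, epsilon=0.05)
print(contains(hull, eps, np.array([0.0, 0.0])))  # True: origin is reachable
```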
arXiv Detail & Related papers (2021-12-10T18:56:16Z) - General stochastic separation theorems with optimal bounds [68.8204255655161]
The phenomenon of stochastic separability was revealed and used in machine learning to correct errors of Artificial Intelligence (AI) systems and to analyze AI instabilities.
Errors or clusters of errors can be separated from the rest of the data.
The ability to correct an AI system also opens up the possibility of an attack on it, and the high dimensionality induces vulnerabilities caused by the same separability.
arXiv Detail & Related papers (2020-10-11T13:12:41Z) - Formal Synthesis of Lyapunov Neural Networks [61.79595926825511]
We propose an automatic and formally sound method for synthesising Lyapunov functions.
We employ a counterexample-guided approach where a numerical learner and a symbolic verifier interact to construct provably correct Lyapunov neural networks.
Our method synthesises Lyapunov functions faster and over wider spatial domains than the alternatives, while providing stronger or equal guarantees.
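A toy sketch of the counterexample-guided structure follows, with a random-search learner and a sampling-based falsifier standing in for the paper's numerical learner and sound symbolic verifier:

```python
# Hedged sketch of a counterexample-guided (CEGIS-style) loop: a "learner"
# proposes a quadratic Lyapunov candidate V(x) = x^T P x consistent with a
# growing sample set, and a "verifier" searches for violations. Random
# search below is only a stand-in for a sound symbolic verifier.
import numpy as np

def fit_candidate(f, samples, dim, rng, tries=500):
    # Naive learner: draw random positive-definite P until V decreases
    # along f at every sample point (a real learner would optimize P).
    for _ in range(tries):
        L = rng.normal(size=(dim, dim))
        P = L @ L.T + 1e-3 * np.eye(dim)
        if all(f(x) @ P @ f(x) < x @ P @ x for x in samples):
            return P
    raise RuntimeError("learner failed to find a candidate")

def find_counterexample(f, P, dim, rng, n_check=20000):
    for x in rng.uniform(-1, 1, size=(n_check, dim)):
        if f(x) @ P @ f(x) >= x @ P @ x:   # V fails to decrease at x
            return x
    return None

def cegis_lyapunov(f, dim, rounds=20, seed=0):
    rng = np.random.default_rng(seed)
    samples = list(rng.uniform(-1, 1, size=(50, dim)))
    for _ in range(rounds):
        P = fit_candidate(f, samples, dim, rng)    # learner step
        cex = find_counterexample(f, P, dim, rng)  # verifier step
        if cex is None:
            return P                # candidate passed every check
        samples.append(cex)         # refine the learner's constraints

# Stable linear discrete-time system x_{k+1} = 0.5 * x_k: V always decreases.
P = cegis_lyapunov(lambda x: 0.5 * x, dim=2)
print(P is not None)
```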
arXiv Detail & Related papers (2020-03-19T17:21:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.