Agentic System with Modal Logic for Autonomous Diagnostics
- URL: http://arxiv.org/abs/2509.11943v3
- Date: Sat, 18 Oct 2025 00:10:05 GMT
- Title: Agentic System with Modal Logic for Autonomous Diagnostics
- Authors: Antonin Sulc, Thorsten Hellert
- Abstract summary: We argue that scaling the structure, fidelity, and logical consistency of agent reasoning is a crucial, yet underexplored, dimension of AI research. This paper introduces a neuro-symbolic multi-agent architecture where the belief states of individual agents are formally represented as Kripke models. In this work, we use immutable, domain-specific knowledge to make an informed root cause diagnosis, which is encoded as logical constraints essential for proper, reliable, and explainable diagnosis.
- Score: 0.3437656066916039
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The development of intelligent agents, particularly those powered by language models (LMs), plays a critical role in environments that require intelligent and autonomous decision-making. Environments are not passive testing grounds; they supply the data from which agents must learn and act under challenging conditions that demand adaptive, complex, and autonomous decision-making. While the paradigm of scaling models and datasets has led to remarkable emergent capabilities, we argue that scaling the structure, fidelity, and logical consistency of agent reasoning within these environments is a crucial, yet underexplored, dimension of AI research. This paper introduces a neuro-symbolic multi-agent architecture where the belief states of individual agents are formally represented as Kripke models. This foundational choice enables them to reason about the concepts of \emph{possibility} and \emph{necessity} using the formal language of modal logic. In this work, we use immutable, domain-specific knowledge to make an informed root cause diagnosis, which is encoded as logical constraints essential for proper, reliable, and explainable diagnosis. In the proposed model, we show that these constraints actively guide the hypothesis generation of LMs, effectively preventing them from reaching physically or logically untenable conclusions. In a high-fidelity simulated particle accelerator environment, our system successfully diagnoses complex, cascading failures by combining the powerful semantic intuition of LMs with the rigorous, verifiable validation of modal logic and a factual world model, showcasing a viable path toward more robust, reliable, and verifiable autonomous agents.
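The abstract's core formal device, representing an agent's belief state as a Kripke model and evaluating \emph{necessity} ($\Box$) and \emph{possibility} ($\Diamond$) over accessible worlds, can be illustrated with a minimal sketch. This is not the paper's implementation; the class, the world names, and the fault propositions (`magnet_fault`, `rf_fault`) are hypothetical examples chosen only to show how the two modal operators are evaluated.

```python
# Minimal Kripke-model sketch (illustrative only; not the paper's implementation).
# A Kripke model is a triple (W, R, V): a set of worlds W, an accessibility
# relation R over pairs of worlds, and a valuation V mapping each world to
# the set of propositions true in it.

class KripkeModel:
    def __init__(self, worlds, relation, valuation):
        self.worlds = set(worlds)        # W: possible worlds
        self.relation = set(relation)    # R: pairs (w, v) meaning v is accessible from w
        self.valuation = valuation       # V: world -> set of true propositions

    def successors(self, w):
        # All worlds accessible from w under R
        return {v for (u, v) in self.relation if u == w}

    def holds(self, w, prop):
        return prop in self.valuation.get(w, set())

    def necessarily(self, w, prop):
        # Box p: p holds in every world accessible from w
        return all(self.holds(v, prop) for v in self.successors(w))

    def possibly(self, w, prop):
        # Diamond p: p holds in at least one world accessible from w
        return any(self.holds(v, prop) for v in self.successors(w))


# Hypothetical diagnostic belief state: from w0 the agent considers two
# candidate fault worlds; "magnet_fault" holds in both, "rf_fault" in one.
m = KripkeModel(
    worlds={"w0", "w1", "w2"},
    relation={("w0", "w1"), ("w0", "w2")},
    valuation={"w1": {"magnet_fault", "rf_fault"}, "w2": {"magnet_fault"}},
)
print(m.necessarily("w0", "magnet_fault"))  # True: holds in every accessible world
print(m.possibly("w0", "rf_fault"))         # True: holds in some accessible world
print(m.necessarily("w0", "rf_fault"))      # False
```

A logical constraint of the kind the paper describes would then amount to rejecting any generated hypothesis `p` for which `necessarily(w, not-p)` holds in the agent's current model, i.e. hypotheses the domain knowledge rules out in every accessible world.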
Related papers
- Differentiable Modal Logic for Multi-Agent Diagnosis, Orchestration and Communication [0.15229257192293197]
This tutorial demonstrates differentiable modal logic (DML), implemented via Modal Logical Neural Networks (MLNNs). We present a unified neurosymbolic debugging framework through four modalities: epistemic (who to trust), temporal (when events cause failures), deontic (what actions are permitted), and doxastic (how to interpret agent confidence). Key contributions for the neurosymbolic community: (1) interpretable learned structures where trust and causality are explicit parameters, not opaque embeddings; (2) knowledge injection via differentiable axioms that guide learning with sparse data; and (3) practical deployment patterns for monitoring, active control and communication of
arXiv Detail & Related papers (2026-02-12T15:39:18Z) - From Passive Metric to Active Signal: The Evolving Role of Uncertainty Quantification in Large Language Models [77.04403907729738]
This survey charts the evolution of uncertainty from a passive diagnostic metric to an active control signal guiding real-time model behavior. We demonstrate how uncertainty is leveraged as an active control signal across three frontiers. This survey argues that mastering the new trend of uncertainty is essential for building the next generation of scalable, reliable, and trustworthy AI.
arXiv Detail & Related papers (2026-01-22T06:21:31Z) - Agentic Reasoning for Large Language Models [122.81018455095999]
Reasoning is a fundamental cognitive process underlying inference, problem-solving, and decision-making. Large language models (LLMs) demonstrate strong reasoning capabilities in closed-world settings, but struggle in open-ended and dynamic environments. Agentic reasoning marks a paradigm shift by reframing LLMs as autonomous agents that plan, act, and learn through continual interaction.
arXiv Detail & Related papers (2026-01-18T18:58:23Z) - Real-Time Reasoning Agents in Evolving Environments [52.21796134114843]
We introduce real-time reasoning as a new problem formulation for agents in evolving environments. Our work establishes real-time reasoning as a critical testbed for developing practical agents.
arXiv Detail & Related papers (2025-11-07T00:51:02Z) - ATA: A Neuro-Symbolic Approach to Implement Autonomous and Trustworthy Agents [0.9740025522928777]
Large Language Models (LLMs) have demonstrated impressive capabilities, yet their deployment in high-stakes domains is hindered by inherent limitations in trustworthiness. We introduce a generic neuro-symbolic approach, which we call Autonomous Trustworthy Agents (ATA).
arXiv Detail & Related papers (2025-10-18T07:35:54Z) - Flexible Swarm Learning May Outpace Foundation Models in Essential Tasks [0.0]
Foundation models have rapidly advanced AI, raising the question of whether their decisions will surpass human strategies in real-world domains. A common challenge is adapting complex systems to dynamic environments. We argue that monolithic foundation models face conceptual limits in overcoming it. We propose a decentralized architecture of interacting small agent networks (SANs).
arXiv Detail & Related papers (2025-10-07T18:10:31Z) - How Good are Foundation Models in Step-by-Step Embodied Reasoning? [79.15268080287505]
Embodied agents must make decisions that are safe, spatially coherent, and grounded in context. Recent advances in large multimodal models have shown promising capabilities in visual understanding and language generation. Our benchmark includes over 1.1k samples with detailed step-by-step reasoning across 10 tasks and 8 embodiments.
arXiv Detail & Related papers (2025-09-18T17:56:30Z) - A Survey of Self-Evolving Agents: On Path to Artificial Super Intelligence [87.08051686357206]
Large Language Models (LLMs) have demonstrated strong capabilities but remain fundamentally static. As LLMs are increasingly deployed in open-ended, interactive environments, this static nature has become a critical bottleneck. This survey provides the first systematic and comprehensive review of self-evolving agents.
arXiv Detail & Related papers (2025-07-28T17:59:05Z) - The Constitutional Controller: Doubt-Calibrated Steering of Compliant Agents [18.680037980430797]
We show how neuro-symbolic systems integrate probabilistic, symbolic white-box reasoning models with deep learning methods. This enables the simultaneous consideration of explicit rules and neural models trained on noisy data. In a real-world aerial mobility study, we demonstrate CoCo's advantages for intelligent autonomous systems to learn appropriate doubts.
arXiv Detail & Related papers (2025-07-21T10:33:31Z) - Nature's Insight: A Novel Framework and Comprehensive Analysis of Agentic Reasoning Through the Lens of Neuroscience [11.174550573411008]
We propose a novel neuroscience-inspired framework for agentic reasoning. We apply this framework to systematically classify and analyze existing AI reasoning methods. We propose new neural-inspired reasoning methods, analogous to chain-of-thought prompting.
arXiv Detail & Related papers (2025-05-07T14:25:46Z) - Computational Irreducibility as the Foundation of Agency: A Formal Model Connecting Undecidability to Autonomous Behavior in Complex Systems [0.0]
We establish precise mathematical connections, proving that for any truly autonomous system, questions about its future behavior are fundamentally undecidable. The findings have significant implications for artificial intelligence, biological modeling, and philosophical concepts like free will.
arXiv Detail & Related papers (2025-05-05T21:24:50Z) - LogiDynamics: Unraveling the Dynamics of Logical Inference in Large Language Model Reasoning [74.0242521818214]
This paper adopts an exploratory approach by introducing a controlled evaluation environment for analogical reasoning. We analyze the comparative dynamics of inductive, abductive, and deductive inference pipelines. We investigate advanced paradigms such as hypothesis selection, verification, and refinement, revealing their potential to scale up logical inference.
arXiv Detail & Related papers (2025-02-16T15:54:53Z) - Relational Neurosymbolic Markov Models [13.22004615196798]
Sequential problems are ubiquitous in AI, such as in reinforcement learning or natural language processing. Neurosymbolic AI (NeSy) provides a sound formalism to enforce constraints in deep probabilistic models, but scales exponentially on sequential problems. We propose a strategy for inference and learning that scales on sequential settings, and that combines approximate Bayesian inference, automated reasoning, and gradient estimation.
arXiv Detail & Related papers (2024-12-17T15:41:51Z) - Discrete, compositional, and symbolic representations through attractor dynamics [51.20712945239422]
We introduce a novel neural systems model that integrates attractor dynamics with symbolic representations to model cognitive processes akin to the probabilistic language of thought (PLoT).
Our model segments the continuous representational space into discrete basins, with attractor states corresponding to symbolic sequences that reflect the semanticity and compositionality characteristic of symbolic systems, learned without supervision rather than relying on pre-defined primitives.
This approach establishes a unified framework that integrates both symbolic and sub-symbolic processing through neural dynamics, a neuroplausible substrate with proven expressivity in AI, offering a more comprehensive model that mirrors the complex duality of cognitive operations.
arXiv Detail & Related papers (2023-10-03T05:40:56Z) - Automated Machine Learning, Bounded Rationality, and Rational Metareasoning [62.997667081978825]
We will look at automated machine learning (AutoML) and related problems from the perspective of bounded rationality.
Taking actions under bounded resources requires an agent to reflect on how to use these resources in an optimal way.
arXiv Detail & Related papers (2021-09-10T09:10:20Z) - CausalCity: Complex Simulations with Agency for Causal Discovery and Reasoning [68.74447489372037]
We present a high-fidelity simulation environment that is designed for developing algorithms for causal discovery and counterfactual reasoning.
A core component of our work is to introduce \textit{agency}, such that it is simple to define and create complex scenarios.
We perform experiments with three state-of-the-art methods to create baselines and highlight the affordances of this environment.
arXiv Detail & Related papers (2021-06-25T00:21:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.