Assured Autonomy with Neuro-Symbolic Perception
- URL: http://arxiv.org/abs/2505.21322v1
- Date: Tue, 27 May 2025 15:21:06 GMT
- Title: Assured Autonomy with Neuro-Symbolic Perception
- Authors: R. Spencer Hallyburton, Miroslav Pajic
- Abstract summary: Many state-of-the-art AI models deployed in cyber-physical systems (CPS) are pattern-matchers. With limited security guarantees, there are concerns about their reliability in safety-critical and contested domains. We propose a paradigm shift that imbues data-driven perception models with symbolic structure.
- Score: 11.246557832016238
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many state-of-the-art AI models deployed in cyber-physical systems (CPS), while highly accurate, are simply pattern-matchers. With limited security guarantees, there are concerns about their reliability in safety-critical and contested domains. To advance assured AI, we advocate for a paradigm shift that imbues data-driven perception models with symbolic structure, inspired by a human's ability to reason over low-level features and high-level context. We propose a neuro-symbolic paradigm for perception (NeuSPaPer) and illustrate how joint object detection and scene graph generation (SGG) yields deep scene understanding. Powered by foundation models for offline knowledge extraction and specialized SGG algorithms for real-time deployment, we design a framework leveraging structured relational graphs that ensures the integrity of situational awareness in autonomy. Using physics-based simulators and real-world datasets, we demonstrate how SGG bridges the gap between low-level sensor perception and high-level reasoning, establishing a foundation for resilient, context-aware AI and advancing trusted autonomy in CPS.
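The abstract frames perception output as a structured relational graph whose integrity can be checked against symbolic knowledge. Below is a minimal sketch of that idea, assuming made-up object classes, relations, and admissibility rules (it is not the authors' NeuSPaPer code): SGG-style triples are assembled into a typed graph, and edges that violate the symbolic schema are flagged for downstream handling.

```python
# Minimal sketch: a scene graph as a typed, directed relational graph, plus a
# symbolic consistency check. Classes, relations, and rules are illustrative only.
import networkx as nx

# Detections: (node_id, object class, 2D position in metres)
detections = [
    ("car_1", "car", (4.0, 0.0)),
    ("ped_1", "pedestrian", (4.5, 1.0)),
    ("road_1", "road", (0.0, 0.0)),
]

# Pairwise relations proposed by an SGG model: (subject, predicate, object)
relations = [
    ("car_1", "on", "road_1"),
    ("ped_1", "on", "road_1"),
    ("ped_1", "inside", "car_1"),  # implausible given the detections
]

# Symbolic background knowledge: admissible (subject class, predicate, object class) triples.
ALLOWED = {
    ("car", "on", "road"),
    ("pedestrian", "on", "road"),
    ("pedestrian", "near", "car"),
}

def build_scene_graph(detections, relations):
    g = nx.DiGraph()
    for node_id, cls, pos in detections:
        g.add_node(node_id, cls=cls, pos=pos)
    for subj, pred, obj in relations:
        g.add_edge(subj, obj, predicate=pred)
    return g

def inconsistent_edges(g):
    """Return relations that violate the symbolic schema."""
    bad = []
    for subj, obj, data in g.edges(data=True):
        triple = (g.nodes[subj]["cls"], data["predicate"], g.nodes[obj]["cls"])
        if triple not in ALLOWED:
            bad.append((subj, data["predicate"], obj))
    return bad

graph = build_scene_graph(detections, relations)
print(inconsistent_edges(graph))  # [('ped_1', 'inside', 'car_1')]
```

Flagged edges could then trigger a fallback policy or a re-query of the perception stack, which is the kind of integrity check the structured representation makes possible.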
Related papers
- Video Event Reasoning and Prediction by Fusing World Knowledge from LLMs with Vision Foundation Models [10.1080193179562]
Current understanding models excel at recognizing "what" but fall short in high-level cognitive tasks like causal reasoning and future prediction. We propose a novel framework that fuses a powerful Vision Foundation Model for deep visual perception with a Large Language Model (LLM) serving as a knowledge-driven reasoning core.
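A rough illustration of such a fusion, not the paper's implementation: detections from a vision model (stubbed here) are serialized into a prompt, and a placeholder `query_llm` stands in for any chat-completion API.

```python
# Illustrative pipeline: vision-model detections are serialized into a prompt,
# and a language model is asked to reason about causes and likely next events.
# `detections` and `query_llm` are stand-ins, not the paper's components.
import json

detections = [
    {"t": 0.0, "label": "person", "action": "running", "position": [3.2, 1.1]},
    {"t": 0.5, "label": "ball", "action": "rolling", "position": [4.0, 1.0]},
]

def build_prompt(events):
    return (
        "You are reasoning about a video scene.\n"
        f"Observed events (JSON): {json.dumps(events)}\n"
        "1) What likely caused the observed motion?\n"
        "2) Predict the most plausible next event."
    )

def query_llm(prompt: str) -> str:
    # Placeholder for a call to any chat-completion API.
    return "The person is likely chasing the ball; next they will reach it."

print(query_llm(build_prompt(detections)))
```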
arXiv Detail & Related papers (2025-07-08T09:43:17Z)
- Rethinking Data Protection in the (Generative) Artificial Intelligence Era [115.71019708491386]
We propose a four-level taxonomy that captures the diverse protection needs arising in modern (generative) AI models and systems. Our framework offers a structured understanding of the trade-offs between data utility and control, spanning the entire AI pipeline.
arXiv Detail & Related papers (2025-07-03T02:45:51Z)
- World Models for Cognitive Agents: Transforming Edge Intelligence in Future Networks [55.90051810762702]
We present a comprehensive overview of world models, highlighting their architecture, training paradigms, and applications across prediction, generation, planning, and causal reasoning. We propose Wireless Dreamer, a novel world model-based reinforcement learning framework tailored for wireless edge intelligence optimization.
arXiv Detail & Related papers (2025-05-31T06:43:00Z)
- Unlocking the Potential of Generative AI through Neuro-Symbolic Architectures: Benefits and Limitations [0.7499722271664147]
Neuro-symbolic artificial intelligence (NSAI) represents a transformative approach in artificial intelligence (AI). NSAI combines deep learning's ability to handle large-scale and unstructured data with the structured reasoning of symbolic methods. This paper systematically studies NSAI architectures, highlighting their unique approaches to integrating neural and symbolic components.
arXiv Detail & Related papers (2025-02-16T21:06:33Z)
- Mechanistic understanding and validation of large AI models with SemanticLens [13.712668314238082]
Unlike human-engineered systems such as aeroplanes, the inner workings of AI models remain largely opaque. This paper introduces SemanticLens, a universal explanation method for neural networks that maps hidden knowledge encoded by components.
arXiv Detail & Related papers (2025-01-09T17:47:34Z)
- Graph-Based Multi-Modal Sensor Fusion for Autonomous Driving [3.770103075126785]
We introduce a novel approach to multi-modal sensor fusion, focusing on developing a graph-based state representation.
We present a Sensor-Agnostic Graph-Aware Kalman Filter, the first online state estimation technique designed to fuse multi-modal graphs.
We validate the effectiveness of our proposed framework through extensive experiments conducted on both synthetic and real-world driving datasets.
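A stripped-down illustration of the idea of filtering object states attached to graph nodes, not the paper's Sensor-Agnostic Graph-Aware Kalman Filter: each tracked object is a node carrying its own Kalman state, and measurements from heterogeneous sensors with different noise levels update the matching node.

```python
# Toy illustration: per-node Kalman filters over a scene graph, updated by
# measurements from heterogeneous sensors. Matrices and noise levels are made up.
import numpy as np

class NodeKF:
    """Constant-position Kalman filter for one graph node (2D state)."""
    def __init__(self, x0, p0=1.0, q=0.01):
        self.x = np.asarray(x0, dtype=float)   # state estimate
        self.P = np.eye(2) * p0                # state covariance
        self.Q = np.eye(2) * q                 # process noise

    def predict(self):
        self.P = self.P + self.Q               # identity dynamics

    def update(self, z, r):
        R = np.eye(2) * r                      # sensor-specific measurement noise
        S = self.P + R
        K = self.P @ np.linalg.inv(S)          # Kalman gain (H = I)
        self.x = self.x + K @ (np.asarray(z) - self.x)
        self.P = (np.eye(2) - K) @ self.P

# One filter per detected object; sensors report (node_id, measurement, noise).
nodes = {"car_1": NodeKF([4.0, 0.0]), "ped_1": NodeKF([4.5, 1.0])}
measurements = [
    ("car_1", [4.1, 0.1], 0.25),   # e.g., camera
    ("car_1", [3.9, -0.1], 0.05),  # e.g., lidar (more precise)
    ("ped_1", [4.6, 1.1], 0.25),
]

for kf in nodes.values():
    kf.predict()
for node_id, z, r in measurements:
    nodes[node_id].update(z, r)

print({k: np.round(v.x, 2) for k, v in nodes.items()})
```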
arXiv Detail & Related papers (2024-11-06T06:58:17Z)
- QIXAI: A Quantum-Inspired Framework for Enhancing Classical and Quantum Model Transparency and Understanding [0.0]
Deep learning models are often hindered by their lack of interpretability, rendering them "black boxes".
This paper introduces the QIXAI Framework, a novel approach for enhancing neural network interpretability through quantum-inspired techniques.
The framework applies to both quantum and classical systems, demonstrating its potential to improve interpretability and transparency across a range of models.
arXiv Detail & Related papers (2024-10-21T21:55:09Z)
- Converging Paradigms: The Synergy of Symbolic and Connectionist AI in LLM-Empowered Autonomous Agents [55.63497537202751]
The article explores the convergence of connectionist and symbolic artificial intelligence (AI).
Traditionally, connectionist AI focuses on neural networks, while symbolic AI emphasizes symbolic representation and logic.
Recent advancements in large language models (LLMs) highlight the potential of connectionist architectures in handling human language as a form of symbols.
arXiv Detail & Related papers (2024-07-11T14:00:53Z)
- The Role of Foundation Models in Neuro-Symbolic Learning and Reasoning [54.56905063752427]
Neuro-Symbolic AI (NeSy) holds promise to ensure the safe deployment of AI systems.
Existing pipelines that train the neural and symbolic components sequentially require extensive labelling.
A new architecture, NeSyGPT, fine-tunes a vision-language foundation model to extract symbolic features from raw data.
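A schematic of that neural-to-symbolic handoff, with a stubbed vision-language model rather than NeSyGPT itself: the neural component maps raw inputs to discrete symbols, and a hand-written symbolic program reasons over them.

```python
# Schematic only: a stubbed vision-language model maps raw inputs to discrete
# symbols, and a hand-written rule performs the downstream symbolic reasoning.
def vlm_extract_symbol(image) -> str:
    """Stand-in for a fine-tuned vision-language model returning a digit symbol."""
    return {"img_a": "3", "img_b": "5"}[image]

def symbolic_add(sym_x: str, sym_y: str) -> int:
    """Symbolic component: exact arithmetic over the extracted symbols."""
    return int(sym_x) + int(sym_y)

# Neural extraction feeds symbolic reasoning; in the NeSy setting, the symbolic
# program constrains what the neural outputs can mean.
print(symbolic_add(vlm_extract_symbol("img_a"), vlm_extract_symbol("img_b")))  # 8
```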
arXiv Detail & Related papers (2024-02-02T20:33:14Z)
- Explainable Spatio-Temporal Graph Neural Networks [16.313146933922752]
We propose an Explainable Spatio-Temporal Graph Neural Network (STGNN) framework that enhances STGNNs with inherent explainability.
Our framework integrates a unified spatio-temporal graph attention network with a positional information fusion layer as the STG encoder and decoder.
We demonstrate that STExplainer outperforms state-of-the-art baselines in terms of predictive accuracy and explainability metrics.
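A toy rendering of the underlying mechanism, not STExplainer itself: a single graph-attention step over a small graph whose softmax-normalized attention weights double as a crude edge-importance explanation. All weights and the graph are synthetic.

```python
# Toy single-head graph attention; the softmax-normalized attention weights are
# read out as per-edge importance scores. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                    # 4 nodes, 8 features each
adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}   # neighbour lists

W = rng.normal(size=(8, 8)) * 0.1              # shared linear transform
a = rng.normal(size=16) * 0.1                  # attention vector

def attention_layer(X, adj):
    H = X @ W
    out, importance = np.zeros_like(H), {}
    for i, nbrs in adj.items():
        scores = np.array([a @ np.concatenate([H[i], H[j]]) for j in nbrs])
        alpha = np.exp(scores - scores.max())
        alpha /= alpha.sum()                   # softmax over neighbours
        out[i] = alpha @ H[nbrs]               # attention-weighted aggregation
        importance[i] = dict(zip(nbrs, np.round(alpha, 3)))
    return out, importance

_, edge_importance = attention_layer(X, adj)
print(edge_importance)  # attention weights as a simple per-edge explanation
```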
arXiv Detail & Related papers (2023-10-26T04:47:28Z)
- On the Robustness of Aspect-based Sentiment Analysis: Rethinking Model, Data, and Training [109.9218185711916]
Aspect-based sentiment analysis (ABSA) aims at automatically inferring the specific sentiment polarities toward certain aspects of products or services behind social media texts or reviews.
We propose to enhance the ABSA robustness by systematically rethinking the bottlenecks from all possible angles, including model, data, and training.
arXiv Detail & Related papers (2023-04-19T11:07:43Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide to end users a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
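The general recipe can be sketched as follows, using linear stand-ins for the encoder, decoder, and classifier rather than the CEILS components: encode an instance into a latent space, nudge the latent code until the classifier's decision flips, and decode the result back into feature space.

```python
# Generic latent-space counterfactual search with linear stand-ins for the
# encoder, decoder, and classifier. Not the CEILS algorithm itself.
import numpy as np

rng = np.random.default_rng(1)
E = rng.normal(size=(2, 4))        # encoder: 4 features -> 2 latent dims
D = np.linalg.pinv(E)              # decoder: latent -> features (4 x 2)
w, b = rng.normal(size=4), -0.2    # linear classifier on features

def predict(x):
    return int(w @ x + b > 0)

def latent_counterfactual(x, step=0.05, max_iter=500):
    """Nudge the latent code until the decoded instance flips the prediction."""
    z, original = E @ x, predict(x)
    grad = D.T @ w                           # d(score)/dz for the linear stand-ins
    direction = grad if original == 0 else -grad
    for _ in range(max_iter):
        z = z + step * direction
        x_cf = D @ z
        if predict(x_cf) != original:
            return x_cf                      # decoded counterfactual instance
    return None

x0 = rng.normal(size=4)
print(predict(x0), latent_counterfactual(x0))
```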
arXiv Detail & Related papers (2021-06-14T20:48:48Z)