A Neuro-Symbolic Framework for Reasoning under Perceptual Uncertainty: Bridging Continuous Perception and Discrete Symbolic Planning
- URL: http://arxiv.org/abs/2511.14533v1
- Date: Tue, 18 Nov 2025 14:38:01 GMT
- Title: A Neuro-Symbolic Framework for Reasoning under Perceptual Uncertainty: Bridging Continuous Perception and Discrete Symbolic Planning
- Authors: Jiahao Wu, Shengwen Yu
- Abstract summary: We present a neuro-symbolic framework that explicitly models and propagates uncertainty from perception to planning. We demonstrate the framework's effectiveness on tabletop robotic manipulation as a concrete application.
- Score: 1.9236465591431287
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bridging continuous perceptual signals and discrete symbolic reasoning is a fundamental challenge in AI systems that must operate under uncertainty. We present a neuro-symbolic framework that explicitly models and propagates uncertainty from perception to planning, providing a principled connection between these two abstraction levels. Our approach couples a transformer-based perceptual front-end with graph neural network (GNN) relational reasoning to extract probabilistic symbolic states from visual observations, and an uncertainty-aware symbolic planner that actively gathers information when confidence is low. We demonstrate the framework's effectiveness on tabletop robotic manipulation as a concrete application: the translator processes 10,047 PyBullet-generated scenes (3-10 objects) and outputs probabilistic predicates with calibrated confidences (overall F1=0.68). When embedded in the planner, the system achieves 94%/90%/88% success on Simple Stack, Deep Stack, and Clear+Stack benchmarks (90.7% average), exceeding the strongest POMDP baseline by 10-14 points while planning within 15 ms. A probabilistic graphical-model analysis establishes a quantitative link between calibrated uncertainty and planning convergence, providing theoretical guarantees that are validated empirically. The framework is general-purpose and can be applied to any domain requiring uncertainty-aware reasoning from perceptual input to symbolic planning.
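The abstract's core planning idea, acting on a symbolic predicate only when its calibrated confidence is high and otherwise inserting an information-gathering action first, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the threshold, predicate encoding, and action names are all hypothetical.

```python
# Sketch of uncertainty-aware symbolic planning: act on high-confidence
# predicates, gather information for low-confidence ones.
# (Illustrative only; threshold and action vocabulary are assumptions.)

CONF_THRESHOLD = 0.8  # hypothetical confidence cutoff


def plan_step(predicates):
    """predicates: dict mapping a symbolic fact, e.g. ('on', 'A', 'B'),
    to the perception module's calibrated probability that it holds."""
    plan = []
    for fact, confidence in predicates.items():
        if confidence < CONF_THRESHOLD:
            # Low confidence: schedule an information-gathering action
            # (e.g. a new viewpoint) before relying on this fact.
            plan.append(("observe", fact))
        elif fact[0] == "clear":
            # High-confidence 'clear(x)' predicate enables a pick action.
            plan.append(("pick", fact[1]))
    return plan


state = {
    ("clear", "A"): 0.95,    # confident: act directly
    ("on", "A", "B"): 0.55,  # uncertain: observe first
}
print(plan_step(state))  # -> [('pick', 'A'), ('observe', ('on', 'A', 'B'))]
```

In the paper's full system the confidence values come from the transformer/GNN translator and the planner reasons over entire action sequences, but the observe-when-uncertain pattern above is the behavior the abstract describes.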
Related papers
- Foundation Models for Logistics: Toward Certifiable, Conversational Planning Interfaces [59.80143393787701]
Large language models (LLMs) can handle uncertainty and promise to accelerate replanning while lowering the barrier to entry. We introduce a neurosymbolic framework that pairs the accessibility of natural-language dialogue with verifiable guarantees on goal interpretation. A lightweight model, fine-tuned on just 100 uncertainty-filtered examples, surpasses the zero-shot performance of GPT-4.1 while cutting inference latency by nearly 50%.
arXiv Detail & Related papers (2025-07-15T14:24:01Z) - Discrete JEPA: Learning Discrete Token Representations without Reconstruction [23.6286989806018]
The symbolic cornerstone of cognitive intelligence lies in extracting hidden patterns from observations. We propose Discrete-JEPA, extending the latent predictive coding framework with semantic tokenization. Our approach promises a significant impact for advancing world modeling and planning capabilities in artificial intelligence systems.
arXiv Detail & Related papers (2025-06-17T10:15:17Z) - A Scalable Approach to Probabilistic Neuro-Symbolic Robustness Verification [14.558484523699748]
We address the problem of formally verifying the robustness of NeSy probabilistic reasoning systems. We show that a decision version of the core computation is $\mathrm{NP}^{\mathrm{PP}}$-complete. We propose the first approach for approximate, relaxation-based verification of probabilistic NeSy systems.
arXiv Detail & Related papers (2025-02-05T15:29:41Z) - Know Where You're Uncertain When Planning with Multimodal Foundation Models: A Formal Framework [54.40508478482667]
We present a comprehensive framework to disentangle, quantify, and mitigate uncertainty in perception and plan generation. We propose methods tailored to the unique properties of perception and decision-making. We show that our uncertainty disentanglement framework reduces variability by up to 40% and enhances task success rates by 5% compared to baselines.
arXiv Detail & Related papers (2024-11-03T17:32:00Z) - Interpretable Concept-Based Memory Reasoning [12.562474638728194]
Concept-based Memory Reasoner (CMR) is a novel CBM designed to provide a human-understandable and provably-verifiable task prediction process.
CMR achieves better accuracy-interpretability trade-offs than state-of-the-art CBMs, discovers logic rules consistent with ground truths, allows for rule interventions, and allows pre-deployment verification.
arXiv Detail & Related papers (2024-07-22T10:32:48Z) - Semantic Strengthening of Neuro-Symbolic Learning [85.6195120593625]
Neuro-symbolic approaches typically resort to fuzzy approximations of a probabilistic objective.
We show how to compute this efficiently for tractable circuits.
We test our approach on three tasks: predicting a minimum-cost path in Warcraft, predicting a minimum-cost perfect matching, and solving Sudoku puzzles.
arXiv Detail & Related papers (2023-02-28T00:04:22Z) - Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z) - NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show the principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
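The Nadaraya-Watson idea behind NUQ can be illustrated in a few lines: estimate the conditional label distribution at a query point as a kernel-weighted average of training labels, then score uncertainty by the entropy of that distribution. This sketch uses a Gaussian kernel and a fixed bandwidth; NUQ's actual estimator and uncertainty decomposition are more elaborate.

```python
import numpy as np


def nw_class_probs(x, X_train, y_train, n_classes, bandwidth=1.0):
    """Nadaraya-Watson estimate of p(y | x): kernel-weighted average of
    training labels (a simplified sketch of the idea behind NUQ)."""
    d2 = np.sum((X_train - x) ** 2, axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))  # Gaussian kernel weights
    probs = np.array([w[y_train == c].sum() for c in range(n_classes)])
    return probs / probs.sum()


def predictive_entropy(probs):
    """Entropy of the estimated p(y | x), a simple uncertainty score."""
    p = probs[probs > 0]
    return float(-(p * np.log(p)).sum())


# Two well-separated classes in 1-D.
X = np.array([[0.0], [0.1], [5.0], [5.1]])
y = np.array([0, 0, 1, 1])

near = nw_class_probs(np.array([0.05]), X, y, 2)  # deep inside class 0
mid = nw_class_probs(np.array([2.5]), X, y, 2)    # between the classes
# Entropy is low inside a class and high between classes.
```

A query between the two clusters gets nearly balanced class weights and hence high entropy, which is exactly the region where a deterministic classifier should report low confidence.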
arXiv Detail & Related papers (2022-02-07T12:30:45Z) - Leveraging Unlabeled Data for Entity-Relation Extraction through Probabilistic Constraint Satisfaction [54.06292969184476]
We study the problem of entity-relation extraction in the presence of symbolic domain knowledge.
Our approach employs semantic loss which captures the precise meaning of a logical sentence.
With a focus on low-data regimes, we show that semantic loss outperforms the baselines by a wide margin.
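Semantic loss, as the summary above uses it, penalizes a model by the negative log-probability that its predicted marginals satisfy a logical sentence. A small sketch for one concrete constraint ("exactly one variable is true") shows the mechanics; the cited work handles general logical sentences, and the exhaustive enumeration here is illustrative only.

```python
import math


def semantic_loss_exactly_one(probs):
    """Semantic loss for the constraint 'exactly one variable is true':
    -log P(constraint holds) when each Boolean is sampled independently
    with the given marginal probability. Sketch of the semantic-loss
    idea; real implementations compile arbitrary sentences into circuits
    rather than enumerating satisfying assignments."""
    n = len(probs)
    sat = 0.0
    for i in range(n):  # one satisfying assignment per choice of the true var
        p = probs[i]
        for j in range(n):
            if j != i:
                p *= 1.0 - probs[j]
        sat += p
    return -math.log(sat)


# Confident, constraint-consistent marginals incur low loss;
# uniform marginals incur higher loss.
low = semantic_loss_exactly_one([0.99, 0.01, 0.01])
high = semantic_loss_exactly_one([0.5, 0.5, 0.5])
```

The loss vanishes exactly when the marginals put all their mass on a single satisfying assignment, which is what drives predictions toward logically consistent outputs in the low-data regime the summary describes.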
arXiv Detail & Related papers (2021-03-20T00:16:29Z) - Paraconsistent Foundations for Probabilistic Reasoning, Programming and Concept Formation [0.0]
It is argued that 4-valued paraconsistent truth values (called here "p-bits") can serve as a conceptual, mathematical and practical foundation for highly AI-relevant forms of probabilistic logic and programming and concept formation.
It is shown that appropriate averaging-across-situations and renormalization of 4-valued p-bits operating in accordance with Constructible Duality (CD) logic yields PLN (Probabilistic Logic Networks) strength-and-confidence truth values.
arXiv Detail & Related papers (2020-12-28T20:14:49Z) - Symbolic Learning and Reasoning with Noisy Data for Probabilistic Anchoring [19.771392829416992]
We propose a semantic world modeling approach based on bottom-up object anchoring.
We extend the definitions of anchoring to handle multi-modal probability distributions.
We use statistical relational learning to enable the anchoring framework to learn symbolic knowledge.
arXiv Detail & Related papers (2020-02-24T16:58:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.