NSP: A Neuro-Symbolic Natural Language Navigational Planner
- URL: http://arxiv.org/abs/2409.06859v2
- Date: Fri, 13 Sep 2024 22:13:01 GMT
- Title: NSP: A Neuro-Symbolic Natural Language Navigational Planner
- Authors: William English, Dominic Simon, Sumit Jha, Rickard Ewetz
- Abstract summary: We propose a neuro-symbolic framework for path planning from natural language inputs called NSP.
We evaluate our neuro-symbolic approach using a benchmark suite with 1500 path-planning problems.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Path planners that can interpret free-form natural language instructions hold promise to automate a wide range of robotics applications. These planners simplify user interactions and enable intuitive control over complex semi-autonomous systems. While existing symbolic approaches offer guarantees on correctness and efficiency, they struggle to parse free-form natural language inputs. Conversely, neural approaches based on pre-trained Large Language Models (LLMs) can manage natural language inputs but lack performance guarantees. In this paper, we propose NSP, a neuro-symbolic framework for path planning from natural language inputs. The framework leverages the neural reasoning abilities of LLMs to craft i) symbolic representations of the environment and ii) a symbolic path-planning algorithm. A solution to the path-planning problem is then obtained by executing the algorithm on the environment representation. The framework uses a feedback loop from the symbolic execution environment to the neural generation process to self-correct syntax errors and satisfy execution-time constraints. We evaluate our neuro-symbolic approach using a benchmark suite with 1500 path-planning problems. The experimental evaluation shows that our neuro-symbolic approach produces valid paths 90.1% of the time, which are on average 19-77% shorter than those of state-of-the-art neural approaches.
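The generate-execute-repair loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the LLM call is stubbed out with a canned response, and all function names (`fake_llm`, `nsp_solve`) are hypothetical. It shows the core idea of executing LLM-generated planner code in a symbolic environment and feeding syntax errors back for self-correction.

```python
# Hypothetical sketch of NSP's neural-generation / symbolic-execution loop.
# The real system prompts an LLM; here the "LLM" is a stub whose first
# attempt contains a syntax error, so the feedback loop must fire once.

def fake_llm(prompt, attempt):
    """Stand-in for an LLM call (illustrative only, not from the paper)."""
    if attempt == 0:
        # Deliberately malformed: missing ':' triggers the repair loop.
        return "def plan(graph, s, g)\n    return [s, g]"
    return (
        "def plan(graph, start, goal):\n"
        "    # Trivial BFS shortest path over the generated graph.\n"
        "    from collections import deque\n"
        "    q, seen = deque([[start]]), {start}\n"
        "    while q:\n"
        "        path = q.popleft()\n"
        "        if path[-1] == goal:\n"
        "            return path\n"
        "        for nxt in graph.get(path[-1], []):\n"
        "            if nxt not in seen:\n"
        "                seen.add(nxt)\n"
        "                q.append(path + [nxt])\n"
    )

def nsp_solve(instruction, graph, start, goal, max_attempts=3):
    """Generate planner code, execute it symbolically, repair on failure."""
    feedback = ""
    for attempt in range(max_attempts):
        code = fake_llm(instruction + feedback, attempt)
        try:
            namespace = {}
            # Symbolic execution environment for the generated program.
            exec(compile(code, "<llm>", "exec"), namespace)
            return namespace["plan"](graph, start, goal)
        except SyntaxError as err:
            # Feed the error back to the generator for self-correction.
            feedback = f"\nPrevious attempt failed: SyntaxError: {err}"
    raise RuntimeError("no valid program after retries")

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(nsp_solve("go from A to D", graph, "A", "D"))  # ['A', 'B', 'D']
```

In the paper's framework the LLM also produces the environment representation itself; here a graph is supplied directly to keep the sketch short.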
Related papers
- Language Model Circuits Are Sparse in the Neuron Basis
We show that MLP neurons are as sparse a feature basis as SAEs.
This work advances automated interpretability of language models without additional training costs.
arXiv Detail & Related papers (2026-01-30T05:41:19Z) - NeSyPr: Neurosymbolic Proceduralization For Efficient Embodied Reasoning
NeSyPr is a novel embodied reasoning framework that compiles knowledge via neurosymbolic proceduralization.
It supports efficient test-time inference without relying on external symbolic guidance.
We evaluate NeSyPr on the embodied benchmarks PDDLGym, VirtualHome, and ALFWorld.
arXiv Detail & Related papers (2025-10-22T09:57:02Z) - LOOP: A Plug-and-Play Neuro-Symbolic Framework for Enhancing Planning in Autonomous Systems
Planning is one of the most critical tasks in autonomous systems, where even a small error can lead to major failures or million-dollar losses.
Current state-of-the-art neural planning approaches struggle with complex domains.
LOOP is a novel neuro-symbolic planning framework that treats planning as an iterative conversation between neural and symbolic components.
arXiv Detail & Related papers (2025-08-18T21:21:21Z) - Principled Approaches for Extending Neural Architectures to Function Spaces for Operator Learning
Deep learning has predominantly advanced through applications in computer vision and natural language processing.
Neural operators are a principled way to generalize neural networks to mappings between function spaces.
This paper identifies and distills the key principles for constructing practical implementations of mappings between infinite-dimensional function spaces.
arXiv Detail & Related papers (2025-06-12T17:59:31Z) - Noise to the Rescue: Escaping Local Minima in Neurosymbolic Local Search
We show that applying BP to Gödel logic, which represents conjunction and disjunction as min and max, is equivalent to a local search algorithm for SAT solving.
We propose the Gödel Trick, which adds noise to the model's logits to escape local optima.
arXiv Detail & Related papers (2025-03-03T18:42:13Z) - Differentiable Logic Programming for Distant Supervision
We introduce a new method for integrating neural networks with logic programming in Neural-Symbolic AI (NeSy).
Unlike prior methods, our approach does not depend on symbolic solvers for reasoning about missing labels.
This method facilitates more efficient learning under distant supervision.
arXiv Detail & Related papers (2024-08-22T17:55:52Z) - The Role of Foundation Models in Neuro-Symbolic Learning and Reasoning
Neuro-Symbolic AI (NeSy) holds promise to ensure the safe deployment of AI systems.
Existing pipelines that train the neural and symbolic components sequentially require extensive labelling.
A new architecture, NeSyGPT, fine-tunes a vision-language foundation model to extract symbolic features from raw data.
arXiv Detail & Related papers (2024-02-02T20:33:14Z) - NeuralFastLAS: Fast Logic-Based Learning from Raw Data
Symbolic rule learners generate interpretable solutions; however, they require the input to be encoded symbolically.
Neuro-symbolic approaches overcome this issue by mapping raw data to latent symbolic concepts using a neural network.
We introduce NeuralFastLAS, a scalable and fast end-to-end approach that trains a neural network jointly with a symbolic learner.
arXiv Detail & Related papers (2023-10-08T12:33:42Z) - Injecting Logical Constraints into Neural Networks via Straight-Through Estimators
Injecting discrete logical constraints into neural network learning is one of the main challenges in neuro-symbolic AI.
We find that the straight-through estimator, a method introduced to train binary neural networks, can be applied effectively to incorporate logical constraints into neural network learning.
arXiv Detail & Related papers (2023-07-10T05:12:05Z) - Neuro-Symbolic Learning of Answer Set Programs from Raw Data
Neuro-Symbolic AI aims to combine the interpretability of symbolic techniques with the ability of deep learning to learn from raw data.
We introduce Neuro-Symbolic Inductive Learner (NSIL), an approach that trains a general neural network to extract latent concepts from raw data.
NSIL learns expressive knowledge, solves computationally complex problems, and achieves state-of-the-art performance in terms of accuracy and data efficiency.
arXiv Detail & Related papers (2022-05-25T12:41:59Z) - NeuroLogic A*esque Decoding: Constrained Text Generation with Lookahead Heuristics
We propose NeuroLogic A*esque, a decoding algorithm that incorporates estimates of future cost.
We develop lookahead heuristics that are efficient for large-scale language models.
Our approach is competitive with strong baselines on five generation tasks and achieves new state-of-the-art performance on table-to-text generation, constrained machine translation, and keyword-constrained generation.
arXiv Detail & Related papers (2021-12-16T09:22:54Z) - SLASH: Embracing Probabilistic Circuits into Neural Answer Set Programming
We introduce SLASH -- a novel deep probabilistic programming language (DPPL).
At its core, SLASH consists of Neural-Probabilistic Predicates (NPPs) and logical programs which are united via answer set programming.
We evaluate SLASH on the MNIST-addition benchmark as well as on novel DPPL tasks such as missing-data prediction and set prediction, achieving state-of-the-art performance.
arXiv Detail & Related papers (2021-10-07T12:35:55Z) - NeuralLog: Natural Language Inference with Joint Neural and Logical Reasoning
We propose an inference framework called NeuralLog, which utilizes both a monotonicity-based logical inference engine and a neural network language model for phrase alignment.
Our framework models the NLI task as a classic search problem and uses the beam search algorithm to search for optimal inference paths.
Experiments show that our joint logical and neural inference system improves accuracy on the NLI task and achieves state-of-the-art accuracy on the SICK and MED datasets.
arXiv Detail & Related papers (2021-05-29T01:02:40Z) - Learning Neuro-Symbolic Relational Transition Models for Bilevel Planning
In this work, we take a step toward bridging the gap between model-based reinforcement learning and integrated symbolic-geometric robotic planning.
Neuro-symbolic relational transition models (NSRTs) have both symbolic and neural components, enabling a bilevel planning scheme in which symbolic AI planning in an outer loop guides continuous planning with neural models in an inner loop.
NSRTs can be learned after only tens or hundreds of training episodes, and then used for fast planning in new tasks that require up to 60 actions to reach the goal.
arXiv Detail & Related papers (2021-05-28T19:37:18Z) - Learning Adaptive Language Interfaces through Decomposition
We introduce a neural semantic parsing system that learns new high-level abstractions through decomposition.
Users interactively teach the system by breaking down high-level utterances describing novel behavior into low-level steps.
arXiv Detail & Related papers (2020-10-11T08:27:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.