Towards a Causal Probabilistic Framework for Prediction,
Action-Selection & Explanations for Robot Block-Stacking Tasks
- URL: http://arxiv.org/abs/2308.06203v2
- Date: Fri, 29 Sep 2023 00:19:11 GMT
- Title: Towards a Causal Probabilistic Framework for Prediction,
Action-Selection & Explanations for Robot Block-Stacking Tasks
- Authors: Ricardo Cannizzaro, Jonathan Routley, and Lars Kunze
- Abstract summary: Causal models provide a principled framework to encode formal knowledge of the causal relationships that govern the robot's interaction with its environment.
We propose a novel causal probabilistic framework to embed a physics simulation capability into a structural causal model to permit robots to perceive and assess the current state of a block-stacking task.
- Score: 4.244706520140677
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Uncertainties in the real world mean that it is impossible for
system designers to anticipate and explicitly design for all scenarios that a
robot might encounter. Robots designed in this way are therefore fragile and
fail outside of highly controlled environments. Causal models provide a
principled framework to
encode formal knowledge of the causal relationships that govern the robot's
interaction with its environment, in addition to probabilistic representations
of noise and uncertainty typically encountered by real-world robots. Combined
with causal inference, these models permit an autonomous agent to understand,
reason about, and explain its environment. In this work, we focus on a robot
block-stacking task because it exercises the fundamental perception and
manipulation capabilities required by many applications, including warehouse
logistics and domestic human-support robotics. We propose a
novel causal probabilistic framework to embed a physics simulation capability
into a structural causal model to permit robots to perceive and assess the
current state of a block-stacking task, reason about the next-best action from
placement candidates, and generate post-hoc counterfactual explanations. We
provide exemplar next-best action selection results and outline planned
experimentation in simulated and real-world robot block-stacking tasks.
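To make the proposed pipeline concrete, here is a minimal, self-contained Python sketch of simulation-based next-best placement selection: each candidate placement is scored by the estimated probability that the tower survives noisy execution, and the highest-scoring candidate is chosen. The stability test is a toy surrogate for the embedded physics simulation, and all names and parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_stability(tower_xs, candidate_x, block_w=0.2, noise_sd=0.05, n_rollouts=200):
    """Estimate P(tower remains stable) if a block is placed at candidate_x.

    Toy surrogate for the physics simulation embedded in the SCM: a rollout
    survives if, under Gaussian placement noise, every block's centre stays
    within half a block-width of the block beneath it.
    """
    survived = 0
    for _ in range(n_rollouts):
        xs = np.append(tower_xs, candidate_x) + rng.normal(0.0, noise_sd, len(tower_xs) + 1)
        if np.all(np.abs(np.diff(xs)) < block_w / 2):
            survived += 1
    return survived / n_rollouts

def next_best_action(tower_xs, candidates):
    """Greedy next-best action: the candidate with the highest estimated stability."""
    scores = {c: simulate_stability(tower_xs, c) for c in candidates}
    return max(scores, key=scores.get), scores

tower = np.array([0.0, 0.05])           # current (observed) block x-positions
candidates = [-0.15, 0.0, 0.1, 0.3]     # candidate placement x-positions
best, scores = next_best_action(tower, candidates)
print(best, scores)
```

A post-hoc counterfactual explanation can reuse the same machinery: comparing the estimated stability of the executed placement against an alternative candidate answers questions of the form "would the tower have stood had the block been placed elsewhere?"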
Related papers
- $π_0$: A Vision-Language-Action Flow Model for General Robot Control [77.32743739202543]
We propose a novel flow matching architecture built on top of a pre-trained vision-language model (VLM) to inherit Internet-scale semantic knowledge.
We evaluate our model in terms of its ability to perform tasks zero-shot after pre-training, follow language instructions from people, and acquire new skills via fine-tuning.
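As a rough sketch of the flow-matching objective such a model builds on (the VLM backbone and π0's actual architecture are omitted; every name below is an illustrative assumption, not the paper's interface):

```python
import torch
import torch.nn as nn

class ActionFlowHead(nn.Module):
    """Tiny stand-in for a flow-matching action head: predicts the velocity
    field v(a_t, t | obs) that transports noise a_0 towards expert actions a_1."""
    def __init__(self, obs_dim, act_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs, a_t, t):
        return self.net(torch.cat([obs, a_t, t], dim=-1))

def flow_matching_loss(model, obs, a_expert):
    """Standard linear-path flow-matching objective."""
    a0 = torch.randn_like(a_expert)       # noise sample
    t = torch.rand(a_expert.shape[0], 1)  # interpolation time
    a_t = (1 - t) * a0 + t * a_expert     # point on the straight path
    target_v = a_expert - a0              # constant velocity along that path
    return ((model(obs, a_t, t) - target_v) ** 2).mean()

head = ActionFlowHead(obs_dim=32, act_dim=7)
loss = flow_matching_loss(head, torch.randn(64, 32), torch.randn(64, 7))
loss.backward()
```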
arXiv Detail & Related papers (2024-10-31T17:22:30Z)
- Towards Probabilistic Planning of Explanations for Robot Navigation [2.6196780831364643]
This paper introduces a novel approach that integrates user-centered design principles directly into the core of robot path planning processes.
We propose a probabilistic framework for automated planning of explanations for robot navigation.
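The paper's formulation is not reproduced here, but the general shape of probabilistic explanation planning can be sketched as choosing, under uncertainty about the user, the explanation with the highest expected utility. A hypothetical toy version:

```python
def plan_explanation(candidates, p_understood, cost):
    """Pick the explanation maximising expected utility: the probability the
    user understands it, minus a verbosity/interruption cost. All inputs are
    hypothetical placeholders, not the paper's model."""
    return max(candidates, key=lambda e: p_understood[e] - cost[e])

candidates = ["none", "short_text", "full_route_rationale"]
p_understood = {"none": 0.1, "short_text": 0.6, "full_route_rationale": 0.85}
cost = {"none": 0.0, "short_text": 0.2, "full_route_rationale": 0.5}
print(plan_explanation(candidates, p_understood, cost))
```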
arXiv Detail & Related papers (2024-10-26T09:52:14Z)
- Commonsense Reasoning for Legged Robot Adaptation with Vision-Language Models [81.55156507635286]
Legged robots are physically capable of traversing a diverse range of environments and overcoming a wide range of obstructions.
Current learning methods often struggle with generalization to the long tail of unexpected situations without heavy human supervision.
We propose a system, VLM-Predictive Control (VLM-PC), combining two key components that we find to be crucial for eliciting on-the-fly, adaptive behavior selection.
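A heavily simplified sketch of on-the-fly skill selection with a VLM in the loop; `vlm_complete` and the skill names are hypothetical stand-ins, not VLM-PC's actual interface or components:

```python
def select_skill(vlm_complete, image_caption, history, skills):
    """Ask a VLM to pick the next skill given recent context.

    `vlm_complete` is a hypothetical text-completion callable (e.g. a thin
    wrapper around any VLM API); it is an assumption, not part of VLM-PC.
    """
    prompt = (
        f"Robot view: {image_caption}\n"
        f"Recent attempts: {history}\n"
        f"Available skills: {', '.join(skills)}\n"
        "Reply with the single best skill name."
    )
    reply = vlm_complete(prompt).strip().lower()
    # Fall back to the first skill if the reply is not an exact match.
    return reply if reply in skills else skills[0]

# Usage with a stub standing in for the model:
stub = lambda prompt: "climb"
print(select_skill(stub, "a log blocks the path", ["walk: failed"], ["walk", "climb", "back_up"]))
```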
arXiv Detail & Related papers (2024-07-02T21:00:30Z)
- A Causal Bayesian Network and Probabilistic Programming Based Reasoning Framework for Robot Manipulation Under Uncertainty [4.087774077861305]
We propose a flexible and generalisable physics-informed causal Bayesian network (CBN) based framework for robot manipulation under uncertainty.
We demonstrate our framework's ability to: (1) predict manipulation outcomes with high accuracy (Pred Acc: 88.6%); and (2) perform greedy next-best action selection with a 94.2% task success rate.
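A toy stand-in for the idea (not the paper's CBN): estimate P(success | do(action)) by Monte Carlo over perception and execution noise, then select the action greedily:

```python
import numpy as np

rng = np.random.default_rng(1)

def p_success_given_action(offset, n=5000, sense_sd=0.01, exec_sd=0.02, tol=0.04):
    """Monte Carlo estimate of P(success | do(place at offset)) in a toy model:
    the true target differs from the sensed one by perception noise, and the
    executed placement differs from the commanded offset by actuation noise."""
    true_target = rng.normal(0.0, sense_sd, n)       # perception noise
    executed = offset + rng.normal(0.0, exec_sd, n)  # actuation noise
    return np.mean(np.abs(executed - true_target) < tol)

candidates = [0.0, 0.02, 0.05]
best = max(candidates, key=p_success_given_action)
print(best, {c: round(p_success_given_action(c), 3) for c in candidates})
```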
arXiv Detail & Related papers (2024-03-21T15:36:26Z)
- RoboScript: Code Generation for Free-Form Manipulation Tasks across Real and Simulation [77.41969287400977]
This paper presents RoboScript, a platform for a deployable robot manipulation pipeline powered by code generation.
We also present a benchmark for code generation for robot manipulation tasks specified in free-form natural language.
We demonstrate the adaptability of our code generation framework across multiple robot embodiments, including the Franka and UR5 robot arms.
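A minimal sketch of the code-generation-and-execution loop, assuming a hypothetical `llm_generate` callable and a whitelisted robot API; none of these names come from RoboScript:

```python
def run_generated_code(llm_generate, instruction, robot_api):
    """Generate robot code from a natural-language task and execute it against
    a small whitelisted API (a sandboxed namespace with no builtins)."""
    prompt = (
        "Write Python using only move_to(x, y, z), grasp(), release().\n"
        f"Task: {instruction}"
    )
    code = llm_generate(prompt)
    exec(code, {"__builtins__": {}}, dict(robot_api))

# Usage with stubs standing in for the LLM and the robot:
stub_llm = lambda p: "move_to(0.3, 0.0, 0.1)\ngrasp()\nmove_to(0.3, 0.2, 0.1)\nrelease()"
log = []
api = {
    "move_to": lambda x, y, z: log.append(("move_to", x, y, z)),
    "grasp": lambda: log.append(("grasp",)),
    "release": lambda: log.append(("release",)),
}
run_generated_code(stub_llm, "move the block 20 cm to the left", api)
print(log)
```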
arXiv Detail & Related papers (2024-02-22T15:12:00Z)
- Toward General-Purpose Robots via Foundation Models: A Survey and Meta-Analysis [82.59451639072073]
General-purpose robots operate seamlessly in any environment, with any object, and utilize various skills to complete diverse tasks.
As a community, we have been constraining most robotic systems by designing them for specific tasks, training them on specific datasets, and deploying them within specific environments.
Motivated by the impressive open-set performance and content generation capabilities of web-scale, large-capacity pre-trained models, we devote this survey to exploring how foundation models can be applied to general-purpose robotics.
arXiv Detail & Related papers (2023-12-14T10:02:55Z)
- Active Predictive Coding: Brain-Inspired Reinforcement Learning for Sparse Reward Robotic Control Problems [79.07468367923619]
We propose a backpropagation-free approach to robotic control through the neuro-cognitive computational framework of neural generative coding (NGC).
We design an agent built completely from powerful predictive coding/processing circuits that facilitate dynamic, online learning from sparse rewards.
We show that our proposed ActPC agent performs well in the face of sparse (extrinsic) reward signals and is competitive with or outperforms several powerful backprop-based RL approaches.
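A minimal linear predictive-coding sketch of the backpropagation-free idea: inference and learning both use only local prediction errors, with no backpropagated global gradient. This illustrates standard predictive coding, not the paper's exact NGC equations:

```python
import numpy as np

rng = np.random.default_rng(2)

W = rng.normal(0.0, 0.1, (8, 4))  # generative weights: latent (4) -> observation (8)
z = np.zeros(4)                   # latent state

def pc_step(x, W, z, lr_z=0.1, lr_w=0.01, n_inner=20):
    """One predictive-coding step: settle the latent state against the local
    prediction error, then apply a Hebbian-style local weight update."""
    for _ in range(n_inner):
        e = x - W @ z              # local prediction error at the observation layer
        z = z + lr_z * (W.T @ e)   # inference: reduce the error by adjusting z
    W = W + lr_w * np.outer(x - W @ z, z)  # learning: local, error-driven update
    return W, z

x = rng.normal(0.0, 1.0, 8)
W, z = pc_step(x, W, z)
print(np.linalg.norm(x - W @ z))   # residual prediction error after one step
```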
arXiv Detail & Related papers (2022-09-19T16:49:32Z)
- DeepSym: Deep Symbol Generation and Rule Learning from Unsupervised Continuous Robot Interaction for Planning [1.3854111346209868]
Our system is verified in a physics-based 3D simulation environment where a robot arm-hand system learns symbols that can be interpreted as 'rollable', 'insertable', and 'larger-than' from its push and stack actions.
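A sketch of the core mechanism, assuming DeepSym-style discrete symbols emerge from a binarized bottleneck trained end-to-end with a straight-through estimator; the architecture below is illustrative, not the authors':

```python
import torch
import torch.nn as nn

class STBinarize(torch.autograd.Function):
    """Straight-through binarization: hard 0/1 symbols forward, identity gradient back."""
    @staticmethod
    def forward(ctx, x):
        return (x > 0).float()

    @staticmethod
    def backward(ctx, g):
        return g

class SymbolPredictor(nn.Module):
    """Encode object features into binary symbols, then predict the effect of
    an action from (symbols, action)."""
    def __init__(self, obj_dim=16, n_symbols=4, act_dim=2, eff_dim=8):
        super().__init__()
        self.enc = nn.Linear(obj_dim, n_symbols)
        self.pred = nn.Linear(n_symbols + act_dim, eff_dim)

    def forward(self, obj, act):
        symbols = STBinarize.apply(self.enc(obj))  # e.g. a learned 'rollable' bit
        return self.pred(torch.cat([symbols, act], dim=-1)), symbols

m = SymbolPredictor()
effect, symbols = m(torch.randn(5, 16), torch.randn(5, 2))
print(symbols)
```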
arXiv Detail & Related papers (2020-12-04T11:26:06Z)
- Designing Environments Conducive to Interpretable Robot Behavior [35.95540723324049]
We investigate the opportunities and limitations of environment design as a tool to promote a type of interpretable behavior.
We formulate a novel environment design framework that considers design over multiple tasks and over a time horizon.
arXiv Detail & Related papers (2020-07-02T00:50:10Z)
- SAPIEN: A SimulAted Part-based Interactive ENvironment [77.4739790629284]
SAPIEN is a realistic and physics-rich simulated environment that hosts a large-scale set of articulated objects.
We evaluate state-of-the-art vision algorithms for part detection and motion attribute recognition as well as demonstrate robotic interaction tasks.
arXiv Detail & Related papers (2020-03-19T00:11:34Z)
- Autonomous Planning Based on Spatial Concepts to Tidy Up Home Environments with Service Robots [5.739787445246959]
We propose a novel planning method that can efficiently estimate the order and positions of the objects to be tidied up by learning the parameters of a probabilistic generative model.
The model allows a robot to learn the distributions of the co-occurrence probability of the objects and places to tidy up using the multimodal sensor information collected in a tidied environment.
We evaluate the effectiveness of the proposed method by an experimental simulation that reproduces the conditions of the Tidy Up Here task of the World Robot Summit 2018 international robotics competition.
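A toy version of the underlying idea, assuming a simple smoothed object-place co-occurrence model and a confidence-ordered tidying plan (the paper's multimodal spatial-concept model is richer):

```python
import numpy as np

# Object-place co-occurrence counts observed in a tidied scene (illustrative data).
objects = ["cup", "book", "toy"]
places = ["shelf", "cupboard", "toy_box"]
counts = np.array([[1, 9, 0],    # cup:  mostly seen in the cupboard
                   [8, 1, 1],    # book: mostly on the shelf
                   [0, 1, 9]])   # toy:  mostly in the toy box
p_place = (counts + 1) / (counts + 1).sum(axis=1, keepdims=True)  # Laplace smoothing

# Tidy in order of confidence: the most predictable objects first.
order = np.argsort(-p_place.max(axis=1))
for i in order:
    j = p_place[i].argmax()
    print(f"put {objects[i]} in {places[j]} (p={p_place[i, j]:.2f})")
```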
arXiv Detail & Related papers (2020-02-10T11:49:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.