Towards a Causal Probabilistic Framework for Prediction,
Action-Selection & Explanations for Robot Block-Stacking Tasks
- URL: http://arxiv.org/abs/2308.06203v2
- Date: Fri, 29 Sep 2023 00:19:11 GMT
- Title: Towards a Causal Probabilistic Framework for Prediction,
Action-Selection & Explanations for Robot Block-Stacking Tasks
- Authors: Ricardo Cannizzaro, Jonathan Routley, and Lars Kunze
- Abstract summary: Causal models provide a principled framework to encode formal knowledge of the causal relationships that govern the robot's interaction with its environment.
We propose a novel causal probabilistic framework to embed a physics simulation capability into a structural causal model to permit robots to perceive and assess the current state of a block-stacking task.
- Score: 4.244706520140677
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Uncertainties in the real world mean that it is impossible for system designers
to anticipate and explicitly design for all scenarios that a robot might
encounter. Thus, robots designed like this are fragile and fail outside of
highly-controlled environments. Causal models provide a principled framework to
encode formal knowledge of the causal relationships that govern the robot's
interaction with its environment, in addition to probabilistic representations
of noise and uncertainty typically encountered by real-world robots. Combined
with causal inference, these models permit an autonomous agent to understand,
reason about, and explain its environment. In this work, we focus on the
problem of a robot block-stacking task due to the fundamental perception and
manipulation capabilities it demonstrates, required by many applications
including warehouse logistics and domestic human support robotics. We propose a
novel causal probabilistic framework to embed a physics simulation capability
into a structural causal model to permit robots to perceive and assess the
current state of a block-stacking task, reason about the next-best action from
placement candidates, and generate post-hoc counterfactual explanations. We
provide exemplar next-best action selection results and outline planned
experimentation in simulated and real-world robot block-stacking tasks.
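The abstract's core idea — embedding a physics simulation as the outcome mechanism inside a structural causal model, then scoring placement candidates by intervening on the action variable — can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the toy centre-of-mass stability rule stands in for a real physics simulator, and all function names, noise levels, and thresholds are assumptions.

```python
import random

def simulate_stability(offsets):
    """Stand-in physics mechanism (illustrative, not the paper's simulator):
    a tower of unit-width blocks is 'stable' if every block's horizontal
    offset keeps its centre of mass over the block below."""
    return all(abs(dx) < 0.5 for dx in offsets)

def scm_outcome(action_offset, existing_offsets, noise_sd=0.05, n_samples=200):
    """Estimate P(stable | do(place at action_offset)) by pushing the
    intervened action through the simulation under exogenous placement
    noise, mirroring the SCM's noise variables with Monte Carlo samples."""
    stable = 0
    for _ in range(n_samples):
        noisy = [dx + random.gauss(0.0, noise_sd)
                 for dx in existing_offsets + [action_offset]]
        stable += simulate_stability(noisy)
    return stable / n_samples

def next_best_action(candidates, existing_offsets):
    """Next-best action selection: the placement candidate with the
    highest estimated probability of a stable tower."""
    return max(candidates, key=lambda a: scm_outcome(a, existing_offsets))

random.seed(0)
tower = [0.05, -0.10]            # offsets of blocks already placed
candidates = [0.0, 0.25, 0.45]   # candidate x-offsets for the next block
print(next_best_action(candidates, tower))
```

Under this sketch, a counterfactual explanation amounts to re-running `scm_outcome` with an alternative action while holding the sampled noise fixed, asking "would the tower have stood had the block been placed elsewhere?".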
Related papers
- Commonsense Reasoning for Legged Robot Adaptation with Vision-Language Models [81.55156507635286]
Legged robots are physically capable of navigating a diverse variety of environments and overcoming a wide range of obstructions.
Current learning methods often struggle with generalization to the long tail of unexpected situations without heavy human supervision.
We propose a system, VLM-Predictive Control (VLM-PC), combining two key components that we find to be crucial for eliciting on-the-fly, adaptive behavior selection.
arXiv Detail & Related papers (2024-07-02T21:00:30Z)
- Physics-Based Causal Reasoning for Safe & Robust Next-Best Action Selection in Robot Manipulation Tasks [4.087774077861305]
We present a physics-informed causal-inference-based framework for a robot to probabilistically reason about candidate actions in a block stacking task.
We show that by embedding physics-based causal reasoning into robots' decision-making processes, we can make robot task execution safer, more reliable, and more robust to various types of uncertainty.
arXiv Detail & Related papers (2024-03-21T15:36:26Z)
- RoboCodeX: Multimodal Code Generation for Robotic Behavior Synthesis [102.1876259853457]
We propose a tree-structured multimodal code generation framework for generalized robotic behavior synthesis, termed RoboCodeX.
RoboCodeX decomposes high-level human instructions into multiple object-centric manipulation units consisting of physical preferences such as affordance and safety constraints.
To further enhance the capability to map conceptual and perceptual understanding into control commands, a specialized multimodal reasoning dataset is collected for pre-training and an iterative self-updating methodology is introduced for supervised fine-tuning.
arXiv Detail & Related papers (2024-02-25T15:31:43Z)
- RoboScript: Code Generation for Free-Form Manipulation Tasks across Real and Simulation [77.41969287400977]
This paper presents RobotScript, a platform for a deployable robot manipulation pipeline powered by code generation.
We also present a benchmark for code generation for robot manipulation tasks specified in free-form natural language.
We demonstrate the adaptability of our code generation framework across multiple robot embodiments, including the Franka and UR5 robot arms.
arXiv Detail & Related papers (2024-02-22T15:12:00Z)
- Toward General-Purpose Robots via Foundation Models: A Survey and Meta-Analysis [73.89558418030418]
Most existing robotic systems have been designed for specific tasks, trained on specific datasets, and deployed within specific environments.
Motivated by the impressive open-set performance and content generation capabilities of web-scale, large-capacity pre-trained models, we devote this survey to exploring how foundation models can be applied to robotics.
arXiv Detail & Related papers (2023-12-14T10:02:55Z)
- Active Predicting Coding: Brain-Inspired Reinforcement Learning for Sparse Reward Robotic Control Problems [79.07468367923619]
We propose a backpropagation-free approach to robotic control through the neuro-cognitive computational framework of neural generative coding (NGC).
We design an agent built completely from powerful predictive coding/processing circuits that facilitate dynamic, online learning from sparse rewards.
We show that our proposed ActPC agent performs well in the face of sparse (extrinsic) reward signals and is competitive with or outperforms several powerful backprop-based RL approaches.
arXiv Detail & Related papers (2022-09-19T16:49:32Z)
- DeepSym: Deep Symbol Generation and Rule Learning from Unsupervised Continuous Robot Interaction for Planning [1.3854111346209868]
Our system is verified in a physics-based 3D simulation environment, where a robot arm-hand system learns symbols that can be interpreted as 'rollable', 'insertable', and 'larger-than' from its push and stack actions.
arXiv Detail & Related papers (2020-12-04T11:26:06Z)
- Designing Environments Conducive to Interpretable Robot Behavior [35.95540723324049]
We investigate the opportunities and limitations of environment design as a tool to promote a type of interpretable behavior.
We formulate a novel environment design framework that considers design over multiple tasks and over a time horizon.
arXiv Detail & Related papers (2020-07-02T00:50:10Z)
- SAPIEN: A SimulAted Part-based Interactive ENvironment [77.4739790629284]
SAPIEN is a realistic and physics-rich simulated environment that hosts a large-scale set of articulated objects.
We evaluate state-of-the-art vision algorithms for part detection and motion attribute recognition as well as demonstrate robotic interaction tasks.
arXiv Detail & Related papers (2020-03-19T00:11:34Z)
- Autonomous Planning Based on Spatial Concepts to Tidy Up Home Environments with Service Robots [5.739787445246959]
We propose a novel planning method that can efficiently estimate the order and positions of the objects to be tidied up by learning the parameters of a probabilistic generative model.
The model allows a robot to learn the distributions of the co-occurrence probability of the objects and places to tidy up using the multimodal sensor information collected in a tidied environment.
We evaluate the effectiveness of the proposed method by an experimental simulation that reproduces the conditions of the Tidy Up Here task of the World Robot Summit 2018 international robotics competition.
arXiv Detail & Related papers (2020-02-10T11:49:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.