Active Inference for Robotic Manipulation
- URL: http://arxiv.org/abs/2206.10313v1
- Date: Wed, 1 Jun 2022 12:19:38 GMT
- Title: Active Inference for Robotic Manipulation
- Authors: Tim Schneider, Boris Belousov, Hany Abdulsamad, Jan Peters
- Abstract summary: Active Inference is a theory that deals with partial observability in an explicit manner.
In this work, we apply Active Inference to a hard-to-explore simulated robotic manipulation task.
We show that the information-seeking behavior induced by Active Inference allows the agent to explore these challenging, sparse environments systematically.
- Score: 30.692885688744507
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Robotic manipulation stands as a largely unsolved problem despite significant
advances in robotics and machine learning in the last decades. One of the
central challenges of manipulation is partial observability, as the agent
usually does not know all physical properties of the environment and the
objects it is manipulating in advance. A recently emerging theory that deals
with partial observability in an explicit manner is Active Inference. It does
so by driving the agent to act in a way that is not only goal-directed but also
informative about the environment. In this work, we apply Active Inference to a
hard-to-explore simulated robotic manipulation task, in which the agent has to
balance a ball into a target zone. Since the reward of this task is sparse, in
order to explore this environment, the agent has to learn to balance the ball
without any extrinsic feedback, purely driven by its own curiosity. We show
that the information-seeking behavior induced by Active Inference allows the
agent to explore these challenging, sparse environments systematically.
Finally, we conclude that using an information-seeking objective is beneficial
in sparse environments and allows the agent to solve tasks in which methods
that do not exhibit directed exploration fail.
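The trade-off the abstract describes, acting to reach preferred outcomes while also acting to reduce uncertainty, is typically expressed as an expected free energy that sums an extrinsic (goal-directed) and an epistemic (information-seeking) term. The following is a minimal tabular sketch of that objective, not the authors' implementation; all names are hypothetical.

```python
import numpy as np

def expected_free_energy(belief, likelihood, preferred_obs):
    """Simplified expected free energy for one candidate action.

    belief:        q(s), predicted hidden-state distribution, shape (S,)
    likelihood:    p(o|s), observation model, shape (S, O)
    preferred_obs: C(o), the agent's preference distribution over observations
    """
    eps = 1e-12
    # Predicted observations under this action: q(o) = sum_s p(o|s) q(s)
    q_obs = likelihood.T @ belief

    # Extrinsic value: how well predicted observations match preferences.
    extrinsic = q_obs @ np.log(preferred_obs + eps)

    # Epistemic value: expected information gain about hidden states,
    # i.e. the expected KL from the prior belief to the posterior belief.
    epistemic = 0.0
    for o, p_o in enumerate(q_obs):
        posterior = likelihood[:, o] * belief
        posterior = posterior / (posterior.sum() + eps)
        epistemic += p_o * np.sum(
            posterior * np.log((posterior + eps) / (belief + eps))
        )

    # Active Inference selects actions that minimize G.
    return -(extrinsic + epistemic)
```

An agent would evaluate this score under each candidate action's predicted belief and choose the action with the lowest value; the epistemic term is what yields the curiosity-driven exploration the abstract refers to.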
Related papers
- Polaris: Open-ended Interactive Robotic Manipulation via Syn2Real Visual Grounding and Large Language Models [53.22792173053473]
We introduce an interactive robotic manipulation framework called Polaris.
Polaris integrates perception and interaction by utilizing GPT-4 alongside grounded vision models.
We propose a novel Synthetic-to-Real (Syn2Real) pose estimation pipeline.
arXiv Detail & Related papers (2024-08-15T06:40:38Z)
- Learning Extrinsic Dexterity with Parameterized Manipulation Primitives [8.7221770019454]
We learn a sequence of actions that utilize the environment to change the object's pose.
Our approach can control the object's state through exploiting interactions between the object, the gripper, and the environment.
We evaluate our approach on picking box-shaped objects with varying weights, shapes, and friction properties from a constrained table-top workspace.
arXiv Detail & Related papers (2023-10-26T21:28:23Z)
- ALAN: Autonomously Exploring Robotic Agents in the Real World [28.65531878636441]
ALAN is an autonomously exploring robotic agent that can perform tasks in the real world with little training and interaction time.
This is enabled by measuring environment change, which reflects object movement while ignoring changes in the robot's position (a toy version is sketched after this entry).
We evaluate our approach on two different real-world play kitchen settings, enabling a robot to efficiently explore and discover manipulation skills.
arXiv Detail & Related papers (2023-02-13T18:59:09Z)
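A toy version of this change measure, assuming ground-truth object positions are available (ALAN's actual signal is computed from vision); names are hypothetical.

```python
import numpy as np

def environment_change(objects_before, objects_after):
    """Total displacement of scene objects between two timesteps,
    given (n_objects, 3) position arrays. The robot's own pose is
    deliberately excluded, so pure robot motion scores zero."""
    displacement = np.linalg.norm(objects_after - objects_before, axis=-1)
    return float(displacement.sum())
```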
- Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective (see the sketch after this entry).
arXiv Detail & Related papers (2022-11-16T16:26:48Z)
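As a rough illustration of the reward construction described above, the sketch below returns the negative distance to a goal image in an embedding space; `encoder` is a placeholder for a network trained with a time-contrastive objective on the human videos, not the authors' model.

```python
import numpy as np

def embedding_reward(encoder, observation, goal_image):
    """Task-agnostic reward: negative distance between the current
    observation and the goal in a learned embedding space."""
    z_obs = encoder(observation)    # embedding of the current camera image
    z_goal = encoder(goal_image)    # embedding of the desired outcome
    # Closer to the goal in embedding space => higher reward.
    return -float(np.linalg.norm(z_obs - z_goal))
```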
- Active Exploration for Robotic Manipulation [40.39182660794481]
This paper proposes a model-based active exploration approach that enables efficient learning in sparse-reward robotic manipulation tasks.
We evaluate our proposed algorithm in simulation and on a real robot, in both cases training from scratch with our method.
arXiv Detail & Related papers (2022-10-23T18:07:51Z)
- Object Manipulation via Visual Target Localization [64.05939029132394]
Training agents to manipulate objects poses many challenges.
We propose an approach that explores the environment in search for target objects, computes their 3D coordinates once they are located, and then continues to estimate their 3D locations even when the objects are not visible.
Our evaluations show a massive 3x improvement in success rate over a model that has access to the same sensory suite.
arXiv Detail & Related papers (2022-03-15T17:59:01Z)
- Information is Power: Intrinsic Control via Information Capture [110.3143711650806]
We argue that a compact and general learning objective is to minimize the entropy of the agent's state visitation estimated using a latent state-space model.
This objective induces an agent to both gather information about its environment, corresponding to reducing uncertainty, and to gain control over its environment, corresponding to reducing the unpredictability of future world states (a toy version is sketched after this entry).
arXiv Detail & Related papers (2021-12-07T18:50:42Z)
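A minimal sketch of this entropy objective, assuming the latent states have already been discretized so visitation can simply be counted; the paper estimates the entropy with a latent state-space model instead, and all names below are hypothetical.

```python
import numpy as np

def visitation_entropy(state_ids, n_states):
    """Shannon entropy of the empirical distribution over (discretized)
    latent states visited so far."""
    counts = np.bincount(state_ids, minlength=n_states).astype(float)
    total = counts.sum()
    if total == 0:
        return 0.0
    p = counts / total
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

def intrinsic_reward(history, new_state, n_states):
    """Reward the agent for reducing visitation entropy, i.e. for making
    the world it occupies more predictable and controllable."""
    before = visitation_entropy(np.array(history, dtype=int), n_states)
    after = visitation_entropy(np.array(history + [new_state], dtype=int), n_states)
    return before - after
```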
- Improving Object Permanence using Agent Actions and Reasoning [8.847502932609737]
Existing approaches learn object permanence from low-level perception.
We argue that object permanence can be improved when the robot uses knowledge about executed actions.
arXiv Detail & Related papers (2021-10-01T07:09:49Z)
- Learning Affordance Landscapes for Interaction Exploration in 3D Environments [101.90004767771897]
Embodied agents must be able to master how their environment works.
We introduce a reinforcement learning approach for exploration for interaction.
We demonstrate our idea with AI2-iTHOR.
arXiv Detail & Related papers (2020-08-21T00:29:36Z)
- Mutual Information-based State-Control for Intrinsically Motivated Reinforcement Learning [102.05692309417047]
In reinforcement learning, an agent learns to reach a set of goals by means of an external reward signal.
In the natural world, intelligent organisms learn from internal drives, bypassing the need for external signals.
We propose to formulate an intrinsic objective as the mutual information between the goal states and the controllable states (see the sketch after this entry).
arXiv Detail & Related papers (2020-02-05T19:21:20Z)
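For concreteness, the mutual information I(S_g; S_c) between goal states and controllable states can be computed on a toy tabular joint distribution as below; this only illustrates the quantity being maximized, not the authors' estimator.

```python
import numpy as np

def mutual_information(joint):
    """I(S_g; S_c) = sum_{g,c} p(g,c) * log( p(g,c) / (p(g) * p(c)) )
    for a tabular joint over goal states (rows) and controllable
    states (columns)."""
    joint = joint / joint.sum()
    p_g = joint.sum(axis=1, keepdims=True)  # marginal over goal states
    p_c = joint.sum(axis=0, keepdims=True)  # marginal over controllable states
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log(joint[nz] / (p_g @ p_c)[nz])))

# Toy example: goal states and controllable states are strongly coupled,
# so the intrinsic objective is large.
joint = np.array([[0.4, 0.1],
                  [0.1, 0.4]])
print(mutual_information(joint))  # ~0.19 nats
```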
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.