Hierarchical Affordance Discovery using Intrinsic Motivation
- URL: http://arxiv.org/abs/2009.10968v1
- Date: Wed, 23 Sep 2020 07:18:21 GMT
- Title: Hierarchical Affordance Discovery using Intrinsic Motivation
- Authors: Alexandre Manoury (IMT Atlantique - INFO), Sao Mai Nguyen, Cédric
Buche
- Abstract summary: We propose an algorithm using intrinsic motivation to guide the learning of affordances for a mobile robot.
This algorithm can autonomously discover, learn and adapt interrelated affordances without pre-programmed actions.
Once learned, these affordances may be used by the algorithm to plan action sequences that perform tasks of varying difficulty.
- Score: 69.9674326582747
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To be capable of lifelong learning in a real-life environment, robots have to
tackle multiple challenges. Being able to relate physical properties they may
observe in their environment to possible interactions they may have is one of
them. This skill, named affordance learning, is strongly related to embodiment
and is mastered through each person's development: each individual learns
affordances differently through their own interactions with their surroundings.
Current methods for affordance learning usually use either fixed actions to
learn these affordances or focus on static setups involving a robotic arm to be
operated. In this article, we propose an algorithm using intrinsic motivation
to guide the learning of affordances for a mobile robot. This algorithm can
autonomously discover, learn and adapt interrelated affordances without
pre-programmed actions. Once learned, these affordances may be used by the
algorithm to plan action sequences that perform tasks of varying difficulty.
We then present an experiment and analyse our system before
comparing it with other approaches from reinforcement learning and affordance
learning.
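The abstract's core mechanism, intrinsic motivation guiding which affordance to practise, is commonly realised as task selection by learning progress: the robot favours the task whose prediction error is dropping fastest. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's actual algorithm; the task names, window size and epsilon value are illustrative assumptions.

```python
import random

class LearningProgressSelector:
    """Toy learning-progress-based task selector (illustrative only)."""

    def __init__(self, tasks, window=10, epsilon=0.2):
        self.errors = {task: [] for task in tasks}  # recent error history per task
        self.window = window
        self.epsilon = epsilon

    def record(self, task, error):
        self.errors[task].append(error)

    def progress(self, task):
        hist = self.errors[task][-self.window:]
        if len(hist) < 2:
            return float("inf")  # unexplored tasks look maximally interesting
        half = len(hist) // 2
        older = sum(hist[:half]) / half
        recent = sum(hist[half:]) / (len(hist) - half)
        return older - recent  # positive when error is decreasing

    def choose(self):
        # epsilon-greedy over learning progress to keep exploring
        if random.random() < self.epsilon:
            return random.choice(list(self.errors))
        return max(self.errors, key=self.progress)
```

Under this scheme a task whose error keeps shrinking is selected more often than one that has plateaued, which is the qualitative behaviour intrinsic motivation is meant to produce.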
Related papers
- Unsupervised Skill Discovery for Robotic Manipulation through Automatic Task Generation [17.222197596599685]
We propose a Skill Learning approach that discovers composable behaviors by solving a large number of autonomously generated tasks.
Our method learns skills allowing the robot to consistently and robustly interact with objects in its environment.
The learned skills can be used to solve a set of unseen manipulation tasks, in simulation as well as on a real robotic platform.
arXiv Detail & Related papers (2024-10-07T09:19:13Z) - Evaluating Continual Learning on a Home Robot [30.620205237707342]
We show how continual learning methods can be adapted for use on a real, low-cost home robot.
We propose SANER, a method for continuously learning a library of skills, and ABIP, as the backbone to support it.
arXiv Detail & Related papers (2023-06-04T17:14:49Z) - Incremental procedural and sensorimotor learning in cognitive humanoid
robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z) - Affordances from Human Videos as a Versatile Representation for Robotics [31.248842798600606]
We train a visual affordance model that estimates where and how in the scene a human is likely to interact.
The structure of these behavioral affordances directly enables the robot to perform many complex tasks.
We show the efficacy of our approach, which we call VRB, across 4 real world environments, over 10 different tasks, and 2 robotic platforms operating in the wild.
arXiv Detail & Related papers (2023-04-17T17:59:34Z) - Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective.
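The reward construction described in that entry, distance to a goal in a learned embedding space, can be sketched as follows. Note the `embed` function here is a trivial placeholder standing in for the time-contrastively trained network; everything about it is an assumption for illustration.

```python
import math

def embed(observation):
    # Placeholder embedding: in the paper this is a neural network trained
    # with a time-contrastive objective so that temporally close frames map
    # to nearby points. Here it is just two summary statistics.
    return [sum(observation) / len(observation), max(observation)]

def reward(observation, goal_observation):
    # Reward is the negative Euclidean distance between the embeddings of
    # the current observation and the goal observation.
    z, z_goal = embed(observation), embed(goal_observation)
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(z, z_goal)))
    return -dist  # approaches zero as the state nears the goal
```

The key property is that reward increases monotonically as the state's embedding approaches the goal's, without any task-specific labelling.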
arXiv Detail & Related papers (2022-11-16T16:26:48Z) - Inducing Structure in Reward Learning by Learning Features [31.413656752926208]
We introduce a novel type of human input for teaching features and an algorithm that utilizes it to learn complex features from the raw state space.
We demonstrate our method in settings where all features have to be learned from scratch, as well as where some of the features are known.
arXiv Detail & Related papers (2022-01-18T16:02:29Z) - A Differentiable Recipe for Learning Visual Non-Prehensile Planar
Manipulation [63.1610540170754]
We focus on the problem of visual non-prehensile planar manipulation.
We propose a novel architecture that combines video decoding neural models with priors from contact mechanics.
We find that our modular and fully differentiable architecture performs better than learning-only methods on unseen objects and motions.
arXiv Detail & Related papers (2021-11-09T18:39:45Z) - Dual-Arm Adversarial Robot Learning [0.6091702876917281]
We propose dual-arm settings as platforms for robot learning.
We will discuss the potential benefits of this setup as well as the challenges and research directions that can be pursued.
arXiv Detail & Related papers (2021-10-15T12:51:57Z) - Lifelong Robotic Reinforcement Learning by Retaining Experiences [61.79346922421323]
Many multi-task reinforcement learning efforts assume the robot can collect data from all tasks at all times.
In this work, we study a practical sequential multi-task RL problem motivated by the practical constraints of physical robotic systems.
We derive an approach that effectively leverages the data and policies learned for previous tasks to cumulatively grow the robot's skill-set.
arXiv Detail & Related papers (2021-09-19T18:00:51Z) - The Ingredients of Real-World Robotic Reinforcement Learning [71.92831985295163]
We discuss the elements that are needed for a robotic learning system that can continually and autonomously improve with data collected in the real world.
We propose a particular instantiation of such a system, using dexterous manipulation as our case study.
We demonstrate that our complete system can learn without any human intervention, acquiring a variety of vision-based skills with a real-world three-fingered hand.
arXiv Detail & Related papers (2020-04-27T03:36:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.