Recent Advances of Deep Robotic Affordance Learning: A Reinforcement
Learning Perspective
- URL: http://arxiv.org/abs/2303.05344v2
- Date: Fri, 10 Mar 2023 18:14:19 GMT
- Title: Recent Advances of Deep Robotic Affordance Learning: A Reinforcement
Learning Perspective
- Authors: Xintong Yang, Ze Ji, Jing Wu, Yu-kun Lai
- Abstract summary: Deep robotic affordance learning (DRAL) aims to develop data-driven methods that use the concept of affordance to aid in robotic tasks.
We first classify these papers from a reinforcement learning (RL) perspective, and draw connections between RL and affordances.
A final remark is given at the end to propose a promising future direction of the RL-based affordance definition to include the predictions of arbitrary action consequences.
- Score: 44.968170318777105
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: As a popular concept proposed in the field of psychology, affordance has been
regarded as one of the important abilities that enable humans to understand and
interact with the environment. Briefly, it captures the possibilities and
effects of the actions of an agent applied to a specific object or, more
generally, a part of the environment. This paper provides a short review of the
recent developments of deep robotic affordance learning (DRAL), which aims to
develop data-driven methods that use the concept of affordance to aid in
robotic tasks. We first classify these papers from a reinforcement learning
(RL) perspective, and draw connections between RL and affordances. The
technical details of each category are discussed and their limitations
identified. We further summarise them and identify future challenges from the
aspects of observations, actions, affordance representation, data-collection
and real-world deployment. A final remark is given at the end to propose a
promising future direction of the RL-based affordance definition to include the
predictions of arbitrary action consequences.
Related papers
- Human Action Anticipation: A Survey [86.415721659234]
The literature on behavior prediction spans various tasks, including action anticipation, activity forecasting, intent prediction, goal prediction, and so on.
Our survey aims to tie together this fragmented literature, covering recent technical innovations as well as the development of new large-scale datasets for model training and evaluation.
arXiv Detail & Related papers (2024-10-17T21:37:40Z)
- On the Element-Wise Representation and Reasoning in Zero-Shot Image Recognition: A Systematic Survey [82.49623756124357]
Zero-shot image recognition (ZSIR) aims at empowering models to recognize and reason in unseen domains.
This paper presents a broad review of recent advances in element-wise ZSIR.
We first attempt to integrate the three basic ZSIR tasks of object recognition, compositional recognition, and foundation model-based open-world recognition into a unified element-wise perspective.
arXiv Detail & Related papers (2024-08-09T05:49:21Z)
- On the Role of Entity and Event Level Conceptualization in Generalizable Reasoning: A Survey of Tasks, Methods, Applications, and Future Directions [46.63556358247516]
Entity- and event-level conceptualization plays a pivotal role in generalizable reasoning.
There is currently a lack of a systematic overview that comprehensively examines existing works in the definition, execution, and application of conceptualization.
We present the first comprehensive survey of 150+ papers, categorizing various definitions, resources, methods, and downstream applications related to conceptualization into a unified taxonomy.
arXiv Detail & Related papers (2024-06-16T10:32:41Z)
- A Survey on Deep Learning Techniques for Action Anticipation [12.336150312807561]
We review the recent advances of action anticipation algorithms with a particular focus on daily-living scenarios.
We classify these methods according to their primary contributions and summarize them in tabular form.
We delve into the common evaluation metrics and datasets used for action anticipation and provide future directions with systematical discussions.
arXiv Detail & Related papers (2023-09-29T14:07:56Z)
- A Closer Look at Reward Decomposition for High-Level Robotic Explanations [18.019811754800767]
We propose an explainable Q-Map learning framework that combines reward decomposition with abstracted action spaces.
We demonstrate the effectiveness of our framework through quantitative and qualitative analysis of two robotic scenarios.
arXiv Detail & Related papers (2023-04-25T16:01:42Z)
- Predicting the Future from First Person (Egocentric) Vision: A Survey [18.07516837332113]
This survey summarises the evolution of studies in the context of future prediction from egocentric vision.
It makes an overview of applications, devices, existing problems, commonly used datasets, models and input modalities.
Our analysis highlights that methods for future prediction from egocentric vision can have a significant impact in a range of applications.
arXiv Detail & Related papers (2021-07-28T14:58:13Z)
- Understanding the origin of information-seeking exploration in probabilistic objectives for control [62.997667081978825]
An exploration-exploitation trade-off is central to the description of adaptive behaviour.
One approach to solving this trade-off has been to equip agents with, or propose that they possess, an intrinsic 'exploratory drive'.
We show that this combination of utility-maximizing and information-seeking behaviour arises from the minimization of an entirely different class of objectives.
arXiv Detail & Related papers (2021-03-11T18:42:39Z)
- Learning Long-term Visual Dynamics with Region Proposal Interaction Networks [75.06423516419862]
We build object representations that can capture inter-object and object-environment interactions over a long time horizon.
Thanks to the simple yet effective object representation, our approach outperforms prior methods by a significant margin.
arXiv Detail & Related papers (2020-08-05T17:48:00Z)
- Value-driven Hindsight Modelling [68.658900923595]
Value estimation is a critical component of the reinforcement learning (RL) paradigm.
Model learning can make use of the rich transition structure present in sequences of observations, but this approach is usually not sensitive to the reward function.
We develop an approach for representation learning in RL that sits in between these two extremes.
This provides tractable prediction targets that are directly relevant for a task, and can thus accelerate learning the value function.
arXiv Detail & Related papers (2020-02-19T18:10:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided (including all listed content) and is not responsible for any consequences of its use.