Building Affordance Relations for Robotic Agents - A Review
- URL: http://arxiv.org/abs/2105.06706v1
- Date: Fri, 14 May 2021 08:35:18 GMT
- Title: Building Affordance Relations for Robotic Agents - A Review
- Authors: Paola Ardón, Èric Pairet, Katrin S. Lohan, Subramanian Ramamoorthy, Ronald P. A. Petrick
- Abstract summary: Affordances describe the possibilities for an agent to perform actions with an object.
We review and find common ground amongst different strategies that use the concept of affordances within robotic tasks.
We identify and discuss a range of interesting research directions involving affordances that have the potential to improve the capabilities of an AI agent.
- Score: 7.50722199393581
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Affordances describe the possibilities for an agent to perform actions with
an object. While the significance of the affordance concept has been previously
studied from varied perspectives, such as psychology and cognitive science,
these approaches are not always sufficient to enable direct transfer, in the
sense of implementations, to artificial intelligence (AI)-based systems and
robotics. However, many efforts have been made to pragmatically employ the
concept of affordances, as it represents great potential for AI agents to
effectively bridge perception to action. In this survey, we review and find
common ground amongst different strategies that use the concept of affordances
within robotic tasks, and build on these methods to provide guidance for
including affordances as a mechanism to improve autonomy. To this end, we
outline common design choices for building representations of affordance
relations, and their implications on the generalisation capabilities of an
agent when facing previously unseen scenarios. Finally, we identify and discuss
a range of interesting research directions involving affordances that have the
potential to improve the capabilities of an AI agent.
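
The affordance relations discussed in the abstract are often formalised in the robotics literature as object-action-effect associations. The snippet below is a minimal sketch of such a representation, assuming that triple-based formalisation; the `Affordance` and `AffordanceMemory` names and methods are illustrative assumptions, not the representation proposed by the surveyed paper.

```python
# Minimal sketch of an affordance relation store, assuming the common
# object-action-effect formalisation; names are illustrative only.
from dataclasses import dataclass


@dataclass(frozen=True)
class Affordance:
    obj: str      # object (or object category) the agent perceives
    action: str   # action the agent can execute on that object
    effect: str   # expected effect of applying the action


class AffordanceMemory:
    """Stores learned affordance relations and answers simple queries."""

    def __init__(self) -> None:
        self._relations: set[Affordance] = set()

    def add(self, obj: str, action: str, effect: str) -> None:
        self._relations.add(Affordance(obj, action, effect))

    def actions_for(self, obj: str) -> set[str]:
        """Bridge perception to action: which actions does this object afford?"""
        return {a.action for a in self._relations if a.obj == obj}

    def predict_effect(self, obj: str, action: str) -> set[str]:
        """Which effects have been observed for this object-action pair?"""
        return {a.effect for a in self._relations
                if a.obj == obj and a.action == action}


if __name__ == "__main__":
    memory = AffordanceMemory()
    memory.add("mug", "grasp", "held")
    memory.add("mug", "push", "displaced")
    memory.add("drawer", "pull", "opened")
    print(memory.actions_for("mug"))                # {'grasp', 'push'}
    print(memory.predict_effect("drawer", "pull"))  # {'opened'}
```

In this toy setting, generalisation to previously unseen scenarios would hinge on how `obj` is encoded (e.g., object categories or perceptual features rather than instance names), which is one of the design choices the survey examines.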
Related papers
- Learning to Assist Humans without Inferring Rewards [65.28156318196397]
We build upon prior work that studies assistance through the lens of empowerment.
An assistive agent aims to maximize the influence of the human's actions.
We prove that these representations estimate a similar notion of empowerment to that studied by prior work.
arXiv Detail & Related papers (2024-11-04T21:31:04Z)
- AI-Driven Human-Autonomy Teaming in Tactical Operations: Proposed Framework, Challenges, and Future Directions [10.16399860867284]
Artificial Intelligence (AI) techniques are transforming tactical operations by augmenting human decision-making capabilities.
This paper explores AI-driven Human-Autonomy Teaming (HAT) as a transformative approach.
We propose a comprehensive framework that addresses the key components of AI-driven HAT.
arXiv Detail & Related papers (2024-10-28T15:05:16Z) - The Landscape of Emerging AI Agent Architectures for Reasoning, Planning, and Tool Calling: A Survey [0.0]
This paper examines the recent advancements in AI agent implementations.
It focuses on their ability to achieve complex goals that require enhanced reasoning, planning, and tool execution capabilities.
arXiv Detail & Related papers (2024-04-17T17:32:41Z) - Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z) - Interactive Autonomous Navigation with Internal State Inference and
Interactivity Estimation [58.21683603243387]
We propose three auxiliary tasks with relational-temporal reasoning and integrate them into the standard Deep Learning framework.
These auxiliary tasks provide additional supervision signals to infer the behavior patterns of other interactive agents.
Our approach achieves robust and state-of-the-art performance in terms of standard evaluation metrics.
arXiv Detail & Related papers (2023-11-27T18:57:42Z) - Predictable Artificial Intelligence [77.1127726638209]
This paper introduces the ideas and challenges of Predictable AI.
It explores the ways in which we can anticipate key validity indicators of present and future AI ecosystems.
We argue that achieving predictability is crucial for fostering trust, liability, control, alignment and safety of AI ecosystems.
arXiv Detail & Related papers (2023-10-09T21:36:21Z) - The Rise and Potential of Large Language Model Based Agents: A Survey [91.71061158000953]
Large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI).
We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents.
We explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation.
arXiv Detail & Related papers (2023-09-14T17:12:03Z) - Learning Action-Effect Dynamics for Hypothetical Vision-Language
Reasoning Task [50.72283841720014]
We propose a novel learning strategy that can improve reasoning about the effects of actions.
We demonstrate the effectiveness of our proposed approach and discuss its advantages over previous baselines in terms of performance, data efficiency, and generalization capability.
arXiv Detail & Related papers (2022-12-07T05:41:58Z) - Introspection-based Explainable Reinforcement Learning in Episodic and
Non-episodic Scenarios [14.863872352905629]
An introspection-based approach can be used in conjunction with reinforcement learning agents to provide probabilities of success.
The same introspection-based approach can also be used to generate explanations for the actions taken in a non-episodic robotics environment.
arXiv Detail & Related papers (2022-11-23T13:05:52Z) - A Cognitive Framework for Delegation Between Error-Prone AI and Human
Agents [0.0]
We investigate the use of cognitively inspired models of behavior to predict the behavior of both human and AI agents.
The predicted behavior is used to delegate control between humans and AI agents through the use of an intermediary entity.
arXiv Detail & Related papers (2022-04-06T15:15:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality or accuracy of the information provided and is not responsible for any consequences of its use.