The Rational Selection of Goal Operations and the Integration of Search
Strategies with Goal-Driven Autonomy
- URL: http://arxiv.org/abs/2201.08883v1
- Date: Fri, 21 Jan 2022 20:53:49 GMT
- Title: The Rational Selection of Goal Operations and the Integration of Search
Strategies with Goal-Driven Autonomy
- Authors: Sravya Kondrakunta, Venkatsampath Raja Gogineni, Michael T. Cox,
Demetris Coleman, Xiaobao Tan, Tony Lin, Mengxue Hou, Fumin Zhang, Frank
McQuarrie, Catherine R. Edwards
- Abstract summary: Link between cognition and control must manage the problem of converting continuous values from the real world to symbolic representations (and back)
To generate effective behaviors, reasoning must include a capacity to replan, acquire and update new information, detect and respond to anomalies, and perform various operations on system goals.
This paper examines an agent's choices when multiple goal operations co-occur and interact, and it establishes a method of choosing between them.
- Score: 3.169249926144497
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Intelligent physical systems as embodied cognitive systems must perform
high-level reasoning while concurrently managing an underlying control
architecture. The link between cognition and control must manage the problem of
converting continuous values from the real world to symbolic representations
(and back). To generate effective behaviors, reasoning must include a capacity
to replan, acquire and update new information, detect and respond to anomalies,
and perform various operations on system goals. However, these processes are not
independent and need further exploration. This paper examines an agent's
choices when multiple goal operations co-occur and interact, and it establishes
a method of choosing between them. We demonstrate the benefits, discuss the
trade-offs involved, and show positive results in a dynamic marine search
task.
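The abstract's central idea, choosing rationally among co-occurring goal operations, can be sketched as a simple expected-utility comparison. This is an illustrative sketch only: the class name, the operation names, and the benefit/cost estimates are hypothetical stand-ins, not the paper's actual architecture.

```python
# Hypothetical sketch: when several goal operations (e.g., change, defer,
# abandon a goal) are simultaneously applicable, score each by estimated
# net benefit and select the best. All names and numbers are illustrative.

from dataclasses import dataclass

@dataclass
class GoalOperation:
    name: str
    expected_benefit: float  # estimated value gained by applying the operation
    expected_cost: float     # estimated time/resource cost of applying it

def select_operation(candidates):
    """Return the candidate operation with the highest net expected utility."""
    return max(candidates, key=lambda op: op.expected_benefit - op.expected_cost)

ops = [
    GoalOperation("goal-change", expected_benefit=5.0, expected_cost=2.0),
    GoalOperation("goal-deferment", expected_benefit=1.0, expected_cost=0.2),
    GoalOperation("goal-abandonment", expected_benefit=0.5, expected_cost=0.1),
]
best = select_operation(ops)  # picks "goal-change" (net utility 3.0)
```

In a real agent the benefit and cost estimates would come from the reasoning layer (e.g., anomaly severity, replanning cost), but the selection step itself reduces to this kind of comparison.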
Related papers
- Learning to Look: Seeking Information for Decision Making via Policy Factorization [36.87799092971961]
We propose DISaM, a dual-policy solution composed of an information-seeking policy and an information-receiving policy.
We demonstrate the capabilities of our dual policy solution in five manipulation tasks that require information-seeking behaviors.
arXiv Detail & Related papers (2024-10-24T17:58:11Z)
- DCIR: Dynamic Consistency Intrinsic Reward for Multi-Agent Reinforcement Learning [84.22561239481901]
We propose a new approach that enables agents to learn whether their behaviors should be consistent with those of other agents.
We evaluate DCIR in multiple environments including Multi-agent Particle, Google Research Football and StarCraft II Micromanagement.
arXiv Detail & Related papers (2023-12-10T06:03:57Z)
- Interactive Autonomous Navigation with Internal State Inference and Interactivity Estimation [58.21683603243387]
We propose three auxiliary tasks with relational-temporal reasoning and integrate them into the standard Deep Learning framework.
These auxiliary tasks provide additional supervision signals to infer the behavior patterns of other interactive agents.
Our approach achieves robust and state-of-the-art performance in terms of standard evaluation metrics.
arXiv Detail & Related papers (2023-11-27T18:57:42Z)
- Establishing Shared Query Understanding in an Open Multi-Agent System [1.2031796234206138]
We propose a method that allows two agents to develop shared understanding for the purpose of performing a task that requires cooperation.
Our method focuses on efficiently establishing successful task-oriented communication in an open multi-agent system.
arXiv Detail & Related papers (2023-05-16T11:07:05Z)
- Deep Reinforcement Learning for Multi-Agent Interaction [14.532965827043254]
The Autonomous Agents Research Group develops novel machine learning algorithms for autonomous systems control.
This article provides a broad overview of the ongoing research portfolio of the group and discusses open problems for future directions.
arXiv Detail & Related papers (2022-08-02T21:55:56Z)
- Autonomous Open-Ended Learning of Tasks with Non-Stationary Interdependencies [64.0476282000118]
Intrinsic motivations have proven to generate a task-agnostic signal to properly allocate the training time amongst goals.
While the majority of works in the field of intrinsically motivated open-ended learning focus on scenarios where goals are independent of each other, only a few have studied the autonomous acquisition of interdependent tasks.
In particular, we first deepen the analysis of a previous system, showing the importance of incorporating information about the relationships between tasks at a higher level of the architecture.
Then we introduce H-GRAIL, a new system that extends the previous one by adding a new learning layer to store the autonomously acquired sequences.
arXiv Detail & Related papers (2022-05-16T10:43:01Z)
- Robust Event-Driven Interactions in Cooperative Multi-Agent Learning [0.0]
We present an approach to reduce the communication required between agents in a Multi-Agent learning system by exploiting the inherent robustness of the underlying Markov Decision Process.
We compute so-called robustness surrogate functions (off-line), that give agents a conservative indication of how far their state measurements can deviate before they need to update other agents in the system.
This results in fully distributed decision functions, enabling agents to decide when it is necessary to update others.
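The event-driven mechanism described above can be illustrated with a minimal sketch: an agent broadcasts its state only when it has drifted beyond a precomputed robustness threshold. The class and the constant threshold here are hypothetical; the paper computes its robustness surrogate functions offline from the underlying Markov Decision Process, which this sketch does not attempt.

```python
# Illustrative sketch of event-driven communication (names are hypothetical).
# An agent compares its current state against the last state it broadcast,
# and only communicates when the deviation exceeds a conservative threshold.

import math

class EventDrivenAgent:
    def __init__(self, initial_state, threshold):
        self.last_broadcast = initial_state
        self.threshold = threshold  # stand-in for the robustness surrogate value

    def should_broadcast(self, current_state):
        """True when the state has drifted too far from what others last saw."""
        return math.dist(self.last_broadcast, current_state) > self.threshold

    def step(self, current_state):
        """Broadcast (update others) only when necessary; return whether we did."""
        if self.should_broadcast(current_state):
            self.last_broadcast = current_state
            return True
        return False

agent = EventDrivenAgent((0.0, 0.0), threshold=1.0)
agent.step((0.5, 0.5))  # deviation ~0.71, below threshold: stay silent
agent.step((1.0, 1.0))  # deviation ~1.41, above threshold: broadcast
```

The decision function is fully local, matching the paper's point that each agent can decide on its own when an update is needed.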
arXiv Detail & Related papers (2022-04-07T11:00:39Z)
- Understanding the origin of information-seeking exploration in probabilistic objectives for control [62.997667081978825]
An exploration-exploitation trade-off is central to the description of adaptive behaviour.
One approach to solving this trade-off has been to equip or propose that agents possess an intrinsic 'exploratory drive'
We show that this combination of utility-maximizing and information-seeking behaviour arises from the minimization of an entirely different class of objectives.
arXiv Detail & Related papers (2021-03-11T18:42:39Z)
- Tracking Emotions: Intrinsic Motivation Grounded on Multi-Level Prediction Error Dynamics [68.8204255655161]
We discuss how emotions arise when differences between expected and actual rates of progress towards a goal are experienced.
We present an intrinsic motivation architecture that generates behaviors towards self-generated and dynamic goals.
arXiv Detail & Related papers (2020-07-29T06:53:13Z)
- Mutual Information-based State-Control for Intrinsically Motivated Reinforcement Learning [102.05692309417047]
In reinforcement learning, an agent learns to reach a set of goals by means of an external reward signal.
In the natural world, intelligent organisms learn from internal drives, bypassing the need for external signals.
We propose to formulate an intrinsic objective as the mutual information between the goal states and the controllable states.
arXiv Detail & Related papers (2020-02-05T19:21:20Z)
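The mutual-information objective in the last entry can be made concrete with a toy count-based estimator of I(G; S) between discretized goal states and controllable states. This is only an illustration of the quantity being maximized; the paper's setting involves continuous states, where a variational or neural estimator would be used instead.

```python
# Toy sketch: estimate mutual information I(G; S) from sampled (goal, state)
# pairs over discrete values, via the plug-in formula
#   I(G;S) = sum_{g,s} p(g,s) * log( p(g,s) / (p(g) * p(s)) ).
# An intrinsically motivated agent would seek behaviors that maximize this.

import math
from collections import Counter

def mutual_information(pairs):
    n = len(pairs)
    joint = Counter(pairs)                 # empirical p(g, s) * n
    pg = Counter(g for g, _ in pairs)      # empirical p(g) * n
    ps = Counter(s for _, s in pairs)      # empirical p(s) * n
    mi = 0.0
    for (g, s), count in joint.items():
        p_gs = count / n
        mi += p_gs * math.log(p_gs * n * n / (pg[g] * ps[s]))
    return mi  # in nats

# Perfectly correlated goal/state samples give maximal MI (log 2 for two
# values); independent samples give zero.
correlated = [(0, 0), (1, 1)] * 50
independent = [(0, 0), (0, 1), (1, 0), (1, 1)] * 25
```

High mutual information means the controllable states carry information about the goal states, which is exactly the internal drive the paper proposes in place of an external reward signal.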
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.