Towards autonomous artificial agents with an active self: modeling sense
of control in situated action
- URL: http://arxiv.org/abs/2112.05577v1
- Date: Fri, 10 Dec 2021 14:45:24 GMT
- Title: Towards autonomous artificial agents with an active self: modeling sense
of control in situated action
- Authors: Sebastian Kahl, Sebastian Wiese, Nele Russwinkel, Stefan Kopp
- Abstract summary: We present a computational modeling account of an active self in artificial agents.
We focus on how an agent can be equipped with a sense of control and how it arises in autonomous situated action.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we present a computational modeling account of an active self
in artificial agents. In particular we focus on how an agent can be equipped
with a sense of control and how it arises in autonomous situated action and, in
turn, influences action control. We argue that this requires laying out an
embodied cognitive model that combines bottom-up processes (sensorimotor
learning and fine-grained adaptation of control) with top-down processes
(cognitive processes for strategy selection and decision-making). We present
such a conceptual computational architecture based on principles of predictive
processing and free energy minimization. Using this general model, we describe
how a sense of control can form across the levels of a control hierarchy and
how this can support action control in an unpredictable environment. We present
an implementation of this model as well as first evaluations in a simulated
task scenario, in which an autonomous agent has to cope with both predictable and
unpredictable situations and experiences a corresponding sense of control. We explore different
model parameter settings that lead to different ways of combining low-level and
high-level action control. The results show the importance of appropriately
weighting information in situations where the need for low-level or high-level action
control varies, and they demonstrate how the sense of control can facilitate
this.
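The abstract describes the architecture only conceptually, so the following is a minimal illustrative sketch rather than the authors' implementation. It assumes a two-level control hierarchy in which each level maintains a prediction, prediction errors are precision-weighted in the spirit of predictive processing and free energy minimization, and a running "sense of control" is derived from how well predictions hold; that signal then shifts weight between low-level (sensorimotor) and high-level (strategy) control. All class names, parameters, and update rules below are hypothetical.

```python
import numpy as np

class ControlLevel:
    """One level of a toy predictive-processing hierarchy (illustrative only)."""
    def __init__(self, gain, precision):
        self.gain = gain            # how strongly prediction errors drive updates
        self.precision = precision  # weighting of this level's prediction error
        self.prediction = 0.0

    def step(self, observation):
        error = observation - self.prediction
        # precision-weighted error correction (free-energy-style update)
        self.prediction += self.gain * self.precision * error
        return self.precision * error ** 2  # weighted squared error

class ActiveSelfAgent:
    """Toy agent tracking a 'sense of control' across two levels."""
    def __init__(self):
        self.low = ControlLevel(gain=0.8, precision=1.0)   # sensorimotor level
        self.high = ControlLevel(gain=0.2, precision=0.5)  # strategy level
        self.sense_of_control = 1.0

    def step(self, observation):
        weighted_error = self.low.step(observation) + self.high.step(observation)
        # sense of control decays when prediction errors accumulate and
        # recovers when the world behaves as predicted
        self.sense_of_control = 0.9 * self.sense_of_control + 0.1 * np.exp(-weighted_error)
        # a low sense of control shifts precision towards high-level control
        self.high.precision = 0.5 + 0.5 * (1.0 - self.sense_of_control)
        return self.sense_of_control

agent = ActiveSelfAgent()
rng = np.random.default_rng(0)
for t in range(50):
    predictable = t < 25                      # first half: predictable environment
    obs = 1.0 + rng.normal(0.0, 0.05 if predictable else 1.0)
    soc = agent.step(obs)
print(f"final sense of control: {soc:.2f}")
```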
Related papers
- Scenario-based Thermal Management Parametrization Through Deep Reinforcement Learning [0.4218593777811082]
This paper introduces a learning-based tuning approach for thermal management functions.
Our deep reinforcement learning agent processes the tuning task context and incorporates an image-based interpretation of embedded parameter sets.
We demonstrate its applicability to a valve controller parametrization task and verify it in real-world vehicle testing.
arXiv Detail & Related papers (2024-08-04T13:19:45Z)
- Interactive Autonomous Navigation with Internal State Inference and Interactivity Estimation [58.21683603243387]
We propose three auxiliary tasks with relational-temporal reasoning and integrate them into the standard Deep Learning framework.
These auxiliary tasks provide additional supervision signals for inferring the behavior patterns of other interactive agents.
Our approach achieves robust and state-of-the-art performance in terms of standard evaluation metrics.
arXiv Detail & Related papers (2023-11-27T18:57:42Z)
- Model-Based Reinforcement Learning with Isolated Imaginations [61.67183143982074]
We propose Iso-Dream++, a model-based reinforcement learning approach.
We perform policy optimization based on the decoupled latent imaginations.
This enables long-horizon visuomotor control tasks to benefit from isolating mixed dynamics sources in the wild.
arXiv Detail & Related papers (2023-03-27T02:55:56Z)
- ControlVAE: Model-Based Learning of Generative Controllers for Physics-Based Characters [28.446959320429656]
We introduce ControlVAE, a model-based framework for learning generative motion control policies based on variational autoencoders (VAEs).
Our framework can learn a rich and flexible latent representation of skills and a skill-conditioned generative control policy from a diverse set of unorganized motion sequences.
We demonstrate the effectiveness of ControlVAE using a diverse set of tasks, which allows realistic and interactive control of the simulated characters.
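As a rough illustration of the idea summarized above (a latent skill space plus a skill-conditioned control policy), here is a forward-pass-only sketch of a conditional VAE policy. It is not the paper's architecture: the network sizes, the random-weight MLPs, and the function names are all hypothetical, and no training is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random-weight MLP used only to show the data flow; weights are untrained."""
    return [(rng.normal(0, 0.1, (m, n)), np.zeros(n)) for m, n in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    for w, b in layers:
        x = np.tanh(x @ w + b)
    return x

STATE_DIM, MOTION_DIM, LATENT_DIM, ACTION_DIM = 16, 32, 8, 6

encoder = mlp([STATE_DIM + MOTION_DIM, 64, 2 * LATENT_DIM])   # q(z | state, motion)
policy  = mlp([STATE_DIM + LATENT_DIM, 64, ACTION_DIM])       # pi(a | state, z)

def encode_skill(state, motion_clip):
    """Map a motion example to a latent skill via the reparameterization trick."""
    stats = forward(encoder, np.concatenate([state, motion_clip]))
    mu, log_std = stats[:LATENT_DIM], stats[LATENT_DIM:]
    return mu + np.exp(log_std) * rng.normal(size=LATENT_DIM)

def act(state, z):
    """Skill-conditioned control policy."""
    return forward(policy, np.concatenate([state, z]))

state = rng.normal(size=STATE_DIM)
motion_clip = rng.normal(size=MOTION_DIM)
z = encode_skill(state, motion_clip)   # skill drawn from the latent space
print(act(state, z))                   # skill-conditioned action
```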
arXiv Detail & Related papers (2022-10-12T10:11:36Z)
- Meta-Reinforcement Learning for Adaptive Control of Second Order Systems [3.131740922192114]
In process control, many systems have similar and well-understood dynamics, which suggests it is feasible to create a generalizable controller through meta-learning.
We formulate a meta reinforcement learning (meta-RL) control strategy that takes advantage of known, offline information for training, such as a model structure.
A key design element is the ability to leverage model-based information offline during training, while maintaining a model-free policy structure for interacting with new environments.
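To make the offline/online split concrete, the sketch below sets up what such a scheme could look like: plants are sampled from a known second-order model structure during offline training, while the policy itself only ever sees observations (error and velocity), keeping it model-free at deployment. The parameter ranges, dynamics discretization, and the absence of an actual RL update are all simplifications of my own, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_second_order_plant():
    """Draw a plant from the known model structure (used only offline, for training)."""
    K = rng.uniform(0.5, 2.0)      # process gain
    tau = rng.uniform(1.0, 5.0)    # time constant
    zeta = rng.uniform(0.3, 1.5)   # damping ratio
    return K, tau, zeta

def simulate_step(x, u, plant, dt=0.1):
    """Euler step of tau^2 y'' + 2 zeta tau y' + y = K u."""
    K, tau, zeta = plant
    y, ydot = x
    yddot = (K * u - 2 * zeta * tau * ydot - y) / tau ** 2
    return np.array([y + dt * ydot, ydot + dt * yddot])

def policy(observation, params):
    """Model-free policy: sees only (error, velocity), never the plant parameters."""
    return float(np.tanh(observation @ params))

# Offline meta-training loop (sketch): each episode uses a freshly sampled plant,
# so the policy must cope with varying dynamics from observations alone.
# The actual policy update (the RL part) is omitted here.
params = rng.normal(0, 0.1, size=2)
for episode in range(3):
    plant, x, setpoint = sample_second_order_plant(), np.zeros(2), 1.0
    for t in range(50):
        obs = np.array([setpoint - x[0], x[1]])
        x = simulate_step(x, policy(obs, params), plant)
    print(f"episode {episode}: final output {x[0]:.2f}")
```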
arXiv Detail & Related papers (2022-09-19T18:51:33Z)
- Isolating and Leveraging Controllable and Noncontrollable Visual Dynamics in World Models [65.97707691164558]
We present Iso-Dream, which improves the Dream-to-Control framework in two aspects.
First, by optimizing inverse dynamics, we encourage the world model to learn controllable and noncontrollable sources.
Second, we optimize the behavior of the agent on the decoupled latent imaginations of the world model.
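The following toy snippet illustrates why an inverse-dynamics objective tends to push action-relevant information into one latent branch: the action is predicted only from consecutive *controllable* latents, so only that branch is pressured to carry action-driven dynamics. The encoder, the latent split, and the weights are hypothetical stand-ins, and the gradient step is omitted; this is not the Iso-Dream implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(obs):
    """Toy encoder splitting an observation into two latent branches (illustrative)."""
    controllable = obs[: len(obs) // 2]       # branch meant to capture action-driven dynamics
    noncontrollable = obs[len(obs) // 2 :]    # branch meant to capture the rest
    return controllable, noncontrollable

def inverse_dynamics(z_t, z_next, W):
    """Predict the action from consecutive *controllable* latents only."""
    return np.tanh(np.concatenate([z_t, z_next]) @ W)

# The inverse-dynamics loss is computed only on the controllable branch, so
# gradient updates (omitted here) would route action-relevant information into it.
obs_t, obs_next = rng.normal(size=8), rng.normal(size=8)
action = rng.normal(size=2)
W = rng.normal(0, 0.1, size=(8, 2))

z_t, _ = encode(obs_t)
z_next, _ = encode(obs_next)
pred_action = inverse_dynamics(z_t, z_next, W)
inv_dyn_loss = np.mean((pred_action - action) ** 2)
print(f"inverse dynamics loss: {inv_dyn_loss:.3f}")
```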
arXiv Detail & Related papers (2022-05-27T08:07:39Z)
- Learning-based vs Model-free Adaptive Control of a MAV under Wind Gust [0.2770822269241973]
Navigation problems under unknown varying conditions are among the most important and well-studied problems in the control field.
Recent model-free adaptive control methods aim at removing the dependency on an accurate plant model by learning the physical characteristics of the plant directly from sensor feedback.
We propose a conceptually simple learning-based approach composed of a full state feedback controller, tuned robustly by a deep reinforcement learning framework.
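A minimal sketch of the overall idea, under my own simplifying assumptions: the controller structure stays classical full state feedback (u = -Kx), and learning only tunes the gains against gusty rollouts. A plain random search stands in here for the deep reinforcement learning tuner, and the 1D hover dynamics, noise levels, and cost are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout(gains, wind_std, steps=200, dt=0.02):
    """Evaluate full-state-feedback gains u = -K x on a toy 1D hover task under gusts."""
    x = np.array([1.0, 0.0])                  # [position error, velocity]
    cost = 0.0
    for _ in range(steps):
        u = -gains @ x                        # classical state feedback
        gust = rng.normal(0.0, wind_std)      # unknown disturbance
        x = x + dt * np.array([x[1], u + gust])
        cost += x @ x * dt
    return cost

# Random search stands in for the DRL tuner: propose gains, evaluate under
# gusty conditions, keep the best. Only the gains are learned, not the controller form.
best_gains, best_cost = np.array([1.0, 1.0]), np.inf
for _ in range(200):
    candidate = best_gains + rng.normal(0.0, 0.2, size=2)
    cost = rollout(candidate, wind_std=0.5)
    if cost < best_cost:
        best_gains, best_cost = candidate, cost
print(f"tuned gains: {best_gains.round(2)}, cost: {best_cost:.2f}")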
arXiv Detail & Related papers (2021-01-29T10:13:56Z)
- Instance-Aware Predictive Navigation in Multi-Agent Environments [93.15055834395304]
We propose an Instance-Aware Predictive Control (IPC) approach, which forecasts interactions between agents as well as future scene structures.
We adopt a novel multi-instance event prediction module to estimate the possible interaction among agents in the ego-centric view.
We design a sequential action sampling strategy to better leverage predicted states on both scene-level and instance-level.
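The sketch below shows a generic sampling-based predictive control loop of the kind this summary describes: candidate action sequences are rolled out through a forward model and scored on both a scene-level term (progress to the goal) and an instance-level term (clearance to other agents). The forward model, the scoring weights, and the sampling scheme are hypothetical and much simpler than the paper's IPC approach.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_states(state, actions):
    """Toy forward model: rolls the ego state forward under a candidate action sequence."""
    states = []
    for a in actions:
        state = state + a
        states.append(state.copy())
    return np.array(states)

def score(states, other_agents, goal):
    """Scene-level progress towards the goal plus instance-level clearance to other agents."""
    progress = -np.linalg.norm(states[-1] - goal)
    clearance = min(np.linalg.norm(s - o) for s in states for o in other_agents)
    return progress + 0.5 * min(clearance, 2.0)

ego, goal = np.zeros(2), np.array([5.0, 0.0])
others = [np.array([2.0, 0.2]), np.array([3.5, -0.5])]

# Sample candidate action sequences and keep the best-scoring one.
candidates = [rng.uniform(-0.5, 0.5, size=(5, 2)) for _ in range(64)]
best = max(candidates, key=lambda acts: score(predict_states(ego, acts), others, goal))
print(f"first action of best sequence: {best[0].round(2)}")
```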
arXiv Detail & Related papers (2021-01-14T22:21:25Z)
- Goal-Aware Prediction: Learning to Model What Matters [105.43098326577434]
One of the fundamental challenges in using a learned forward dynamics model is the mismatch between the objective of the learned model and that of the downstream planner or policy.
We propose to direct prediction towards task relevant information, enabling the model to be aware of the current task and encouraging it to only model relevant quantities of the state space.
We find that our method more effectively models the relevant parts of the scene conditioned on the goal, and as a result outperforms standard task-agnostic dynamics models and model-free reinforcement learning.
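As a toy illustration of directing prediction towards task-relevant information, the snippet below trains nothing but shows the loss structure: a goal-conditioned forward model is penalized only on a goal-relevant quantity of the next state (here simply distance to the goal), so irrelevant scene detail carries no penalty. The feature choice, dimensions, and weights are my own assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def dynamics_model(state, action, goal, W):
    """Goal-conditioned forward model: predicts only goal-relevant features of the next state."""
    return np.tanh(np.concatenate([state, action, goal]) @ W)

def goal_relevant(state, goal):
    """Task-relevant quantity: here simply the distance of the state to the goal."""
    return np.array([np.linalg.norm(state - goal)])

# A task-agnostic model would regress the full next state; a goal-aware loss
# regresses only the goal-relevant quantity.
state, action, goal = rng.normal(size=4), rng.normal(size=2), rng.normal(size=4)
next_state = state + 0.1 * rng.normal(size=4)
W = rng.normal(0, 0.1, size=(10, 1))

prediction = dynamics_model(state, action, goal, W)
goal_aware_loss = np.mean((prediction - goal_relevant(next_state, goal)) ** 2)
print(f"goal-aware prediction loss: {goal_aware_loss:.3f}")
```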
arXiv Detail & Related papers (2020-07-14T16:42:59Z)
- From proprioception to long-horizon planning in novel environments: A hierarchical RL model [4.44317046648898]
In this work, we introduce a simple, three-level hierarchical architecture that reflects different types of reasoning.
We apply our method to a series of navigation tasks in the Mujoco Ant environment.
arXiv Detail & Related papers (2020-06-11T17:19:12Z)
- Goal-Conditioned End-to-End Visuomotor Control for Versatile Skill Primitives [89.34229413345541]
We propose a conditioning scheme which avoids common conditioning pitfalls by learning the controller and its conditioning in an end-to-end manner.
Our model predicts complex action sequences based directly on a dynamic image representation of the robot motion.
We report significant improvements in task success over representative MPC and IL baselines.
arXiv Detail & Related papers (2020-03-19T15:04:37Z)