Deep hybrid models: infer and plan in a dynamic world
- URL: http://arxiv.org/abs/2402.10088v3
- Date: Wed, 18 Dec 2024 17:05:38 GMT
- Title: Deep hybrid models: infer and plan in a dynamic world
- Authors: Matteo Priorelli, Ivilin Peev Stoianov
- Abstract summary: We present a solution, based on active inference, for complex control tasks.
The proposed architecture exploits hybrid (discrete and continuous) processing.
We show that the model can tackle the presented task under different conditions.
- Abstract: In order to determine an optimal plan for a complex task, one often deals with dynamic and hierarchical relationships between several entities. Traditionally, such problems are tackled with optimal control, which relies on the optimization of cost functions; instead, a recent biologically-motivated proposal casts planning and control as an inference process. Active inference assumes that action and perception are two complementary aspects of life whereby the role of the former is to fulfill the predictions inferred by the latter. In this study, we present a solution, based on active inference, for complex control tasks. The proposed architecture exploits hybrid (discrete and continuous) processing, and it is based on three features: the representation of potential body configurations related to the objects of interest; the use of hierarchical relationships that enable the agent to flexibly expand its body schema for tool use; the definition of potential trajectories related to the agent's intentions, used to infer and plan with dynamic elements at different temporal scales. We evaluate this deep hybrid model on a habitual task: reaching a moving object after having picked a moving tool. We show that the model can tackle the presented task under different conditions. This study extends past work on planning as inference and advances an alternative direction to optimal control.
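To make the hybrid scheme concrete, the sketch below pairs a discrete belief over intentions with a continuous gradient step toward the attractor that belief selects. It is a minimal illustration under simplifying assumptions (softmax evidence weighting, a quadratic free energy); none of the names or equations are taken from the authors' implementation.
```python
import numpy as np

# Hypothetical sketch of one hybrid (discrete + continuous) active inference
# cycle, loosely following the abstract above. All equations are illustrative
# assumptions, not the paper's model.

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def update_intention_belief(prior, pred_errors, precision=4.0):
    # Discrete level: intentions whose predicted trajectories explain the
    # continuous state better (lower error) gain posterior probability.
    log_evidence = -precision * np.asarray(pred_errors)
    return softmax(np.log(prior + 1e-9) + log_evidence)

def continuous_step(mu, attractors, weights, lr=0.1):
    # Continuous level: gradient descent on a quadratic free energy, pulling
    # the believed state toward the intention-weighted dynamic attractor.
    target = weights @ attractors
    return mu - lr * (mu - target)

rng = np.random.default_rng(0)
mu = np.zeros(2)                         # believed hand position
attractors = np.array([[1.0, 0.0],       # moving tool position
                       [0.5, 1.0]])      # moving target position
belief = np.array([0.5, 0.5])            # prior over the two intentions

for t in range(50):
    attractors += 0.01 * rng.standard_normal(attractors.shape)  # objects drift
    errors = np.linalg.norm(attractors - mu, axis=1)
    belief = update_intention_belief(belief, errors)
    mu = continuous_step(mu, attractors, belief)

print("final intention belief:", belief.round(3))
```
In this toy loop the discrete belief settles on whichever moving attractor best explains the continuous state, mirroring how intentions select trajectories at a slower temporal scale.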
Related papers
- Planning-Guided Diffusion Policy Learning for Generalizable Contact-Rich Bimanual Manipulation [16.244250979166214]
Generalizable Planning-Guided Diffusion Policy Learning (GLIDE) is an approach that learns to solve contact-rich bimanual manipulation tasks.
We propose a set of essential design options in feature extraction, task representation, action prediction, and data augmentation.
Our approach can enable a bimanual robotic system to effectively manipulate objects of diverse geometries, dimensions, and physical properties.
arXiv Detail & Related papers (2024-12-03T18:51:39Z)
- Adaptive Planning with Generative Models under Uncertainty [20.922248169620783]
Planning with generative models has emerged as an effective decision-making paradigm across a wide range of domains.
While continuous replanning at each timestep might seem intuitive because it allows decisions to be made based on the most recent environmental observations, it results in substantial computational challenges.
Our work addresses this challenge by introducing a simple adaptive planning policy that leverages the generative model's ability to predict long-horizon state trajectories.
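The core rule can be sketched in a few lines: keep executing a cached long-horizon plan and replan only when observations drift from the model's predictions. The planner, environment, and threshold below are hypothetical stand-ins, not the paper's components.
```python
import numpy as np

# Minimal sketch of adaptive replanning: execute a cached long-horizon plan
# and trigger a replan only when the observed state deviates from the
# generative model's predicted trajectory. All names are assumptions.

def plan(state, horizon=20):
    # Stand-in "generative model" planner: straight-line trajectory to origin.
    step = -state / horizon
    actions = [step] * horizon
    predictions = [state + (t + 1) * step for t in range(horizon)]
    return actions, predictions

def env_step(state, action, rng):
    return state + action + 0.05 * rng.standard_normal(state.shape)

rng = np.random.default_rng(1)
state = np.array([1.0, -1.0])
actions, predictions = plan(state)
t, replans, THRESHOLD = 0, 0, 0.15

for _ in range(40):
    state = env_step(state, actions[t], rng)
    # Replan only when reality drifts too far from the model's prediction.
    if np.linalg.norm(state - predictions[t]) > THRESHOLD or t + 1 >= len(actions):
        actions, predictions = plan(state)
        t, replans = 0, replans + 1
    else:
        t += 1

print(f"replanned {replans} times in 40 steps (vs. 40 for per-step replanning)")
```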
arXiv Detail & Related papers (2024-08-02T18:07:53Z)
- Online Pareto-Optimal Decision-Making for Complex Tasks using Active Inference [14.470714123175972]
This paper introduces a novel framework for multi-objective reinforcement learning.
It ensures safe task execution, optimizes trade-offs between objectives, and adheres to user preferences.
Case studies and benchmarks on both manipulation and mobile robots show that our framework outperforms other methods.
arXiv Detail & Related papers (2024-06-17T18:03:45Z)
- Efficient Adaptation in Mixed-Motive Environments via Hierarchical Opponent Modeling and Planning [51.52387511006586]
We propose Hierarchical Opponent modeling and Planning (HOP), a novel multi-agent decision-making algorithm.
HOP is hierarchically composed of two modules: an opponent modeling module that infers others' goals and learns corresponding goal-conditioned policies, and a planning module that uses Monte Carlo Tree Search (MCTS) to identify the best response.
HOP exhibits superior few-shot adaptation capabilities when interacting with various unseen agents, and excels in self-play scenarios.
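As a toy illustration of this two-module structure (not the authors' code), the sketch below keeps a Bayesian belief over the opponent's goal and picks a best response against it, with exhaustive one-step lookahead standing in for the planning module's tree search.
```python
import numpy as np

# Hypothetical sketch: opponent-modeling module (Bayesian goal inference)
# plus a planning module (here, simple one-step best response).

GOALS = np.array([[0.0, 0.0], [4.0, 4.0]])   # candidate opponent goals

def update_goal_belief(belief, opp_pos, opp_move, beta=3.0):
    # Likelihood: moves pointing toward a goal are exponentially more likely.
    scores = np.array([
        beta * np.dot(opp_move, g - opp_pos) / (np.linalg.norm(g - opp_pos) + 1e-9)
        for g in GOALS
    ])
    post = belief * np.exp(scores - scores.max())
    return post / post.sum()

def best_response(belief, my_pos, candidate_moves):
    # Planning module: pick the move minimizing expected distance to the
    # goal the opponent is believed to pursue (a stand-in for MCTS).
    expected_goal = belief @ GOALS
    dists = [np.linalg.norm(my_pos + m - expected_goal) for m in candidate_moves]
    return candidate_moves[int(np.argmin(dists))]

belief = np.array([0.5, 0.5])
opp_pos, opp_move = np.array([2.0, 2.0]), np.array([0.5, 0.6])
belief = update_goal_belief(belief, opp_pos, opp_move)
moves = [np.array(m) for m in ([1, 0], [0, 1], [1, 1])]
print("belief:", belief.round(2), "response:", best_response(belief, np.zeros(2), moves))
```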
arXiv Detail & Related papers (2024-06-12T08:48:06Z)
- Compositional Foundation Models for Hierarchical Planning [52.18904315515153]
We propose a foundation model that leverages multiple expert foundation models, individually trained on language, vision, and action data, together to solve long-horizon tasks.
We use a large language model to construct symbolic plans that are grounded in the environment through a large video diffusion model.
Generated video plans are then grounded to visual-motor control, through an inverse dynamics model that infers actions from generated videos.
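A schematic of this three-stage composition might look as follows, with each expert model stubbed out by a placeholder; the function names and interfaces are assumptions for illustration only.
```python
# Schematic sketch of the compositional pipeline summarized above. The three
# model calls are stubs; the real system composes a large language model,
# a video diffusion model, and an inverse dynamics model.

from typing import List

def llm_symbolic_plan(task: str) -> List[str]:
    # Stub for the language model: decompose the task into subgoals.
    return [f"subgoal {i}: step toward '{task}'" for i in range(3)]

def video_model_ground(subgoal: str, current_frame) -> list:
    # Stub for the video diffusion model: imagine frames achieving the subgoal.
    return [current_frame] * 4  # placeholder "video plan"

def inverse_dynamics(frame_a, frame_b) -> str:
    # Stub for the inverse dynamics model: infer the action between frames.
    return "action(frame_a -> frame_b)"

def hierarchical_plan(task: str, frame):
    actions = []
    for subgoal in llm_symbolic_plan(task):          # 1. symbolic level
        video = video_model_ground(subgoal, frame)   # 2. visual grounding
        for a, b in zip(video, video[1:]):           # 3. motor grounding
            actions.append(inverse_dynamics(a, b))
        frame = video[-1]
    return actions

print(len(hierarchical_plan("stack the blocks", frame=None)), "actions planned")
```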
arXiv Detail & Related papers (2023-09-15T17:44:05Z)
- Optimal task and motion planning and execution for human-robot multi-agent systems in dynamic environments [54.39292848359306]
We propose a combined task and motion planning approach to optimize sequencing, assignment, and execution of tasks.
The framework relies on decoupling tasks and actions, where an action is one possible geometric realization of a symbolic task.
We demonstrate the effectiveness of the approach in a collaborative manufacturing scenario in which a robotic arm and a human worker assemble a mosaic.
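A toy version of that decoupling (hypothetical structures throughout) maps each symbolic task to several geometric realizations and commits to one only at assignment time:
```python
# Toy sketch of task/action decoupling: each symbolic task maps to several
# geometric realizations; the planner commits to one only when assigning.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Action:                      # one geometric realization of a task
    agent: str                     # "robot" or "human"
    waypoints: List[Tuple[float, float]]
    cost: float

def realizations(task: str) -> List[Action]:
    # Stub: in a real system these would come from a motion planner.
    return [Action("robot", [(0, 0), (1, 1)], cost=2.0),
            Action("robot", [(0, 0), (0, 1), (1, 1)], cost=2.5),
            Action("human", [(1, 0), (1, 1)], cost=1.5)]

def assign(task: str, busy: set) -> Action:
    # Pick the cheapest realization whose agent is currently free.
    options = [a for a in realizations(task) if a.agent not in busy]
    return min(options, key=lambda a: a.cost)

print(assign("place_tile_3", busy={"human"}))   # falls back to a robot action
```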
arXiv Detail & Related papers (2023-03-27T01:50:45Z)
- Scene Transformer: A unified multi-task model for behavior prediction and planning [42.758178896204036]
We formulate a model for predicting the behavior of all agents jointly in real-world driving environments.
Inspired by recent language modeling approaches, we use a masking strategy as the query to our model.
We evaluate our approach on autonomous driving datasets for behavior prediction, and achieve state-of-the-art performance.
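Our illustrative reading of the masking idea: the scene is one joint tensor, and the choice of mask turns the same model into a motion predictor (mask all futures) or a conditional planner (reveal one agent's intended future). The shapes and mask rules below are assumptions, not the paper's exact setup.
```python
import numpy as np

# Sketch of masking-as-query: the same joint scene tensor is queried with
# different visibility masks (illustrative assumption, not the paper's code).

T, A, D = 10, 3, 2                       # timesteps, agents, (x, y) features
traj = np.random.default_rng(3).standard_normal((A, T, D))

def make_mask(mode: str, t_now: int = 4) -> np.ndarray:
    mask = np.ones((A, T), dtype=bool)   # True = visible to the model
    mask[:, t_now:] = False              # hide every agent's future...
    if mode == "conditional":
        mask[0, :] = True                # ...but reveal agent 0's plan
    return mask

model_input = np.where(make_mask("conditional")[..., None], traj, 0.0)
print(model_input.shape)                 # (3, 10, 2): masked joint scene tensor
```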
arXiv Detail & Related papers (2021-06-15T20:20:44Z)
- A Consciousness-Inspired Planning Agent for Model-Based Reinforcement Learning [104.3643447579578]
We present an end-to-end, model-based deep reinforcement learning agent which dynamically attends to relevant parts of its state.
The design allows agents to learn to plan effectively by attending to the relevant objects, leading to better out-of-distribution generalization.
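A minimal sketch of such a bottleneck (our assumption of the mechanism, not the paper's architecture): score each object slot against a query and plan only over the top-k slots.
```python
import numpy as np

# Illustrative attention bottleneck: keep only the k most query-relevant
# object slots, so planning operates on a small, relevant set.

def attention_bottleneck(slots, query, k=2):
    scores = slots @ query                 # relevance of each object slot
    top = np.argsort(scores)[-k:]          # indices of the k most relevant
    return slots[top]

rng = np.random.default_rng(2)
slots = rng.standard_normal((8, 4))        # 8 object slots, 4-dim features
query = rng.standard_normal(4)             # stand-in for a learned query
print(attention_bottleneck(slots, query).shape)  # -> (2, 4): small set to plan over
```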
arXiv Detail & Related papers (2021-06-03T19:35:19Z)
- Behavior Priors for Efficient Reinforcement Learning [97.81587970962232]
We consider how information and architectural constraints can be combined with ideas from the probabilistic modeling literature to learn behavior priors.
We discuss how such latent variable formulations connect to related work on hierarchical reinforcement learning (HRL) and to mutual information and curiosity-based objectives.
We demonstrate the effectiveness of our framework by applying it to a range of simulated continuous control domains.
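Behavior-prior methods of this kind typically build on a KL-regularized return, trading reward against divergence from a prior policy pi0; the sketch below illustrates that standard objective with made-up numbers rather than the paper's exact formulation.
```python
import numpy as np

# Sketch of the standard KL-regularized objective behavior-prior methods
# build on (illustrative values; not the paper's exact formulation):
# J = sum_t gamma^t * (r_t - alpha * KL(pi(.|s_t) || pi0(.|s_t)))

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

def regularized_return(rewards, policies, prior, alpha=0.1, gamma=0.99):
    return sum(gamma**t * (r - alpha * kl(pi, prior))
               for t, (r, pi) in enumerate(zip(rewards, policies)))

prior = np.full(4, 0.25)                      # behavior prior over 4 actions
greedy = np.array([0.85, 0.05, 0.05, 0.05])   # policy deviating from the prior
print(regularized_return([1.0, 1.0], [greedy, greedy], prior))
```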
arXiv Detail & Related papers (2020-10-27T13:17:18Z)
- Goal-Aware Prediction: Learning to Model What Matters [105.43098326577434]
One of the fundamental challenges in using a learned forward dynamics model is the mismatch between the objective of the learned model and that of the downstream planner or policy.
We propose to direct prediction towards task-relevant information, enabling the model to be aware of the current task and encouraging it to model only relevant quantities of the state space.
We find that our method more effectively models the relevant parts of the scene conditioned on the goal, and as a result outperforms standard task-agnostic dynamics models and model-free reinforcement learning.
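One way to picture the idea (illustrative assumptions throughout, including which coordinates count as goal-relevant): train the dynamics model on the quantity the task cares about, so distractor dimensions carry no training signal.
```python
import numpy as np

# Illustrative contrast between a task-agnostic dynamics loss and a
# goal-aware one that penalizes only the goal-relevant quantity (here,
# by assumption, the distance computed over the first two coordinates).

def task_agnostic_loss(pred_next, true_next):
    return float(np.mean((pred_next - true_next) ** 2))   # models everything

def goal_aware_loss(pred_next, true_next, goal):
    # Only the predicted goal-relevant quantity is penalized.
    pred_d = np.linalg.norm(pred_next[:2] - goal)
    true_d = np.linalg.norm(true_next[:2] - goal)
    return float((pred_d - true_d) ** 2)

goal = np.array([1.0, 0.0])
true_next = np.array([0.9, 0.1, 5.0, -3.0])    # last two dims: distractors
pred_next = np.array([0.9, 0.1, 0.0, 0.0])     # wrong on the distractors only
print(task_agnostic_loss(pred_next, true_next))       # large: punished for distractors
print(goal_aware_loss(pred_next, true_next, goal))    # 0.0: task-relevant part correct
```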
arXiv Detail & Related papers (2020-07-14T16:42:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.