Active Causal Structure Learning with Latent Variables: Towards Learning to Detour in Autonomous Robots
- URL: http://arxiv.org/abs/2410.20894v1
- Date: Mon, 28 Oct 2024 10:21:26 GMT
- Title: Active Causal Structure Learning with Latent Variables: Towards Learning to Detour in Autonomous Robots
- Authors: Pablo de los Riscos, Fernando Corbacho
- Abstract summary: Artificial General Intelligence (AGI) Agents and Robots must be able to cope with ever-changing environments and tasks.
We claim that active causal structure learning with latent variables (ACSLWL) is a necessary component to build AGI agents and robots.
- Score: 49.1574468325115
- License:
- Abstract: Artificial General Intelligence (AGI) Agents and Robots must be able to cope with ever-changing environments and tasks. They must be able to actively construct new internal causal models of their interactions with the environment when new structural changes take place in the environment. Thus, we claim that active causal structure learning with latent variables (ACSLWL) is a necessary component to build AGI agents and robots. This paper describes how a complex planning and expectation-based detour behavior can be learned by ACSLWL when, unexpectedly, and for the first time, the simulated robot encounters a sort of transparent barrier in its pathway towards its target. ACSLWL consists of acting in the environment, discovering new causal relations, constructing new causal models, exploiting the causal models to maximize its expected utility, detecting possible latent variables when unexpected observations occur, and constructing new structures (internal causal models) with optimal estimation of the associated parameters, so as to cope efficiently with the newly encountered situations. That is, the agent must be able to construct new internal causal models that transform a previously unexpected and inefficient (sub-optimal) situation into a predictable situation with an optimal operating plan.
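Read as pseudocode, the abstract describes a perceive-act-learn loop: act, observe, update the causal model, and posit a latent variable whenever an observation contradicts what the model predicts. Below is a minimal Python sketch of such a loop; the environment interface (reset, step, actions, reward) and all component names are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of an active causal structure learning loop with latent
# variables (ACSLWL). Names and the environment interface are assumptions.
import random


class ACSLWLAgent:
    def __init__(self, env):
        self.env = env
        self.causal_model = {}   # (state, action) -> set of observed effects
        self.latents = []        # hypothesised hidden causes

    def expected_utility(self, state, action):
        effects = self.causal_model.get((state, action), set())
        return sum(self.env.reward(e) for e in effects)

    def act(self, state):
        # Exploit the causal model when it predicts something useful,
        # otherwise explore.
        actions = self.env.actions(state)
        best = max(actions, key=lambda a: self.expected_utility(state, a))
        return best if self.expected_utility(state, best) > 0 else random.choice(actions)

    def update(self, state, action, effect):
        predicted = self.causal_model.get((state, action))
        if predicted and effect not in predicted:
            # Unexpected observation: posit a latent variable and add a new
            # structure conditioned on it, instead of overwriting the model.
            latent = f"latent_{len(self.latents)}"
            self.latents.append(latent)
            self.causal_model[(state, action, latent)] = {effect}
        else:
            self.causal_model.setdefault((state, action), set()).add(effect)

    def run_episode(self, steps=100):
        state = self.env.reset()
        for _ in range(steps):
            action = self.act(state)
            effect, next_state = self.env.step(action)
            self.update(state, action, effect)
            state = next_state
```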
Related papers
- Causal Reinforcement Learning for Optimisation of Robot Dynamics in Unknown Environments [4.494898338391223]
This work introduces a novel Causal Reinforcement Learning approach to enhancing robotics operations.
Our proposed machine learning architecture enables robots to learn the causal relationships between the visual characteristics of the objects.
arXiv Detail & Related papers (2024-09-20T11:40:51Z)
- Foundation Models for Autonomous Robots in Unstructured Environments [15.517532442044962]
The study systematically reviews the application of foundation models in the two fields of robotics and unstructured environments.
Findings showed that the linguistic capabilities of LLMs have been utilized more than their other features for improving perception in human-robot interactions.
The use of LLMs showed more applications in project management and safety in construction, and in natural hazard detection in disaster management.
arXiv Detail & Related papers (2024-07-19T13:26:52Z)
- Variable-Agnostic Causal Exploration for Reinforcement Learning [56.52768265734155]
We introduce a novel framework, Variable-Agnostic Causal Exploration for Reinforcement Learning (VACERL).
Our approach automatically identifies crucial observation-action steps associated with key variables using attention mechanisms.
It constructs the causal graph connecting these steps, which guides the agent towards observation-action pairs with greater causal influence on task completion.
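As a rough illustration of that pipeline (not the authors' code), one can score observation-action steps by attention weight, keep the top-scoring ones, and link them by within-episode precedence into a graph that then supplies an exploration bonus; the top-k choice and bonus scale below are assumptions.

```python
# Illustrative sketch only: attention scores over observation-action steps
# are turned into a small precedence graph that can bias exploration.
from collections import defaultdict


def build_step_graph(trajectories, attention, top_k=10):
    """trajectories: list of episodes, each a list of hashable (obs, action) steps.
    attention: dict mapping each step to an importance score in [0, 1]."""
    crucial = set(sorted(attention, key=attention.get)[-top_k:])
    edges = defaultdict(int)
    for episode in trajectories:
        seen = []
        for step in episode:
            if step in crucial:
                for earlier in seen:
                    edges[(earlier, step)] += 1   # earlier crucial step precedes this one
                seen.append(step)
    return crucial, dict(edges)


def exploration_bonus(step, crucial, scale=0.1):
    """Small intrinsic reward for reaching a step judged causally important."""
    return scale if step in crucial else 0.0
```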
arXiv Detail & Related papers (2024-07-17T09:45:27Z)
- HAZARD Challenge: Embodied Decision Making in Dynamically Changing Environments [93.94020724735199]
HAZARD consists of three unexpected disaster scenarios, including fire, flood, and wind.
This benchmark enables us to evaluate autonomous agents' decision-making capabilities across various pipelines.
arXiv Detail & Related papers (2024-01-23T18:59:43Z)
- Neural-Logic Human-Object Interaction Detection [67.4993347702353]
We present LogicHOI, a new HOI detector that leverages neural-logic reasoning and Transformer to infer feasible interactions between entities.
Specifically, we modify the self-attention mechanism in vanilla Transformer, enabling it to reason over the ⟨human, action, object⟩ triplet and constitute novel interactions.
We formulate these two properties in first-order logic and ground them into continuous space to constrain the learning process of our approach, leading to improved performance and zero-shot generalization capabilities.
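A small sketch of what "grounding a first-order rule in continuous space" can look like in practice: a rule such as affords(object, action) → interacts(human, action, object) is relaxed into a differentiable truth value and penalised when violated. The rule and the Łukasiewicz-style relaxation are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: relax a first-order rule into a differentiable penalty.
import torch


def soft_implication(premise: torch.Tensor, conclusion: torch.Tensor) -> torch.Tensor:
    # Lukasiewicz implication: min(1, 1 - premise + conclusion); fully true
    # whenever the conclusion is at least as strong as the premise.
    return torch.clamp(1.0 - premise + conclusion, max=1.0)


def rule_penalty(affordance_prob: torch.Tensor, interaction_prob: torch.Tensor) -> torch.Tensor:
    """Mean violation of affords(o, a) -> interacts(h, a, o) over a batch of triplets."""
    return (1.0 - soft_implication(affordance_prob, interaction_prob)).mean()


# Toy usage: the third triplet violates the rule (high affordance, low interaction).
affordance = torch.tensor([0.9, 0.2, 0.8])
interaction = torch.tensor([0.95, 0.6, 0.3])
print(rule_penalty(affordance, interaction))
```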
arXiv Detail & Related papers (2023-11-16T11:47:53Z)
- Build generally reusable agent-environment interaction models [28.577502598559988]
This paper tackles the problem of how to pre-train a model and make it a generally reusable backbone for downstream task learning.
We propose a method that builds an agent-environment interaction model by learning domain-invariant successor features from the agent's vast experience across various tasks, and then discretizing them into behavior prototypes.
We provide preliminary results that show downstream task learning based on a pre-trained embodied set structure can handle unseen changes in task objectives, environmental dynamics and sensor modalities.
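One way to picture "discretizing successor features into behavior prototypes" (an assumption about the mechanics, not the paper's procedure) is a plain k-means pass over per-segment successor-feature vectors:

```python
# Sketch: cluster successor-feature vectors into "behavior prototypes".
import numpy as np


def behavior_prototypes(successor_features, n_prototypes=16, iters=50, seed=0):
    """successor_features: (N, d) array with one vector per behaviour segment.
    Returns an (n_prototypes, d) array of prototype centres (plain k-means)."""
    rng = np.random.default_rng(seed)
    X = np.asarray(successor_features, dtype=float)
    centres = X[rng.choice(len(X), size=n_prototypes, replace=False)].copy()
    for _ in range(iters):
        # Assign every vector to its nearest prototype, then recentre.
        dists = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=-1)
        assign = dists.argmin(axis=1)
        for k in range(n_prototypes):
            if np.any(assign == k):
                centres[k] = X[assign == k].mean(axis=0)
    return centres


# Toy usage: 200 random 32-dimensional successor-feature vectors.
protos = behavior_prototypes(np.random.default_rng(1).normal(size=(200, 32)))
print(protos.shape)   # (16, 32)
```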
arXiv Detail & Related papers (2022-11-13T07:33:14Z)
- REPTILE: A Proactive Real-Time Deep Reinforcement Learning Self-adaptive Framework [0.6335848702857039]
A general framework is proposed to support the development of software systems that are able to adapt their behaviour according to changes in the operating environment.
The proposed approach, named REPTILE, works in a complete proactive manner and relies on Deep Reinforcement Learning-based agents to react to events.
In our framework, two types of novelties are taken into account: those related to the context/environment and those related to the physical architecture itself.
The framework, predicting those novelties before their occurrence, extracts time-changing models of the environment and uses a suitable Markov Decision Process to deal with the real-time setting.
arXiv Detail & Related papers (2022-03-28T12:38:08Z)
- Modelling Behaviour Change using Cognitive Agent Simulations [0.0]
This paper presents work-in-progress research to apply selected behaviour change theories to simulated agents.
The research focuses on complex agent architectures required for self-determined goal achievement in adverse circumstances.
arXiv Detail & Related papers (2021-10-16T19:19:08Z)
- GEM: Group Enhanced Model for Learning Dynamical Control Systems [78.56159072162103]
We build effective dynamical models that are amenable to sample-based learning.
We show that learning the dynamics on a Lie algebra vector space is more effective than learning a direct state transition model.
This work sheds light on a connection between learning of dynamics and Lie group properties, which opens doors for new research directions.
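For intuition (not the paper's model), predicting the next orientation "on the Lie algebra" can mean regressing a small rotation vector in so(3) and composing it with the current pose through the exponential map, which keeps every predicted state a valid rotation:

```python
# Sketch: step a pose by a predicted so(3) increment via the exponential map.
import numpy as np
from scipy.spatial.transform import Rotation


def step_orientation(current: Rotation, increment: np.ndarray) -> Rotation:
    """increment is a 3-vector in the Lie algebra so(3) (axis * angle)."""
    return Rotation.from_rotvec(increment) * current


# Toy usage: constant angular velocity of 0.1 rad/step about the z-axis.
pose = Rotation.identity()
for _ in range(10):
    pose = step_orientation(pose, np.array([0.0, 0.0, 0.1]))
print(pose.as_rotvec())   # approximately [0, 0, 1.0]
```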
arXiv Detail & Related papers (2021-04-07T01:08:18Z)
- CausalWorld: A Robotic Manipulation Benchmark for Causal Structure and Transfer Learning [138.40338621974954]
CausalWorld is a benchmark for causal structure and transfer learning in a robotic manipulation environment.
Tasks consist of constructing 3D shapes from a given set of blocks - inspired by how children learn to build complex structures.
arXiv Detail & Related papers (2020-10-08T23:01:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.