Causal Navigation by Continuous-time Neural Networks
- URL: http://arxiv.org/abs/2106.08314v1
- Date: Tue, 15 Jun 2021 17:45:32 GMT
- Title: Causal Navigation by Continuous-time Neural Networks
- Authors: Charles Vorbach, Ramin Hasani, Alexander Amini, Mathias Lechner,
Daniela Rus
- Abstract summary: We propose a theoretical and experimental framework for learning causal representations using continuous-time neural networks.
We evaluate our method in the context of visual-control learning of drones over a series of complex tasks.
- Score: 108.84958284162857
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Imitation learning enables high-fidelity, vision-based learning of policies
within rich, photorealistic environments. However, such techniques often rely
on traditional discrete-time neural models and face difficulties in
generalizing to domain shifts by failing to account for the causal
relationships between the agent and the environment. In this paper, we propose
a theoretical and experimental framework for learning causal representations
using continuous-time neural networks, specifically over their discrete-time
counterparts. We evaluate our method in the context of visual-control learning
of drones over a series of complex tasks, ranging from short- and long-term
navigation, to chasing static and dynamic objects through photorealistic
environments. Our results demonstrate that causal continuous-time deep models
can perform robust navigation tasks, where advanced recurrent models fail.
These models learn complex causal control representations directly from raw
visual inputs and scale to solve a variety of tasks using imitation learning.
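The abstract's key ingredient, continuous-time neural networks, evolves hidden states as an ordinary differential equation rather than in fixed discrete steps. As a rough, self-contained illustration (not the authors' implementation; the cell form, names, and dimensions below are assumptions), a liquid-time-constant-style unit can be rolled forward with an explicit Euler solver:

```python
import numpy as np

def ltc_step(h, x, W_h, W_x, b, tau, dt=0.05):
    """One explicit-Euler step of a continuous-time cell:
    dh/dt = -h / tau + tanh(W_h @ h + W_x @ x + b).
    The leak term -h/tau gives each neuron its own time constant."""
    dh = -h / tau + np.tanh(W_h @ h + W_x @ x + b)
    return h + dt * dh

rng = np.random.default_rng(0)
hidden, inputs = 8, 4
W_h = rng.normal(scale=0.1, size=(hidden, hidden))
W_x = rng.normal(scale=0.1, size=(hidden, inputs))
b = np.zeros(hidden)
tau = np.ones(hidden)  # per-neuron time constants (learnable in practice)

h = np.zeros(hidden)
for _ in range(100):  # integrate the ODE under a constant input
    h = ltc_step(h, np.ones(inputs), W_h, W_x, b, tau)
print(h.shape)  # (8,)
```

Because the state is defined at every instant, the step size `dt` can be varied at inference time, which is one way such models differ from discrete-time recurrent networks.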
Related papers
- Continual Learning of Conjugated Visual Representations through Higher-order Motion Flows [21.17248975377718]
Learning with neural networks presents several challenges due to the non-i.i.d. nature of the data.
It also offers novel opportunities to develop representations that are consistent with the information flow.
In this paper we investigate the case of unsupervised continual learning of pixel-wise features subject to multiple motion-induced constraints.
arXiv Detail & Related papers (2024-09-16T19:08:32Z)
- A Role of Environmental Complexity on Representation Learning in Deep Reinforcement Learning Agents [3.7314353481448337]
We developed a simulated navigation environment to train deep reinforcement learning agents.
We modulated the frequency of exposure to a shortcut and navigation cue, leading to the development of artificial agents with differing abilities.
We examined the encoded representations in artificial neural networks driving these agents, revealing intricate dynamics in representation learning.
arXiv Detail & Related papers (2024-07-03T18:27:26Z)
- Generative learning for nonlinear dynamics [7.6146285961466]
Generative machine learning models create realistic outputs far beyond their training data.
These successes suggest that generative models learn to effectively parametrize and sample arbitrarily complex distributions.
We aim to connect these classical works to emerging themes in large-scale generative statistical learning.
arXiv Detail & Related papers (2023-11-07T16:53:56Z)
- Interpretable Computer Vision Models through Adversarial Training: Unveiling the Robustness-Interpretability Connection [0.0]
Interpretability is as essential as robustness when we deploy the models to the real world.
Standard models are more susceptible to adversarial attacks than robust ones, and their learned representations are less meaningful to humans.
arXiv Detail & Related papers (2023-07-04T13:51:55Z)
- Do We Need an Encoder-Decoder to Model Dynamical Systems on Networks? [18.92828441607381]
We show that embeddings induce a model that fits observations well but simultaneously has incorrect dynamical behaviours.
We propose a simple embedding-free alternative based on parametrising two additive vector-field components.
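The embedding-free alternative summarised above parametrises dynamics on a graph as a sum of two additive vector-field components: a per-node self term and a neighbour-coupling term. A minimal sketch of that decomposition (the toy graph, function forms, and names are illustrative assumptions, not the paper's code):

```python
import numpy as np

def self_dynamics(x):
    # per-node relaxation term (illustrative choice)
    return -x

def coupling(xi, xj):
    # pairwise interaction between the states of neighbouring nodes
    return np.tanh(xj - xi)

def dxdt(x, adjacency):
    """dx_i/dt = f(x_i) + sum_j A_ij * g(x_i, x_j):
    two additive vector-field components, no latent embedding."""
    out = self_dynamics(x).copy()
    n = len(x)
    for i in range(n):
        for j in range(n):
            if adjacency[i, j]:
                out[i] += coupling(x[i], x[j])
    return out

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])  # path graph on 3 nodes
x = np.array([1.0, 0.0, -1.0])
for _ in range(200):  # Euler-integrate; the leak plus diffusion drive consensus
    x = x + 0.01 * dxdt(x, A)
print(x)
```

Working directly on node states like this avoids the encoder-decoder round trip that, per the entry above, can fit observations while inducing wrong dynamical behaviour.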
arXiv Detail & Related papers (2023-05-20T12:41:47Z)
- Predictive Experience Replay for Continual Visual Control and Forecasting [62.06183102362871]
We present a new continual learning approach for visual dynamics modeling and explore its efficacy in visual control and forecasting.
We first propose the mixture world model that learns task-specific dynamics priors with a mixture of Gaussians, and then introduce a new training strategy to overcome catastrophic forgetting.
Our model remarkably outperforms the naive combinations of existing continual learning and visual RL algorithms on DeepMind Control and Meta-World benchmarks with continual visual control tasks.
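The mixture world model in this entry keeps task-specific dynamics priors as components of a mixture of Gaussians. As a schematic of the underlying density computation only (all parameters, shapes, and names below are made up for illustration):

```python
import numpy as np

def mixture_logpdf(z, means, sigmas, weights):
    """Log-density of z under a mixture of isotropic Gaussians,
    log sum_k w_k N(z; mu_k, sigma_k^2 I), computed with log-sum-exp
    for numerical stability."""
    d = z.shape[0]
    comp = []
    for mu, s, w in zip(means, sigmas, weights):
        ll = (-0.5 * np.sum(((z - mu) / s) ** 2)
              - d * np.log(s)
              - 0.5 * d * np.log(2 * np.pi)
              + np.log(w))
        comp.append(ll)
    comp = np.array(comp)
    m = comp.max()
    return float(m + np.log(np.sum(np.exp(comp - m))))

means = [np.zeros(3), np.full(3, 4.0)]  # one component per task, schematically
sigmas = [1.0, 1.0]
weights = [0.5, 0.5]
print(mixture_logpdf(np.zeros(3), means, sigmas, weights))
```

Keeping one component per task gives the model a way to score which dynamics regime a latent belongs to, which is one route to mitigating catastrophic forgetting.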
arXiv Detail & Related papers (2023-03-12T05:08:03Z)
- Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z)
- Learning Contact Dynamics using Physically Structured Neural Networks [81.73947303886753]
We use connections between deep neural networks and differential equations to design a family of deep network architectures for representing contact dynamics between objects.
We show that these networks can learn discontinuous contact events in a data-efficient manner from noisy observations.
Our results indicate that an idealised form of touch feedback is a key component of making this learning problem tractable.
arXiv Detail & Related papers (2021-02-22T17:33:51Z)
- Learning Temporal Dynamics from Cycles in Narrated Video [85.89096034281694]
We propose a self-supervised solution to the problem of learning to model how the world changes as time elapses.
Our model learns modality-agnostic functions to predict forward and backward in time, which must undo each other when composed.
We apply the learned dynamics model without further training to various tasks, such as predicting future action and temporally ordering sets of images.
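The forward and backward predictors in this entry are trained so that composing them approximates the identity. A toy version of that cycle-consistency objective on vector states (linear models and names are illustrative assumptions; the paper learns modality-agnostic networks over video representations):

```python
import numpy as np

# Toy linear forward/backward dynamics models.
F = np.array([[1.0, 0.1],
              [0.0, 1.0]])   # predict one step forward in time
B = np.linalg.inv(F)         # backward model (here: the exact inverse)

def cycle_loss(states, F, B):
    """Mean squared error between s and B(F(s)): stepping forward and
    then backward should undo each other on every state."""
    recon = (B @ (F @ states.T)).T
    return float(np.mean((states - recon) ** 2))

states = np.random.default_rng(1).normal(size=(32, 2))
loss = cycle_loss(states, F, B)
print(loss)  # ~0, since B exactly inverts F here
```

In the learned setting neither model is an exact inverse, so this loss is minimised jointly with the predictive objectives rather than being identically zero.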
arXiv Detail & Related papers (2021-01-07T02:41:32Z)
- Goal-Aware Prediction: Learning to Model What Matters [105.43098326577434]
One of the fundamental challenges in using a learned forward dynamics model is the mismatch between the objective of the learned model and that of the downstream planner or policy.
We propose to direct prediction towards task relevant information, enabling the model to be aware of the current task and encouraging it to only model relevant quantities of the state space.
We find that our method more effectively models the relevant parts of the scene conditioned on the goal, and as a result outperforms standard task-agnostic dynamics models and model-free reinforcement learning.
arXiv Detail & Related papers (2020-07-14T16:42:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.