Deluca -- A Differentiable Control Library: Environments, Methods, and
Benchmarking
- URL: http://arxiv.org/abs/2102.09968v1
- Date: Fri, 19 Feb 2021 15:06:47 GMT
- Title: Deluca -- A Differentiable Control Library: Environments, Methods, and
Benchmarking
- Authors: Paula Gradu, John Hallman, Daniel Suo, Alex Yu, Naman Agarwal, Udaya
Ghai, Karan Singh, Cyril Zhang, Anirudha Majumdar, Elad Hazan
- Abstract summary: We present an open-source library of differentiable physics and robotics environments.
The library features several popular environments, including classical control settings from OpenAI Gym.
We give several use-cases of new scientific results obtained using the library.
- Score: 52.44199258132215
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present an open-source library of natively differentiable physics and
robotics environments, accompanied by gradient-based control methods and a
benchmarking suite. The introduced environments allow auto-differentiation
through the simulation dynamics, and thereby permit fast training of
controllers. The library features several popular environments, including
classical control settings from OpenAI Gym. We also provide a novel
differentiable environment, based on deep neural networks, that simulates
medical ventilation. We give several use-cases of new scientific results
obtained using the library. This includes a medical ventilator simulator and
controller, an adaptive control method for time-varying linear dynamical
systems, and new gradient-based methods for control of linear dynamical systems
with adversarial perturbations.
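The core idea, differentiating through the simulation dynamics so controllers can be trained directly by gradient descent, can be illustrated with a minimal self-contained sketch. This is plain Python with a hand-written adjoint (reverse-mode) pass standing in for the autodiff an actual framework would provide; the scalar linear system and all names are illustrative assumptions, not deluca's API:

```python
def rollout_cost(k, a=1.1, b=0.5, x0=1.0, horizon=10):
    """Simulate x_{t+1} = a*x_t + b*u_t under feedback u_t = -k*x_t
    and return the quadratic rollout cost sum_t (x_t^2 + u_t^2)."""
    x, cost = x0, 0.0
    for _ in range(horizon):
        u = -k * x
        cost += x * x + u * u
        x = a * x + b * u
    return cost

def rollout_grad(k, a=1.1, b=0.5, x0=1.0, horizon=10):
    """Exact gradient dJ/dk obtained by backpropagating through the
    rollout -- the computation an autodiff framework performs for us."""
    xs, x = [], x0
    for _ in range(horizon):              # forward pass: store states
        xs.append(x)
        x = a * x + b * (-k * x)
    lam, grad = 0.0, 0.0                  # lam = dJ/dx_{t+1}
    for x in reversed(xs):                # backward (adjoint) pass
        u = -k * x
        g_u = 2.0 * u + b * lam           # dJ/du_t
        grad += g_u * (-x)                # u_t = -k*x_t, so du_t/dk = -x_t
        lam = 2.0 * x + a * lam - k * g_u # dJ/dx_t
    return grad

# Train the feedback gain by plain gradient descent through the dynamics.
k, lr = 0.0, 1e-3
for _ in range(300):
    k -= lr * rollout_grad(k)
```

Because the gradient flows through every simulation step, a few hundred descent iterations suffice here, whereas a gradient-free method would need many rollouts per update to estimate the same search direction.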
Related papers
- Autonomous Vehicle Controllers From End-to-End Differentiable Simulation [60.05963742334746]
We propose a differentiable simulator and design an analytic policy gradients (APG) approach to training AV controllers.
Our proposed framework brings the differentiable simulator into an end-to-end training loop, where gradients of environment dynamics serve as a useful prior to help the agent learn a more grounded policy.
We find significant improvements in performance and robustness to noise in the dynamics, as well as overall more intuitive human-like handling.
arXiv Detail & Related papers (2024-09-12T11:50:06Z) - NeuralFluid: Neural Fluidic System Design and Control with Differentiable Simulation [36.0759668955729]
We present a novel framework to explore neural control and design of complex fluidic systems with dynamic solid boundaries.
Our system features a fast differentiable Navier-Stokes solver with solid-fluid interface handling.
We present a benchmark of design, control, and learning tasks on high-fidelity, high-resolution dynamic fluid environments.
arXiv Detail & Related papers (2024-05-22T21:16:59Z) - Learning Variable Impedance Control for Aerial Sliding on Uneven
Heterogeneous Surfaces by Proprioceptive and Tactile Sensing [42.27572349747162]
We present a learning-based adaptive control strategy for aerial sliding tasks.
The proposed controller structure combines data-driven and model-based control methods.
Compared to fine-tuned state-of-the-art interaction control methods, we achieve reduced tracking error and improved disturbance rejection.
arXiv Detail & Related papers (2022-06-28T16:28:59Z) - Control of Two-way Coupled Fluid Systems with Differentiable Solvers [22.435002906710803]
We investigate the use of deep neural networks to control complex nonlinear dynamical systems.
We solve the Navier-Stokes equations with two-way coupling, which gives rise to nonlinear perturbations.
We show that controllers trained with our approach outperform a variety of classical and learned alternatives in terms of evaluation metrics and generalization capabilities.
arXiv Detail & Related papers (2022-06-01T09:12:08Z) - An Optical Control Environment for Benchmarking Reinforcement Learning
Algorithms [7.6418236982756955]
Deep reinforcement learning has the potential to address various scientific problems.
In this paper, we implement an optics simulation environment for learning based controllers.
Results demonstrate the superiority of the environment over traditional simulation environments.
arXiv Detail & Related papers (2022-03-23T00:59:35Z) - DriveGAN: Towards a Controllable High-Quality Neural Simulation [147.6822288981004]
We introduce a novel high-quality neural simulator referred to as DriveGAN.
DriveGAN achieves controllability by disentangling different components without supervision.
We train DriveGAN on multiple datasets, including 160 hours of real-world driving data.
arXiv Detail & Related papers (2021-04-30T15:30:05Z) - PlasticineLab: A Soft-Body Manipulation Benchmark with Differentiable
Physics [89.81550748680245]
We introduce a new differentiable physics benchmark called PlasticineLab.
In each task, the agent uses manipulators to deform the plasticine into the desired configuration.
We evaluate several existing reinforcement learning (RL) methods and gradient-based methods on this benchmark.
arXiv Detail & Related papers (2021-04-07T17:59:23Z) - Learning-based vs Model-free Adaptive Control of a MAV under Wind Gust [0.2770822269241973]
Navigation problems under unknown varying conditions are among the most important and well-studied problems in the control field.
Recent model-free adaptive control methods aim to remove the dependence on an accurate model of the plant by learning its physical characteristics directly from sensor feedback.
We propose a conceptually simple learning-based approach composed of a full state feedback controller, tuned robustly by a deep reinforcement learning framework.
arXiv Detail & Related papers (2021-01-29T10:13:56Z) - Learning to Continuously Optimize Wireless Resource In Episodically
Dynamic Environment [55.91291559442884]
This work develops a methodology that enables data-driven methods to continuously learn and optimize in a dynamic environment.
We propose to build the notion of continual learning into the modeling process of learning wireless systems.
Our design is based on a novel min-max formulation which ensures a certain "fairness" across different data samples.
arXiv Detail & Related papers (2020-11-16T08:24:34Z) - Learning a Contact-Adaptive Controller for Robust, Efficient Legged
Locomotion [95.1825179206694]
We present a framework that synthesizes robust controllers for a quadruped robot.
A high-level controller learns to choose from a set of primitives in response to changes in the environment.
A low-level controller utilizes an established control method to robustly execute the primitives.
arXiv Detail & Related papers (2020-09-21T16:49:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.