How Are Learned Perception-Based Controllers Impacted by the Limits of Robust Control?
- URL: http://arxiv.org/abs/2104.00827v1
- Date: Fri, 2 Apr 2021 00:31:31 GMT
- Title: How Are Learned Perception-Based Controllers Impacted by the Limits of Robust Control?
- Authors: Jingxi Xu, Bruce Lee, Nikolai Matni, Dinesh Jayaraman
- Abstract summary: We revisit the difficulty of optimal control problems in terms of system properties such as the minimum eigenvalues of controllability/observability Gramians.
We ask: to what extent are quantifiable control and perceptual difficulty metrics of a task predictive of the performance and sample complexity of data-driven controllers?
Our results show that the fundamental limits of robust control have corresponding implications for the sample-efficiency and performance of learned perception-based controllers.
- Score: 17.775878968489852
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The difficulty of optimal control problems has classically been characterized
in terms of system properties such as minimum eigenvalues of
controllability/observability Gramians. We revisit these characterizations in
the context of the increasing popularity of data-driven techniques like
reinforcement learning (RL), and in control settings where input observations
are high-dimensional images and transition dynamics are unknown. Specifically,
we ask: to what extent are quantifiable control and perceptual difficulty
metrics of a task predictive of the performance and sample complexity of
data-driven controllers? We modulate two different types of partial
observability in a cartpole "stick-balancing" problem -- (i) the height of one
visible fixation point on the cartpole, which can be used to tune fundamental
limits of performance achievable by any controller, and (ii) the level of
perception noise in the fixation point position inferred from depth or RGB
images of the cartpole. In these settings, we empirically study two popular
families of controllers: RL and system identification-based $H_\infty$ control,
using visually estimated system state. Our results show that the fundamental
limits of robust control have corresponding implications for the
sample-efficiency and performance of learned perception-based controllers.
Visit our project website https://jxu.ai/rl-vs-control-web for more
information.
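The difficulty metric the abstract refers to, the minimum eigenvalue of the controllability Gramian, can be sketched numerically for a toy linear system. This is an illustrative example only: the A and B matrices below are hypothetical pendulum-like dynamics, not taken from the paper, and the stability shift `s` is an assumption made so that the Lyapunov equation has a solution.

```python
import numpy as np

# Illustrative linearized system (A, B are hypothetical, not from the paper).
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])   # pendulum-like, unstable
B = np.array([[0.0],
              [1.0]])

# For a stable system, the controllability Gramian W solves the Lyapunov equation
#   A W + W A^T + B B^T = 0.
# Our A is unstable, so shift it: A_s = A - s*I is Hurwitz for s > max Re(eig(A)).
s = 2.0
A_s = A - s * np.eye(2)

# Solve the Lyapunov equation by vectorization:
#   (I (x) A_s + A_s (x) I) vec(W) = -vec(B B^T)
n = A_s.shape[0]
K = np.kron(np.eye(n), A_s) + np.kron(A_s, np.eye(n))
W = np.linalg.solve(K, -(B @ B.T).ravel()).reshape(n, n)

# A small minimum eigenvalue means some state direction is hard to excite,
# i.e. a quantitatively harder control task.
lam_min = np.linalg.eigvalsh(W).min()
print(f"min eigenvalue of shifted controllability Gramian: {lam_min:.4f}")
```

In larger examples one would use a dedicated Lyapunov solver (e.g. from SciPy) instead of the Kronecker-product construction, which scales as n^2 unknowns.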
Related papers
- Fine-grained Controllable Video Generation via Object Appearance and Context [74.23066823064575]
We propose fine-grained controllable video generation (FACTOR) to achieve detailed control.
FACTOR aims to control objects' appearances and context, including their location and category.
Our method achieves controllability of object appearances without finetuning, which reduces the per-subject optimization efforts for the users.
arXiv Detail & Related papers (2023-12-05T17:47:33Z)
- Bridging Dimensions: Confident Reachability for High-Dimensional Controllers [3.202200341692044]
This paper takes a step towards connecting exhaustive closed-loop verification with high-dimensional controllers.
Our key insight is that the behavior of a high-dimensional controller can be approximated with several low-dimensional controllers.
Then, we inflate low-dimensional reachability results with statistical approximation errors, yielding a high-confidence reachability guarantee for the high-dimensional controller.
arXiv Detail & Related papers (2023-11-08T17:26:38Z)
- Improving the Performance of Robust Control through Event-Triggered Learning [74.57758188038375]
We propose an event-triggered learning algorithm that decides when to learn in the face of uncertainty in the LQR problem.
We demonstrate improved performance over a robust controller baseline in a numerical example.
arXiv Detail & Related papers (2022-07-28T17:36:37Z)
- Deep Reinforcement Learning Aided Platoon Control Relying on V2X Information [78.18186960475974]
The impact of Vehicle-to-Everything (V2X) communications on platoon control performance is investigated.
Our objective is to find the specific set of information that should be shared among the vehicles for the construction of the most appropriate state space.
More valuable information is given higher transmission priority, since including it in the state space is more likely to offset the negative effect of a higher state dimension.
arXiv Detail & Related papers (2022-03-28T02:11:54Z)
- Steady-State Error Compensation in Reference Tracking and Disturbance Rejection Problems for Reinforcement Learning-Based Control [0.9023847175654602]
Reinforcement learning (RL) is a promising, upcoming topic in automatic control applications.
Initiative action state augmentation (IASA) for actor-critic-based RL controllers is introduced.
This augmentation does not require any expert knowledge, leaving the approach model free.
arXiv Detail & Related papers (2022-01-31T16:29:19Z)
- Learning Robust Output Control Barrier Functions from Safe Expert Demonstrations [50.37808220291108]
This paper addresses learning safe output feedback control laws from partial observations of expert demonstrations.
We first propose robust output control barrier functions (ROCBFs) as a means to guarantee safety.
We then formulate an optimization problem to learn ROCBFs from expert demonstrations that exhibit safe system behavior.
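As a toy illustration of the barrier-function idea behind this approach (a minimal state-feedback sketch under simplifying assumptions, not the paper's robust output-feedback ROCBF formulation), consider a 1-D single integrator x' = u kept inside the safe set {x >= 0} with barrier h(x) = x:

```python
# Minimal control barrier function (CBF) safety filter for x' = u,
# safe set {x >= 0}, barrier h(x) = x. Toy sketch, not the ROCBF method.
def cbf_filter(x, u_nom, alpha=1.0):
    """Return the input closest to u_nom satisfying h' = u >= -alpha * h."""
    return max(u_nom, -alpha * x)

# Simulate: a nominal controller pushing toward x = -1 (unsafe) gets clipped,
# so the state decays toward the boundary but never crosses it.
x, dt = 1.0, 0.01
for _ in range(1000):
    u = cbf_filter(x, u_nom=-5.0)
    x += dt * u
print(f"final state: {x:.6f}")
```

The filter is the closed-form solution of the usual CBF quadratic program in one dimension; with vector inputs or output feedback, one would solve a small QP at each step instead.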
arXiv Detail & Related papers (2021-11-18T23:21:00Z)
- Sparsity in Partially Controllable Linear Systems [56.142264865866636]
We study partially controllable linear dynamical systems specified by an underlying sparsity pattern.
Our results characterize those state variables which are irrelevant for optimal control.
arXiv Detail & Related papers (2021-10-12T16:41:47Z)
- Is Disentanglement enough? On Latent Representations for Controllable Music Generation [78.8942067357231]
In the absence of a strong generative decoder, disentanglement does not necessarily imply controllability.
The structure of the latent space with respect to the VAE-decoder plays an important role in boosting the ability of a generative model to manipulate different attributes.
arXiv Detail & Related papers (2021-08-01T18:37:43Z)
- Residual Feedback Learning for Contact-Rich Manipulation Tasks with Uncertainty [22.276925045008788]
Residual feedback learning offers a formulation to improve existing controllers with reinforcement learning (RL).
We show superior performance of our approach on a contact-rich peg-insertion task under position and orientation uncertainty.
arXiv Detail & Related papers (2021-06-08T13:06:35Z)
- Comparison of Model Predictive and Reinforcement Learning Methods for Fault Tolerant Control [2.524528674141466]
We present two adaptive fault-tolerant control schemes for a discrete time system based on hierarchical reinforcement learning.
Experiments demonstrate that reinforcement learning-based controllers perform more robustly than model predictive controllers under faults, partially observable system models, and varying sensor noise levels.
arXiv Detail & Related papers (2020-08-10T20:22:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.