C3: Learning Congestion Controllers with Formal Certificates
- URL: http://arxiv.org/abs/2412.10915v1
- Date: Sat, 14 Dec 2024 18:02:50 GMT
- Title: C3: Learning Congestion Controllers with Formal Certificates
- Authors: Chenxi Yang, Divyanshu Saxena, Rohit Dwivedula, Kshiteej Mahajan, Swarat Chaudhuri, Aditya Akella
- Abstract summary: C3 is a new learning framework for congestion control that integrates the concept of formal certification in the learning loop.
C3-trained controllers provide both adaptability and worst-case reliability across a range of network conditions.
- Score: 14.750230453127413
- Abstract: Learning-based congestion controllers offer better adaptability compared to traditional heuristic algorithms. However, the inherent unreliability of learning techniques can cause learning-based controllers to behave poorly, creating a need for formal guarantees. While methods for formally verifying learned congestion controllers exist, these methods offer binary feedback that cannot optimize the controller toward better behavior. We improve this state-of-the-art via C3, a new learning framework for congestion control that integrates the concept of formal certification in the learning loop. C3 uses an abstract interpreter that can produce robustness and performance certificates to guide the training process, rewarding models that are robust and performant even on worst-case inputs. Our evaluation demonstrates that unlike state-of-the-art learned controllers, C3-trained controllers provide both adaptability and worst-case reliability across a range of network conditions.
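The abstract describes the mechanism at a high level only. As a rough illustration, the sketch below shows how an interval abstract interpreter can soundly bound a small ReLU policy's outputs over a box of perturbed inputs and turn the worst-case bound into a training penalty. It is written in the spirit of C3, not taken from it; every name here (interval_forward, certified_penalty, rate_floor) is an assumption.

```python
import numpy as np

# Minimal sketch of certificate-guided training in the spirit of C3
# (illustrative names, not the authors' implementation). An interval
# abstract interpreter propagates a box of possible inputs through a
# small ReLU policy, yielding sound bounds on every reachable output.

def interval_affine(lo, hi, W, b):
    """Propagate the input box [lo, hi] through the affine map x @ W + b."""
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    mid = center @ W + b
    rad = radius @ np.abs(W)
    return mid - rad, mid + rad

def interval_forward(lo, hi, layers):
    """Bound a ReLU MLP's output over every input in the box."""
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(layers) - 1:            # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

def certified_penalty(layers, obs, eps, rate_floor):
    """Penalize policies whose worst-case action, over all observations
    within eps of obs, can drop below a performance floor; this plays the
    role of the certificates that guide training toward robust models."""
    lo, _ = interval_forward(obs - eps, obs + eps, layers)
    return float(np.maximum(rate_floor - lo, 0.0).sum())

# A trainer would then maximize: empirical_reward - lam * certified_penalty,
# rewarding models that remain performant even on worst-case inputs.
```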
Related papers
- Transfer of Safety Controllers Through Learning Deep Inverse Dynamics Model [4.7962647777554634]
Control barrier certificates have proven effective in formally guaranteeing the safety of control systems.
Designing a control barrier certificate, however, is a time-consuming and computationally expensive endeavor.
We propose a validity condition that, when met, guarantees correctness of the controller.
arXiv Detail & Related papers (2024-05-22T15:28:43Z)
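For readers unfamiliar with barrier certificates, the one-dimensional example below shows the generic control-barrier condition the entry above relies on. It illustrates the textbook idea only, not this paper's transfer method; the integrator dynamics are an assumption.

```python
# Generic illustration of a control barrier condition (not this paper's
# method): keep h(x) >= 0 by choosing the input closest to the nominal
# one that satisfies dh/dt >= -alpha * h(x). For the 1-D integrator
# xdot = u with h(x) = x, that condition reduces to u >= -alpha * x.

def cbf_filter(x, u_nominal, alpha=1.0):
    """Minimally modify u_nominal so the barrier condition holds."""
    return max(u_nominal, -alpha * x)
```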
- CCM: Adding Conditional Controls to Text-to-Image Consistency Models [89.75377958996305]
We consider alternative strategies for adding ControlNet-like conditional control to Consistency Models.
A lightweight adapter can be jointly optimized under multiple conditions through Consistency Training.
We study these three solutions across various conditional controls, including edge, depth, human pose, low-resolution image and masked image.
arXiv Detail & Related papers (2023-12-12T04:16:03Z)
- Reliability Quantification of Deep Reinforcement Learning-based Control [0.0]
This study proposes a method for quantifying the reliability of DRL-based control.
Reliability is quantified using two neural networks: a reference network and an evaluator network.
The proposed method was applied to the problem of switching trained models depending on the state.
arXiv Detail & Related papers (2023-09-29T04:49:49Z)
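The summary names the two ingredients but not how they interact. The sketch below is one plausible reading of state-dependent model switching driven by reliability scores, with hypothetical interfaces (controllers, evaluators, fallback) rather than the paper's construction.

```python
# Hypothetical sketch of switching trained models by state, driven by a
# per-model reliability score; the paper's reference/evaluator pairing
# may be wired differently.

def select_action(state, controllers, evaluators, fallback, threshold=0.5):
    """Use the controller whose evaluator reports the highest reliability
    for this state; fall back to a safe default if none is trusted."""
    scores = [evaluate(state) for evaluate in evaluators]   # scores in [0, 1]
    best = max(range(len(scores)), key=scores.__getitem__)
    if scores[best] < threshold:          # no learned model is reliable here
        return fallback(state)
    return controllers[best](state)
```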
- A General Framework for Verification and Control of Dynamical Models via Certificate Synthesis [54.959571890098786]
We provide a framework to encode system specifications and define corresponding certificates.
We present an automated approach to formally synthesise controllers and certificates.
Our approach contributes to the broad field of safe learning for control, exploiting the flexibility of neural networks.
arXiv Detail & Related papers (2023-09-12T09:37:26Z)
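A common way to automate such synthesis is a counterexample-guided loop between a learner and a verifier. The sketch below shows that generic pattern under assumed interfaces (train_step, verify), not this framework's actual API.

```python
# Generic counterexample-guided synthesis loop (interfaces are assumed):
# fit a neural certificate on sampled states, ask a verifier for a
# counterexample, and fold any failure case back into training.

def synthesize_certificate(train_step, verify, samples, max_iters=100):
    """Alternate learning and formal verification until the check passes."""
    for _ in range(max_iters):
        train_step(samples)               # fit certificate (and controller)
        counterexample = verify()         # e.g., SMT query or bound propagation
        if counterexample is None:
            return True                   # certificate formally verified
        samples.append(counterexample)    # refine on the failure case
    return False                          # budget exhausted without a proof
```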
- A stabilizing reinforcement learning approach for sampled systems with partially unknown models [0.0]
We suggest a method to guarantee practical stability of the system-controller closed loop in a purely online learning setting.
To achieve the claimed results, we employ techniques of classical adaptive control.
The method is tested on adaptive traction control and cruise control, where it significantly reduces cost.
arXiv Detail & Related papers (2022-08-31T09:20:14Z)
- Improving the Performance of Robust Control through Event-Triggered Learning [74.57758188038375]
We propose an event-triggered learning algorithm that decides when to learn in the face of uncertainty in the LQR problem.
We demonstrate improved performance over a robust controller baseline in a numerical example.
arXiv Detail & Related papers (2022-07-28T17:36:37Z)
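The summary does not spell out the trigger. Below is a deliberately simple stand-in, a threshold test on the gap between observed and model-predicted cost, whereas the paper derives its trigger from uncertainty bounds in the LQR setting.

```python
# Simplified stand-in for an event trigger (the threshold test is an
# assumption): gather data and re-learn the model only when observed
# cost drifts beyond what the current model explains.

def should_learn(observed_costs, predicted_cost, tolerance):
    """Trigger learning when the empirical average cost exceeds the
    model-predicted cost by more than the tolerance."""
    empirical = sum(observed_costs) / len(observed_costs)
    return empirical - predicted_cost > tolerance
```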
- Joint Differentiable Optimization and Verification for Certified Reinforcement Learning [91.93635157885055]
In model-based reinforcement learning for safety-critical control systems, it is important to formally certify system properties.
We propose a framework that jointly conducts reinforcement learning and formal verification.
arXiv Detail & Related papers (2022-01-28T16:53:56Z)
- Adaptive control of a mechatronic system using constrained residual reinforcement learning [0.0]
We propose a simple, practical and intuitive approach to improve the performance of a conventional controller in uncertain environments.
Our approach is motivated by the observation that conventional controllers in industrial motion control value robustness over adaptivity to deal with different operating conditions.
arXiv Detail & Related papers (2021-10-06T08:13:05Z)
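The residual pattern the title refers to is easy to state: the learned policy only corrects a conventional controller, and the correction is kept bounded. The sketch below uses hypothetical names (base_controller, policy, max_residual); the paper's constraint formulation may differ.

```python
import numpy as np

# Sketch of constrained residual control (bounds and interfaces are
# assumptions): a learned residual rides on top of a conventional
# controller but is clipped so the command stays near the robust baseline.

def residual_action(state, base_controller, policy, max_residual):
    """Conventional command plus a learned, bounded correction."""
    u_base = base_controller(state)                    # robust baseline
    residual = np.clip(policy(state), -max_residual, max_residual)
    return u_base + residual
```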
- Enforcing robust control guarantees within neural network policies [76.00287474159973]
We propose a generic nonlinear control policy class, parameterized by neural networks, that enforces the same provable robustness criteria as robust control.
We demonstrate the power of this approach on several domains, improving in average-case performance over existing robust control methods and in worst-case stability over (non-robust) deep RL methods.
arXiv Detail & Related papers (2020-11-16T17:14:59Z)
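One simple way to realize such a guarantee, shown below for a linear model with a known stabilizing gain K and Lyapunov matrix P (all assumptions, and a simplification of the paper's differentiable policy class), is to back the network's action off toward a certified action until a one-step Lyapunov decrease holds.

```python
import numpy as np

# Simplified stand-in for a provably robust policy class: blend the
# network action toward a certified action -K x until the Lyapunov
# decrease V(Ax + Bu) <= decay * V(x) holds, with V(x) = x^T P x.
# K and P are assumed to satisfy the decrease for u = -K x.

def V(x, P):
    """Quadratic Lyapunov function value."""
    return float(x @ P @ x)

def safe_action(x, u_nn, A, B, K, P, decay=0.99):
    """Return the least-modified action satisfying the decrease condition."""
    u_safe = -K @ x
    alpha = 1.0
    while alpha > 1e-3:
        u = alpha * u_nn + (1.0 - alpha) * u_safe
        if V(A @ x + B @ u, P) <= decay * V(x, P):
            return u
        alpha *= 0.5                      # shrink toward the certified action
    return u_safe
```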
- Learning a Contact-Adaptive Controller for Robust, Efficient Legged Locomotion [95.1825179206694]
We present a framework that synthesizes robust controllers for a quadruped robot.
A high-level controller learns to choose from a set of primitives in response to changes in the environment.
A low-level controller uses an established control method to robustly execute the primitives.
arXiv Detail & Related papers (2020-09-21T16:49:26Z)
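The two-level split described above maps naturally onto a small dispatcher. The sketch below uses illustrative interfaces (high_level_policy, primitives), not the paper's API.

```python
# Illustrative two-level controller (interfaces are assumptions): the
# high level selects a motion primitive from the observation, and the
# low level's established controller turns it into a command.

class HierarchicalController:
    def __init__(self, high_level_policy, primitives):
        self.high_level_policy = high_level_policy    # obs -> primitive id
        self.primitives = primitives                  # id -> low-level controller

    def act(self, observation):
        """High level picks a primitive; the low level executes it."""
        primitive_id = self.high_level_policy(observation)
        return self.primitives[primitive_id](observation)
```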
- Comparison of Model Predictive and Reinforcement Learning Methods for Fault Tolerant Control [2.524528674141466]
We present two adaptive fault-tolerant control schemes for a discrete time system based on hierarchical reinforcement learning.
Experiments demonstrate that reinforcement learning-based controllers perform more robustly than model predictive controllers under faults, partially observable system models, and varying sensor noise levels.
arXiv Detail & Related papers (2020-08-10T20:22:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.