Learning Stability Certificates from Data
- URL: http://arxiv.org/abs/2008.05952v2
- Date: Mon, 14 Sep 2020 17:47:06 GMT
- Title: Learning Stability Certificates from Data
- Authors: Nicholas M. Boffi, Stephen Tu, Nikolai Matni, Jean-Jacques E. Slotine, and Vikas Sindhwani
- Abstract summary: We develop algorithms for learning certificate functions only from trajectory data.
We convert such generalization error bounds into global stability guarantees.
We demonstrate empirically that certificates for complex dynamics can be efficiently learned.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many existing tools in nonlinear control theory for establishing stability or
safety of a dynamical system can be distilled to the construction of a
certificate function that guarantees a desired property. However, algorithms
for synthesizing certificate functions typically require a closed-form
analytical expression of the underlying dynamics, which rules out their use on
many modern robotic platforms. To circumvent this issue, we develop algorithms
for learning certificate functions only from trajectory data. We establish
bounds on the generalization error - the probability that a certificate will
not certify a new, unseen trajectory - when learning from trajectories, and we
convert such generalization error bounds into global stability guarantees. We
demonstrate empirically that certificates for complex dynamics can be
efficiently learned, and that the learned certificates can be used for
downstream tasks such as adaptive control.
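The core idea of the abstract can be illustrated with a minimal, hypothetical sketch (not the authors' code; the dynamics matrix `A`, the decrease rate `alpha`, and the quadratic parameterization are all assumptions made for illustration): fit a quadratic certificate V(x) = x'Px directly from observed transitions by asking V to decrease along every step, then estimate the violation rate on held-out trajectories, which plays the role of the empirical generalization error.

```python
# Hypothetical sketch: learn a quadratic Lyapunov-style certificate from
# trajectory data alone, without using the dynamics in the fitting step.
import numpy as np

rng = np.random.default_rng(0)
# Unknown-to-the-learner stable dynamics; non-normal, so the naive
# certificate V(x) = ||x||^2 does NOT certify it.
A = np.array([[0.9, 1.0], [0.0, 0.9]])

def rollout(x0, steps=30):
    traj = [x0]
    for _ in range(steps):
        traj.append(A @ traj[-1])
    return np.array(traj)

trajs = [rollout(rng.normal(size=2)) for _ in range(20)]
train, test = trajs[:15], trajs[15:]

def features(x):
    # Monomial basis for a symmetric 2x2 quadratic form x' P x.
    return np.stack([x[:, 0]**2, 2 * x[:, 0] * x[:, 1], x[:, 1]**2], axis=1)

def transitions(ts):
    X = np.vstack([t[:-1] for t in ts])
    Xn = np.vstack([t[1:] for t in ts])
    return X, Xn

# Fit: require V(x_next) - V(x) = -alpha * ||x||^2 on every training
# transition, solved as a linear least-squares problem in the weights of V.
alpha = 0.1
X, Xn = transitions(train)
w, *_ = np.linalg.lstsq(features(Xn) - features(X),
                        -alpha * np.sum(X**2, axis=1), rcond=None)
P = np.array([[w[0], w[1]], [w[1], w[2]]])

# Evaluate: the learned certificate should be positive definite and should
# certify the decrease condition on held-out transitions.
Xt, Xtn = transitions(test)
V = np.einsum("ij,jk,ik->i", Xt, P, Xt)
Vn = np.einsum("ij,jk,ik->i", Xtn, P, Xtn)
print("P eigenvalues:", np.linalg.eigvalsh(P))
print("held-out violation rate:", np.mean(Vn >= V))
```

The held-out violation rate is the quantity the paper's generalization bounds control: a bound on the probability that the learned certificate fails to certify a new, unseen trajectory, which is then converted into a stability guarantee.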
Related papers
- Towards Certified Unlearning for Deep Neural Networks [50.816473152067104]
Certified unlearning has been extensively studied in convex machine learning models.
We propose several techniques to bridge the gap between certified unlearning and deep neural networks (DNNs).
arXiv Detail & Related papers (2024-08-01T21:22:10Z)
- FullCert: Deterministic End-to-End Certification for Training and Inference of Neural Networks [62.897993591443594]
FullCert is the first end-to-end certifier with sound, deterministic bounds.
We experimentally demonstrate FullCert's feasibility on two datasets.
arXiv Detail & Related papers (2024-06-17T13:23:52Z)
- Self-consistent Validation for Machine Learning Electronic Structure [81.54661501506185]
The method integrates machine learning with self-consistent field methods to achieve both low validation cost and interpretability.
This, in turn, enables exploration of the model's ability with active learning and instills confidence in its integration into real-world studies.
arXiv Detail & Related papers (2024-02-15T18:41:35Z)
- Adaptive Hierarchical Certification for Segmentation using Randomized Smoothing [87.48628403354351]
Certification for machine learning proves that no adversarial sample can evade a model within a given range under certain conditions.
Common certification methods for segmentation use a flat set of fine-grained classes, leading to high abstain rates due to model uncertainty.
We propose a novel, more practical setting, which certifies pixels within a multi-level hierarchy, and adaptively relaxes the certification to a coarser level for unstable components.
arXiv Detail & Related papers (2024-02-13T11:59:43Z)
- Safe Online Dynamics Learning with Initially Unknown Models and Infeasible Safety Certificates [45.72598064481916]
This paper considers a learning-based setting with a robust safety certificate based on a control barrier function (CBF) second-order cone program.
If the control barrier function certificate is feasible, our approach leverages it to guarantee safety. Otherwise, our method explores the system dynamics to collect data and recover the feasibility of the control barrier function constraint.
arXiv Detail & Related papers (2023-11-03T14:23:57Z)
- A General Framework for Verification and Control of Dynamical Models via Certificate Synthesis [54.959571890098786]
We provide a framework to encode system specifications and define corresponding certificates.
We present an automated approach to formally synthesise controllers and certificates.
Our approach contributes to the broad field of safe learning for control, exploiting the flexibility of neural networks.
arXiv Detail & Related papers (2023-09-12T09:37:26Z)
- Certifying Out-of-Domain Generalization for Blackbox Functions [20.997611019445657]
We propose a novel certification framework given bounded distance of mean and variance of two distributions.
We experimentally validate our certification method on a number of datasets, including ImageNet.
arXiv Detail & Related papers (2022-02-03T16:47:50Z)
- Joint Differentiable Optimization and Verification for Certified Reinforcement Learning [91.93635157885055]
In model-based reinforcement learning for safety-critical control systems, it is important to formally certify system properties.
We propose a framework that jointly conducts reinforcement learning and formal verification.
arXiv Detail & Related papers (2022-01-28T16:53:56Z)
- Adversarially Robust Stability Certificates can be Sample-Efficient [14.658040519472646]
We consider learning adversarially robust stability certificates for unknown nonlinear dynamical systems.
We show that the statistical cost of learning an adversarial stability certificate is equivalent, up to constant factors, to that of learning a nominal stability certificate.
arXiv Detail & Related papers (2021-12-20T17:23:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.