FI-ODE: Certifiably Robust Forward Invariance in Neural ODEs
- URL: http://arxiv.org/abs/2210.16940v4
- Date: Fri, 22 Dec 2023 06:43:24 GMT
- Title: FI-ODE: Certifiably Robust Forward Invariance in Neural ODEs
- Authors: Yujia Huang, Ivan Dario Jimenez Rodriguez, Huan Zhang, Yuanyuan Shi,
Yisong Yue
- Abstract summary: We propose a general framework for training and provably certifying robust forward invariance in Neural ODEs.
We apply this framework to provide certified safety in robust continuous control.
In addition, we explore the generality of our framework by using it to certify adversarial robustness for image classification.
- Score: 34.762005448725226
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Forward invariance is a long-studied property in control theory that is used
to certify that a dynamical system stays within some pre-specified set of
states for all time, and also admits robustness guarantees (e.g., the
certificate holds under perturbations). We propose a general framework for
training and provably certifying robust forward invariance in Neural ODEs. We
apply this framework to provide certified safety in robust continuous control.
To our knowledge, this is the first instance of training Neural ODE policies
with such non-vacuous certified guarantees. In addition, we explore the
generality of our framework by using it to certify adversarial robustness for
image classification.
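To make the certified property concrete: a sublevel set {x : V(x) <= c} is forward invariant under dx/dt = f(x) when the vector field points into the set along the boundary, i.e., grad V(x) . f(x) < 0 wherever V(x) = c. Below is a minimal, hypothetical Python sketch of that sufficient condition on a toy linear system standing in for a Neural ODE (all function names are illustrative, not from the paper); it only samples the boundary, whereas the FI-ODE framework proves the condition over the whole set.

```python
# Minimal sketch (NOT the paper's certification procedure): numerically
# probing a Lyapunov/barrier-style sufficient condition for forward
# invariance of the sublevel set {x : V(x) <= c} under dx/dt = f(x).
# A real certificate must bound grad_V(x) . f(x) over the entire
# boundary (e.g., via Lipschitz or interval bounds), not just samples.
import numpy as np

def f(x):
    # Toy dynamics: a stable linear system standing in for a Neural ODE.
    A = np.array([[-1.0, 2.0],
                  [-2.0, -1.0]])
    return A @ x

def V(x):
    # Candidate certificate function: a simple quadratic, V(x) = ||x||^2.
    return float(x @ x)

def grad_V(x):
    return 2.0 * x

def boundary_condition_holds(c=1.0, n_samples=1000, margin=1e-6):
    """Check grad_V(x) . f(x) < 0 at sampled points of {x : V(x) = c},
    i.e., the vector field points into the set everywhere we look."""
    radius = np.sqrt(c)  # boundary of {x : ||x||^2 <= c} is a circle
    for theta in np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False):
        x = radius * np.array([np.cos(theta), np.sin(theta)])
        if grad_V(x) @ f(x) >= -margin:
            return False  # inward-flow condition violated at this point
    return True

print("inward-flow condition holds at all samples:", boundary_condition_holds())
```

On this toy system the condition holds analytically (dV/dt = 2 x^T A x = -2 ||x||^2 < 0 away from the origin), so the sampled check passes; a sound certificate must replace the sampling step with a bound over the entire boundary.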
Related papers
- FullCert: Deterministic End-to-End Certification for Training and Inference of Neural Networks [62.897993591443594]
FullCert is the first end-to-end certifier with sound, deterministic bounds.
We experimentally demonstrate FullCert's feasibility on two datasets.
arXiv Detail & Related papers (2024-06-17T13:23:52Z)
- Transfer of Safety Controllers Through Learning Deep Inverse Dynamics Model [4.7962647777554634]
Control barrier certificates have proven effective in formally guaranteeing the safety of control systems.
Designing a control barrier certificate, however, is a time-consuming and computationally expensive endeavor.
We propose a validity condition that, when met, guarantees correctness of the controller.
arXiv Detail & Related papers (2024-05-22T15:28:43Z)
- Adaptive Hierarchical Certification for Segmentation using Randomized Smoothing [87.48628403354351]
Certification for machine learning means proving that no adversarial sample can evade a model within a given range under certain conditions.
Common certification methods for segmentation use a flat set of fine-grained classes, leading to high abstain rates due to model uncertainty.
We propose a novel, more practical setting, which certifies pixels within a multi-level hierarchy, and adaptively relaxes the certification to a coarser level for unstable components.
arXiv Detail & Related papers (2024-02-13T11:59:43Z)
- Joint Differentiable Optimization and Verification for Certified Reinforcement Learning [91.93635157885055]
In model-based reinforcement learning for safety-critical control systems, it is important to formally certify system properties.
We propose a framework that jointly conducts reinforcement learning and formal verification.
arXiv Detail & Related papers (2022-01-28T16:53:56Z)
- Policy Smoothing for Provably Robust Reinforcement Learning [109.90239627115336]
We study the provable robustness of reinforcement learning against norm-bounded adversarial perturbations of the inputs.
We generate certificates guaranteeing that the total reward obtained by the smoothed policy will not fall below a certain threshold under a norm-bounded adversarial perturbation of the input (see the smoothing-certificate sketch after this list).
arXiv Detail & Related papers (2021-06-21T21:42:08Z)
- CROP: Certifying Robust Policies for Reinforcement Learning through Functional Smoothing [41.093241772796475]
We present the first framework of Certifying Robust Policies for reinforcement learning (CROP) against adversarial state perturbations.
We propose two types of robustness certification criteria: robustness of per-state actions and a lower bound on cumulative rewards.
arXiv Detail & Related papers (2021-06-17T07:58:32Z)
- Bayesian Inference with Certifiable Adversarial Robustness [25.40092314648194]
We consider adversarial training of neural networks through the lens of Bayesian learning.
We present a principled framework for adversarial training of Bayesian Neural Networks (BNNs) with certifiable guarantees.
Our method is the first to directly train certifiable BNNs, thus facilitating their use in safety-critical applications.
arXiv Detail & Related papers (2021-02-10T07:17:49Z)
- Certified Distributional Robustness on Smoothed Classifiers [27.006844966157317]
We propose the worst-case adversarial loss over input distributions as a robustness certificate.
By exploiting duality and the smoothness property, we provide an easy-to-compute upper bound as a surrogate for the certificate.
arXiv Detail & Related papers (2020-10-21T13:22:25Z)
- Generalisation Guarantees for Continual Learning with Orthogonal Gradient Descent [81.29979864862081]
In Continual Learning settings, deep neural networks are prone to Catastrophic Forgetting.
We present a theoretical framework to study Continual Learning algorithms in the Neural Tangent Kernel regime.
arXiv Detail & Related papers (2020-06-21T23:49:57Z)
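Two of the entries above (Policy Smoothing and CROP) build their certificates on randomized smoothing. As a minimal, hypothetical illustration (not the exact procedure of either paper), the standard Gaussian-smoothing result of Cohen et al. (2019) yields a closed-form L2 radius from bounds on the top-two class (or action) probabilities:

```python
# Minimal sketch of the standard randomized-smoothing certificate
# (Cohen et al., 2019) that smoothing-based certification papers such
# as Policy Smoothing and CROP build on; illustrative only.
from scipy.stats import norm

def certified_l2_radius(p_a: float, p_b: float, sigma: float) -> float:
    """L2 radius within which the smoothed prediction cannot change.

    p_a: lower bound on the probability of the top class (or action)
         under Gaussian input noise N(0, sigma^2 I).
    p_b: upper bound on the probability of the runner-up class.
    """
    if p_a <= p_b:
        return 0.0  # no certificate: top class not clearly dominant
    return 0.5 * sigma * (norm.ppf(p_a) - norm.ppf(p_b))

# Example: top action wins 90% of noisy votes, runner-up 5%, sigma = 0.25.
print(certified_l2_radius(0.90, 0.05, 0.25))  # ~0.366
```

A larger noise level sigma widens the certified radius but degrades the clean performance of the smoothed classifier or policy, a trade-off that smoothing-based methods must balance.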
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.