Neural Continuous-Time Supermartingale Certificates
- URL: http://arxiv.org/abs/2412.17432v1
- Date: Mon, 23 Dec 2024 09:51:54 GMT
- Title: Neural Continuous-Time Supermartingale Certificates
- Authors: Grigory Neustroev, Mirco Giacobbe, Anna Lukina
- Abstract summary: We introduce for the first time a neural-certificate framework for continuous-time stochastic dynamical systems. Inspired by the success of training neural Lyapunov certificates for deterministic continuous-time systems, we propose a framework that bridges the gap between continuous-time and probabilistic neural certification.
- Score: 7.527234046228324
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce for the first time a neural-certificate framework for continuous-time stochastic dynamical systems. Autonomous learning systems in the physical world demand continuous-time reasoning, yet existing learnable certificates for probabilistic verification assume discretization of the time continuum. Inspired by the success of training neural Lyapunov certificates for deterministic continuous-time systems and neural supermartingale certificates for stochastic discrete-time systems, we propose a framework that bridges the gap between continuous-time and probabilistic neural certification for dynamical systems under complex requirements. Our method combines machine learning and symbolic reasoning to produce formally certified bounds on the probabilities that a nonlinear system satisfies specifications of reachability, avoidance, and persistence. We present both the theoretical justification and the algorithmic implementation of our framework and showcase its efficacy on popular benchmarks.
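The probability bounds the abstract refers to follow the classical supermartingale pattern; as a hedged sketch of that pattern (not the paper's exact conditions), consider an Itô diffusion and a nonnegative certificate function $V$ whose drift under the infinitesimal generator is nonpositive:

```latex
% Continuous-time stochastic dynamics (Ito diffusion)
dX_t = f(X_t)\,dt + g(X_t)\,dW_t
% Supermartingale condition: nonpositive infinitesimal generator
\mathcal{L}V(x) \;=\; \nabla V(x)^{\top} f(x)
  \;+\; \tfrac{1}{2}\operatorname{tr}\!\bigl(g(x)\,g(x)^{\top}\,\nabla^2 V(x)\bigr)
  \;\le\; 0
% Ville's inequality then yields a certified probability bound:
\Pr\Bigl[\sup_{t \ge 0} V(X_t) \ge \lambda\Bigr] \;\le\; \frac{V(x_0)}{\lambda}
```

If $\mathcal{L}V \le 0$ holds, $V(X_t)$ is a supermartingale, and Ville's inequality bounds the probability of ever reaching a region where $V \ge \lambda$ by $V(x_0)/\lambda$; certifying avoidance, reachability, or persistence then amounts to verifying such generator conditions for a learned neural $V$.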
Related papers
- Research Program: Theory of Learning in Dynamical Systems [29.121933501690805]
We argue that learnability in dynamical systems should be studied as a finite-sample question. We focus on guarantees that hold uniformly at every time step after a finite burn-in period. We show that accurate prediction can be achieved after finite observation without system identification.
arXiv Detail & Related papers (2025-12-22T14:05:31Z) - Certified Neural Approximations of Nonlinear Dynamics [52.79163248326912]
In safety-critical contexts, the use of neural approximations requires formal bounds on their closeness to the underlying system. We propose a novel, adaptive, and parallelizable verification method based on certified first-order models.
arXiv Detail & Related papers (2025-05-21T13:22:20Z) - VeRecycle: Reclaiming Guarantees from Probabilistic Certificates for Stochastic Dynamical Systems after Change [11.664652487964707]
Probabilistic neural Lyapunov certification is a powerful approach to proving safety of nonlinear dynamical systems. VeRecycle is the first framework to formally reclaim guarantees for discrete-time dynamical systems.
arXiv Detail & Related papers (2025-05-20T06:54:19Z) - Neural Contraction Metrics with Formal Guarantees for Discrete-Time Nonlinear Dynamical Systems [17.905596843865705]
Contraction metrics provide a powerful framework for analyzing stability, robustness, and convergence of various dynamical systems.
However, identifying these metrics for complex nonlinear systems remains an open challenge due to the lack of effective tools.
This paper develops verifiable contraction metrics for scalable discrete-time nonlinear systems.
arXiv Detail & Related papers (2025-04-23T21:27:32Z) - Generative System Dynamics in Recurrent Neural Networks [56.958984970518564]
We investigate the continuous-time dynamics of Recurrent Neural Networks (RNNs).
We show that skew-symmetric weight matrices are fundamental to enable stable limit cycles in both linear and nonlinear configurations.
Numerical simulations showcase how nonlinear activation functions not only maintain limit cycles, but also enhance the numerical stability of the system integration process.
arXiv Detail & Related papers (2025-04-16T10:39:43Z) - Achieving Domain-Independent Certified Robustness via Knowledge Continuity [21.993471256103085]
We present knowledge continuity, a novel definition inspired by Lipschitz continuity.
Our proposed definition yields certification guarantees that depend only on the loss function and the intermediate learned metric spaces of the neural network.
We show that knowledge continuity can be used to localize vulnerable components of a neural network.
arXiv Detail & Related papers (2024-11-03T17:37:59Z) - Learning Unstable Continuous-Time Stochastic Linear Control Systems [0.0]
We study the problem of system identification for continuous-time dynamics, based on a single finite-length state trajectory.
We present a method for estimating the possibly unstable open-loop matrix by employing properly randomized control inputs.
We establish theoretical performance guarantees showing that the estimation error decays with trajectory length, a measure of excitability, and the signal-to-noise ratio.
arXiv Detail & Related papers (2024-09-17T16:24:51Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z) - Continual Learning, Fast and Slow [75.53144246169346]
According to the Complementary Learning Systems theory, humans perform effective continual learning through two complementary systems.
We propose DualNets (for Dual Networks), a general continual learning framework comprising a fast learning system for supervised learning of specific tasks and a slow learning system for representation learning of task-agnostic general representation via Self-Supervised Learning (SSL).
We demonstrate the promising results of DualNets on a wide range of continual learning protocols, ranging from the standard offline, task-aware setting to the challenging online, task-free scenario.
arXiv Detail & Related papers (2022-09-06T10:48:45Z) - Adversarially Robust Stability Certificates can be Sample-Efficient [14.658040519472646]
We consider learning adversarially robust stability certificates for unknown nonlinear dynamical systems.
We show that the statistical cost of learning an adversarial stability certificate is equivalent, up to constant factors, to that of learning a nominal stability certificate.
arXiv Detail & Related papers (2021-12-20T17:23:31Z) - A Theoretical Overview of Neural Contraction Metrics for Learning-based Control with Guaranteed Stability [7.963506386866862]
This paper presents a neural network model of an optimal contraction metric and corresponding differential Lyapunov function.
Its innovation lies in providing formal robustness guarantees for learning-based control frameworks.
arXiv Detail & Related papers (2021-10-02T00:28:49Z) - Port-Hamiltonian Neural Networks for Learning Explicit Time-Dependent Dynamical Systems [2.6084034060847894]
Accurately learning the temporal behavior of dynamical systems requires models with well-chosen learning biases.
Recent innovations embed the Hamiltonian and Lagrangian formalisms into neural networks.
We show that the proposed port-Hamiltonian neural network can efficiently learn the dynamics of nonlinear physical systems of practical interest.
arXiv Detail & Related papers (2021-07-16T17:31:54Z) - Continual Competitive Memory: A Neural System for Online Task-Free Lifelong Learning [91.3755431537592]
We propose a novel form of unsupervised learning, continual competitive memory (CCM).
The resulting neural system is shown to offer an effective approach for combating catastrophic forgetting in online continual classification problems.
We demonstrate that the proposed CCM system not only outperforms other competitive learning neural models but also yields performance that is competitive with several modern, state-of-the-art lifelong learning approaches.
arXiv Detail & Related papers (2021-06-24T20:12:17Z) - Consistency of mechanistic causal discovery in continuous-time using
Neural ODEs [85.7910042199734]
We consider causal discovery in continuous-time for the study of dynamical systems.
We propose a causal discovery algorithm based on penalized Neural ODEs.
arXiv Detail & Related papers (2021-05-06T08:48:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.