On the Forward Invariance of Neural ODEs
- URL: http://arxiv.org/abs/2210.04763v2
- Date: Wed, 31 May 2023 16:03:26 GMT
- Title: On the Forward Invariance of Neural ODEs
- Authors: Wei Xiao and Tsun-Hsuan Wang and Ramin Hasani and Mathias Lechner and
Yutong Ban and Chuang Gan and Daniela Rus
- Abstract summary: We propose a new method to ensure neural ordinary differential equations (ODEs) satisfy output specifications.
Our approach uses a class of control barrier functions to transform output specifications into constraints on the parameters and inputs of the learning system.
- Score: 92.07281135902922
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new method to ensure neural ordinary differential equations
(ODEs) satisfy output specifications by using invariance set propagation. Our
approach uses a class of control barrier functions to transform output
specifications into constraints on the parameters and inputs of the learning
system. This setup allows us to achieve output specification guarantees simply
by changing the constrained parameters/inputs both during training and
inference. Moreover, we demonstrate that our invariance set propagation through
data-controlled neural ODEs not only maintains generalization performance but
also creates an additional degree of robustness by enabling causal manipulation
of the system's parameters/inputs. We test our method on a series of
representation learning tasks, including modeling physical dynamics and
convexity portraits, as well as safe collision avoidance for autonomous
vehicles.
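As a rough illustration of the idea (a minimal sketch, not the authors' implementation): take a data-controlled ODE that is affine in its input, encode the output specification as a barrier function h(x) >= 0, and enforce the control-barrier-function condition dh/dt + alpha*h >= 0 by minimally correcting the input at every step. The closed-form single-constraint projection below stands in for the constrained-optimization step; all names (A, B, alpha, cbf_correct) are illustrative.

```python
# Minimal sketch (not the authors' code) of enforcing forward invariance of a
# neural ODE via a control-barrier-function (CBF) condition on its input.
# Assumptions: dynamics affine in the input u, specification h(x) = r^2 - ||x||^2 >= 0
# (stay inside a ball), and a pointwise closed-form correction of u.
import numpy as np

rng = np.random.default_rng(0)
A = 0.1 * rng.standard_normal((2, 2))    # stand-in for trained parameters
B = np.eye(2)
alpha, r = 5.0, 1.0                      # class-K gain and ball radius

def h(x):                                # barrier: h >= 0 defines the safe set
    return r**2 - x @ x

def f(x, u):                             # data-controlled dynamics dx/dt
    return A @ x + B @ u

def cbf_correct(x, u):
    """Minimally modify u so that dh/dt + alpha*h >= 0 (forward invariance)."""
    a = -2.0 * (B.T @ x)                 # gradient of h composed with B
    b = 2.0 * x @ (A @ x) - alpha * h(x) # constraint: a . u >= b
    gap = b - a @ u
    if gap <= 0 or a @ a == 0:
        return u                         # CBF condition already satisfied
    return u + a * gap / (a @ a)         # closed-form projection onto the constraint

# Forward-Euler rollout with the invariance-preserving input correction.
x, dt, h_min = np.array([0.5, 0.0]), 0.01, np.inf
for _ in range(1000):
    u = np.array([1.0, 1.0])             # nominal (e.g. learned) input
    x = x + dt * f(x, cbf_correct(x, u))
    h_min = min(h_min, h(x))
print("min h(x) along trajectory:", h_min)  # stays >= 0 up to O(dt) Euler error
```

In the paper, the same kind of condition is translated into constraints on the parameters as well as the inputs, and is applied during both training and inference.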
Related papers
- Non-Stationary Learning of Neural Networks with Automatic Soft Parameter Reset [98.52916361979503]
We introduce a novel learning approach that automatically models and adapts to non-stationarity.
We show empirically that our approach performs well in non-stationary supervised and off-policy reinforcement learning settings.
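The summary does not spell out the update rule; purely as a generic illustration of what a "soft parameter reset" could look like (an assumption, not the paper's method), one can interpolate the current weights toward a freshly drawn initialization with a small, possibly adapted, rate:

```python
# Generic illustration of a soft parameter reset (an assumption, not the paper's
# actual update): shrink current weights toward a freshly drawn initialization.
import numpy as np

def soft_reset(params, init_sampler, rate):
    """Convex combination of current parameters and a fresh initialization."""
    return {k: (1.0 - rate) * v + rate * init_sampler(v.shape) for k, v in params.items()}

rng = np.random.default_rng(0)
params = {"w": rng.standard_normal((4, 4)), "b": np.zeros(4)}
params = soft_reset(params, lambda shape: 0.1 * rng.standard_normal(shape), rate=0.05)
```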
arXiv Detail & Related papers (2024-11-06T16:32:40Z)
- Statistical learning for constrained functional parameters in infinite-dimensional models with applications in fair machine learning [4.974815773537217]
We study the general problem of constrained statistical machine learning through a statistical functional lens.
We characterize the constrained functional parameter as the minimizer of a penalized risk criterion using a Lagrange multiplier formulation.
Our results suggest natural estimators of the constrained parameter that can be constructed by combining estimates of unconstrained parameters.
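Schematically, and in our own notation (R is the risk, Phi the constraint functional, lambda the Lagrange multiplier), the penalized criterion reads:

```latex
\theta^{*}_{\lambda} \;=\; \arg\min_{\theta \in \Theta}\; R(\theta) \;+\; \lambda\, \Phi(\theta)
```

with the multiplier chosen so that the constraint on Phi is met; estimates of the unconstrained parameters can then be combined to estimate the constrained one, as the summary describes.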
arXiv Detail & Related papers (2024-04-15T14:59:21Z)
- Boosting Inference Efficiency: Unleashing the Power of Parameter-Shared Pre-trained Language Models [109.06052781040916]
We introduce a technique to enhance the inference efficiency of parameter-shared language models.
We also propose a simple pre-training technique that leads to fully or partially shared models.
Results demonstrate the effectiveness of our methods on both autoregressive and autoencoding PLMs.
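As a minimal sketch of the general idea of parameter sharing (illustrative only, not this paper's pre-training or inference technique), reusing one block's weights across every layer keeps depth from multiplying the parameter count:

```python
# Minimal sketch of cross-layer parameter sharing: a single block's weights are
# reused for every "layer", so depth does not multiply the parameter count.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 16)) / 4.0   # one shared weight matrix
b = np.zeros(16)

def shared_stack(x, num_layers=12):
    for _ in range(num_layers):            # same W, b applied at every layer
        x = np.tanh(x @ W + b)
    return x

y = shared_stack(rng.standard_normal((2, 16)))
```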
arXiv Detail & Related papers (2023-10-19T15:13:58Z)
- Variational Autoencoding Neural Operators [17.812064311297117]
Unsupervised learning with functional data is an emerging paradigm of machine learning research with applications to computer vision, climate modeling and physical systems.
We present Variational Autoencoding Neural Operators (VANO), a general strategy for making a large class of operator learning architectures act as variational autoencoders.
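One way to read "operator learning as a variational autoencoder" (a hedged sketch with made-up shapes and names, not VANO's architecture): encode a discretized input function into a latent Gaussian, and let the decoder map a latent together with a query coordinate to a function value, so the decoded object can be evaluated anywhere in the domain.

```python
# Illustrative sketch only: encoder maps a sampled function to a latent Gaussian;
# decoder maps (latent, coordinate) to a value, i.e. decoding yields a function.
import numpy as np

rng = np.random.default_rng(0)
d_lat, n_sens = 8, 32
W_enc = rng.standard_normal((n_sens, 2 * d_lat)) / np.sqrt(n_sens)
W_dec = rng.standard_normal((d_lat + 1, 64)) / np.sqrt(d_lat + 1)
w_out = rng.standard_normal(64) / 8.0

def encode(u_vals):                       # u_vals: function sampled at n_sens points
    stats = u_vals @ W_enc
    return stats[:d_lat], stats[d_lat:]   # mean, log-variance

def decode(z, x):                         # query the decoded function at coordinate x
    return np.tanh(np.concatenate([z, [x]]) @ W_dec) @ w_out

mu, logvar = encode(np.sin(np.linspace(0.0, np.pi, n_sens)))
z = mu + np.exp(0.5 * logvar) * rng.standard_normal(d_lat)   # reparameterization
print(decode(z, 0.25), decode(z, 0.75))   # evaluate at arbitrary coordinates
```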
arXiv Detail & Related papers (2023-02-20T22:34:43Z)
- InteL-VAEs: Adding Inductive Biases to Variational Auto-Encoders via Intermediary Latents [60.785317191131284]
We introduce a simple and effective method for learning VAEs with controllable biases by using an intermediary set of latent variables.
In particular, it allows us to impose desired properties like sparsity or clustering on learned representations.
We show that this, in turn, allows InteL-VAEs to learn both better generative models and representations.
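A schematic of the intermediary-latent idea (our illustration, not InteL-VAE's exact mapping): sample from the encoder's Gaussian as usual, then push the sample through a deterministic map that imposes the desired property, here sparsity via soft-thresholding, before decoding.

```python
# Schematic only: an intermediary map g turns the Gaussian sample z into a
# sparse latent before it reaches the decoder.
import numpy as np

def g_sparse(z, tau=0.5):
    """Intermediary map: soft-threshold so most coordinates become exactly zero."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

rng = np.random.default_rng(0)
mu, logvar = rng.standard_normal(16), -np.ones(16)
z = mu + np.exp(0.5 * logvar) * rng.standard_normal(16)   # reparameterized sample
z_star = g_sparse(z)                                       # sparse intermediary latent
# z_star (rather than z) would be fed to the decoder during training and inference.
```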
arXiv Detail & Related papers (2021-06-25T16:34:05Z)
- Variational Inference MPC using Tsallis Divergence [10.013572514839082]
We provide a framework for Variational Inference-Stochastic Optimal Control using the non-extensive Tsallis divergence.
A novel Tsallis Variational Inference-Model Predictive Control algorithm is derived.
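For reference, one common form of the non-extensive Tsallis relative entropy between densities p and pi, with entropic index q, is (the paper's exact convention may differ):

```latex
D_{q}\!\left(p \,\|\, \pi\right)
  \;=\; \frac{1}{q-1}\left( \int p(x)^{\,q}\, \pi(x)^{\,1-q}\, dx \;-\; 1 \right),
  \qquad q > 0,\; q \neq 1,
```

which recovers the KL divergence in the limit q -> 1.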
arXiv Detail & Related papers (2021-04-01T04:00:49Z)
- Meta-Solver for Neural Ordinary Differential Equations [77.8918415523446]
We investigate how variability in the space of ODE solvers can improve the performance of neural ODEs.
We show that the right choice of solver parameterization can significantly affect the robustness of neural ODE models to adversarial attacks.
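One concrete, textbook instance of a solver parameterization (not necessarily the family studied in the paper): the explicit two-stage Runge-Kutta methods of order two form a one-parameter family, so the solver itself exposes a tunable knob alpha.

```python
# Standard one-parameter family of explicit second-order Runge-Kutta steps:
# alpha = 0.5 is the midpoint rule, alpha = 1.0 is Heun's method.
import numpy as np

def rk2_step(f, y, h, alpha):
    k1 = f(y)
    k2 = f(y + alpha * h * k1)
    return y + h * ((1.0 - 1.0 / (2.0 * alpha)) * k1 + (1.0 / (2.0 * alpha)) * k2)

f = lambda y: -y                      # stand-in for a learned vector field
for alpha in (0.5, 1.0):
    y = np.array([1.0])
    for _ in range(100):
        y = rk2_step(f, y, h=0.01, alpha=alpha)
    print(alpha, y, np.exp(-1.0))     # both approximate exp(-1) at t = 1
```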
arXiv Detail & Related papers (2021-03-15T17:26:34Z)
- Neural Control Variates [71.42768823631918]
We show that a set of neural networks can face the challenge of finding a good approximation of the integrand.
We derive a theoretically optimal, variance-minimizing loss function, and propose an alternative, composite loss for stable online training in practice.
Specifically, we show that the learned light-field approximation is of sufficient quality for high-order bounces, allowing us to omit the error correction and thereby dramatically reduce the noise at the cost of negligible visible bias.
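The control-variate identity underlying this is standard; with g the learned approximation of the integrand f and G its integral,

```latex
F \;=\; \int f(x)\,dx \;=\; \int \bigl(f(x) - g(x)\bigr)\,dx \;+\; G,
\qquad G := \int g(x)\,dx,
```

so Monte Carlo only has to estimate the residual f - g, whose variance shrinks as g improves.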
arXiv Detail & Related papers (2020-06-02T11:17:55Z)
- Constrained Neural Ordinary Differential Equations with Stability Guarantees [1.1086440815804224]
We show how to model discrete ordinary differential equations with algebraic nonlinearities as deep neural networks.
We derive stability guarantees for the network layers from the implicit constraints imposed on the weights' eigenvalues.
We demonstrate the prediction accuracy of learned neural ODEs evaluated on open-loop simulations.
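A minimal illustration of one way to constrain a weight matrix's eigenvalues (spectral-norm rescaling; the paper's parameterization may differ):

```python
# Rescale a weight matrix by its largest singular value, which upper-bounds
# every eigenvalue magnitude, so the discrete update x <- W x cannot blow up.
import numpy as np

def constrain_spectrum(W, max_sv=0.99):
    s_max = np.linalg.norm(W, 2)                # largest singular value
    return W if s_max <= max_sv else W * (max_sv / s_max)

rng = np.random.default_rng(0)
W = constrain_spectrum(rng.standard_normal((8, 8)))
print(np.max(np.abs(np.linalg.eigvals(W))))     # <= 0.99 by construction
```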
arXiv Detail & Related papers (2020-04-22T22:07:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides (including all listed content) and is not responsible for any consequences of its use.