Guaranteed Conformance of Neurosymbolic Models to Natural Constraints
- URL: http://arxiv.org/abs/2212.01346v8
- Date: Tue, 7 Nov 2023 14:05:10 GMT
- Title: Guaranteed Conformance of Neurosymbolic Models to Natural Constraints
- Authors: Kaustubh Sridhar, Souradeep Dutta, James Weimer, Insup Lee
- Abstract summary: In safety-critical applications, it is important that the data-driven model is conformant to established knowledge from the natural sciences.
We propose a method to guarantee this conformance.
We experimentally show that our constrained neurosymbolic models conform to specified models.
- Score: 4.598757178874836
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks have emerged as the workhorse for a large section of
robotics and control applications, especially as models for dynamical systems.
Such data-driven models are in turn used for designing and verifying autonomous
systems. They are particularly useful in modeling medical systems where data
can be leveraged to individualize treatment. In safety-critical applications,
it is important that the data-driven model is conformant to established
knowledge from the natural sciences. Such knowledge is often available or can
often be distilled into a (possibly black-box) model. For instance, an F1
racing car should conform to Newton's laws (which are encoded within a unicycle
model). In this light, we consider the following problem - given a model $M$
and a state transition dataset, we wish to best approximate the system model
while being a bounded distance away from $M$. We propose a method to guarantee
this conformance. Our first step is to distill the dataset into a few
representative samples called memories, using the idea of a growing neural gas.
Next, using these memories we partition the state space into disjoint subsets
and compute bounds that should be respected by the neural network in each
subset. This serves as a symbolic wrapper for guaranteed conformance. We argue
theoretically that this only leads to a bounded increase in approximation
error; which can be controlled by increasing the number of memories. We
experimentally show that on three case studies (Car Model, Drones, and
Artificial Pancreas), our constrained neurosymbolic models conform to specified
models (each encoding various constraints) with order-of-magnitude improvements
compared to the augmented Lagrangian and vanilla training methods. Our code can
be found at: https://github.com/kaustubhsridhar/Constrained_Models
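The symbolic wrapper described in the abstract can be illustrated with a minimal sketch: assuming the memories are already given (the paper distills them from data with a growing neural gas, which is omitted here), each state is assigned to the cell of its nearest memory, and the network's prediction is clipped to a simple ±eps band around the prior model $M$'s prediction at that memory. The function names and the symmetric band are illustrative assumptions, not the paper's exact bound computation.

```python
import numpy as np

def nearest_memory(x, memories):
    """Index of the closest memory; this defines the disjoint
    partition cell that the state x falls into."""
    return int(np.argmin(np.linalg.norm(memories - x, axis=1)))

def conformal_predict(x, nn_predict, M_predict, memories, eps):
    """Clamp the network's prediction to within eps of the prior
    model M, using bounds anchored at the cell's memory point."""
    i = nearest_memory(x, memories)
    m = M_predict(memories[i])     # prior model evaluated at the memory
    lo, hi = m - eps, m + eps      # per-cell bounds the NN must respect
    return np.clip(nn_predict(x), lo, hi)
```

Because the output is clipped cell-by-cell, conformance to $M$ holds by construction regardless of how the network was trained; finer partitions (more memories) shrink the approximation error the clamp can introduce.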
Related papers
- Harmony in Diversity: Merging Neural Networks with Canonical Correlation Analysis [17.989809995141044]
We propose CCA Merge, which is based on Canonical Correlation Analysis.
We show that CCA works significantly better than past methods when more than 2 models are merged.
arXiv Detail & Related papers (2024-07-07T14:21:04Z) - Neural Abstractions [72.42530499990028]
We present a novel method for the safety verification of nonlinear dynamical models that uses neural networks to represent abstractions of their dynamics.
We demonstrate that our approach performs comparably to the mature tool Flow* on existing benchmark nonlinear models.
arXiv Detail & Related papers (2023-01-27T12:38:09Z) - Real-to-Sim: Predicting Residual Errors of Robotic Systems with Sparse
Data using a Learning-based Unscented Kalman Filter [65.93205328894608]
We learn the residual errors between a dynamic and/or simulator model and the real robot.
We show that with the learned residual errors, we can further close the reality gap between dynamic models, simulations, and actual hardware.
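The residual idea in this summary, predicting the gap between a simulator model and the real system, can be sketched in a few lines (the actual paper uses a learning-based unscented Kalman filter; the linear least-squares fit and the toy constant-bias simulator below are stand-in assumptions):

```python
import numpy as np

# Toy data: the simulator misses a constant +0.3 bias of the real system.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 2))        # (state, input) pairs
sim_next = X[:, 0] + 0.1 * X[:, 1]           # simulator's one-step prediction
real_next = sim_next + 0.3                   # measurements from the real system

# Fit the residual (real - sim) with least squares on features [x, u, 1].
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, real_next - sim_next, rcond=None)

def corrected(x):
    """Simulator prediction plus the learned residual correction."""
    return x[0] + 0.1 * x[1] + np.append(x, 1.0) @ coef
```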
arXiv Detail & Related papers (2022-09-07T15:15:12Z) - Granger Causality using Neural Networks [8.835231777363399]
We present several new classes of models that can handle underlying non-linearity.
We show one can directly decouple lags and individual time series importance via decoupled penalties.
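The decoupled-penalty idea in this summary might look as follows: with a lag-weight tensor of shape (outputs, series, lags), one group norm is taken per input series and, separately, one per lag, so series importance and lag selection are regularized independently. The function name, tensor layout, and penalty weights are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def decoupled_penalty(W, lam_series=1.0, lam_lag=1.0):
    """Hypothetical decoupled sparsity penalty for a lag-weight tensor
    W of shape (n_outputs, n_series, n_lags)."""
    series_norms = np.sqrt((W ** 2).sum(axis=(0, 2)))  # one norm per series
    lag_norms = np.sqrt((W ** 2).sum(axis=(0, 1)))     # one norm per lag
    return lam_series * series_norms.sum() + lam_lag * lag_norms.sum()
```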
arXiv Detail & Related papers (2022-08-07T12:02:48Z) - Inverting brain grey matter models with likelihood-free inference: a
tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
arXiv Detail & Related papers (2021-11-15T09:08:27Z) - GAN Cocktail: mixing GANs without dataset access [18.664733153082146]
We tackle the problem of model merging, given two constraints that often come up in the real world.
In the first stage, we transform the weights of all the models to the same parameter space by a technique we term model rooting.
In the second stage, we merge the rooted models by averaging their weights and fine-tuning them for each specific domain, using only data generated by the original trained models.
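The second stage's weight averaging, applied once all models have been "rooted" into a shared parameter space, can be sketched as follows (the list-of-arrays model representation is an assumption for illustration; fine-tuning on generated data is omitted):

```python
import numpy as np

def average_weights(models):
    """Merge models that already live in the same parameter space
    by averaging each corresponding weight tensor. Each model is a
    list of layer-weight arrays with matching shapes."""
    return [np.mean(layer_stack, axis=0)
            for layer_stack in zip(*models)]
```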
arXiv Detail & Related papers (2021-06-07T17:59:04Z) - Contrastive Model Inversion for Data-Free Knowledge Distillation [60.08025054715192]
We propose Contrastive Model Inversion, where the data diversity is explicitly modeled as an optimizable objective.
Our main observation is that, under the constraint of the same amount of data, higher data diversity usually indicates stronger instance discrimination.
Experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet demonstrate that CMI achieves significantly superior performance when the generated data are used for knowledge distillation.
arXiv Detail & Related papers (2021-05-18T15:13:00Z) - A Twin Neural Model for Uplift [59.38563723706796]
Uplift is a particular case of conditional treatment effect modeling.
We propose a new loss function defined by leveraging a connection with the Bayesian interpretation of the relative risk.
We show our proposed method is competitive with the state-of-the-art in simulation setting and on real data from large scale randomized experiments.
arXiv Detail & Related papers (2021-05-11T16:02:39Z) - Learning physically consistent mathematical models from data using group
sparsity [2.580765958706854]
In areas like biology, high noise levels, sensor-induced correlations, and strong inter-system variability can render data-driven models nonsensical or physically inconsistent.
We show several applications from systems biology that demonstrate the benefits of enforcing $\textit{priors}$ in data-driven modeling.
arXiv Detail & Related papers (2020-12-11T14:45:38Z) - Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
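Layer-wise fusion requires aligning neurons across networks before averaging, since hidden units have no canonical order. A rough sketch: match each neuron (row) of one weight matrix to its closest counterpart in the other, then average the aligned weights. The greedy hard matching below is a stand-in assumption for the soft optimal-transport coupling the paper actually uses.

```python
import numpy as np

def fuse_layer(Wa, Wb):
    """Greedily match each neuron (row) of Wb to its closest unused
    neuron in Wa, then average the aligned weight matrices."""
    cost = np.linalg.norm(Wa[:, None, :] - Wb[None, :, :], axis=2)
    used = np.zeros(len(Wb), dtype=bool)
    perm = np.empty(len(Wa), dtype=int)
    for i in range(len(Wa)):
        j = int(np.argmin(np.where(used, np.inf, cost[i])))
        perm[i] = j
        used[j] = True
    return 0.5 * (Wa + Wb[perm])
```

On two networks that are permutations of each other, the alignment undoes the permutation and the fusion recovers the original weights.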
arXiv Detail & Related papers (2019-10-12T22:07:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated information and is not responsible for any consequences of its use.