Neural NID Rules
- URL: http://arxiv.org/abs/2202.06036v1
- Date: Sat, 12 Feb 2022 10:47:06 GMT
- Title: Neural NID Rules
- Authors: Luca Viano and Johanni Brea
- Abstract summary: We introduce Neural NID, a method that learns abstract object properties and relations between objects with a suitably regularized graph neural network.
We validate the greater generalization capability of Neural NID on simple benchmarks specifically designed to assess the transition dynamics learned by the model.
- Score: 2.1030878979833467
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Abstract object properties and their relations are deeply rooted in human
common sense, allowing people to predict the dynamics of the world even in
situations that are novel but governed by familiar laws of physics. Standard
machine learning models in model-based reinforcement learning are inadequate to
generalize in this way. Inspired by the classic framework of noisy
indeterministic deictic (NID) rules, we introduce here Neural NID, a method
that learns abstract object properties and relations between objects with a
suitably regularized graph neural network. We validate the greater
generalization capability of Neural NID on simple benchmarks specifically
designed to assess the transition dynamics learned by the model.
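The abstract describes the architecture only at a high level (object-centric states, pairwise relations, a "suitably regularized" graph neural network that predicts transition dynamics). The sketch below is a rough illustration of that general idea, not the authors' implementation: it is a minimal PyTorch message-passing transition model with an L1 penalty on the pairwise messages standing in for the unspecified regularization. The class names, dimensions, and choice of regularizer are illustrative assumptions.

```python
# Minimal sketch (assumption, not the Neural NID code): a relational
# message-passing model that predicts next object states, regularized so
# that pairwise interactions stay sparse and rule-like.
import torch
import torch.nn as nn

class RelationalTransitionModel(nn.Module):
    def __init__(self, obj_dim=8, hidden=64):
        super().__init__()
        # Edge model: computes a message from a (sender, receiver) pair of object states.
        self.edge_mlp = nn.Sequential(
            nn.Linear(2 * obj_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden)
        )
        # Node model: predicts each object's state change from its state and aggregated messages.
        self.node_mlp = nn.Sequential(
            nn.Linear(obj_dim + hidden, hidden), nn.ReLU(), nn.Linear(hidden, obj_dim)
        )

    def forward(self, objects):
        # objects: (batch, n_objects, obj_dim)
        b, n, d = objects.shape
        src = objects.unsqueeze(2).expand(b, n, n, d)            # sender states
        dst = objects.unsqueeze(1).expand(b, n, n, d)            # receiver states
        messages = self.edge_mlp(torch.cat([src, dst], dim=-1))  # (b, n, n, hidden)
        agg = messages.sum(dim=1)                                 # aggregate over senders
        delta = self.node_mlp(torch.cat([objects, agg], dim=-1))
        return objects + delta, messages

def loss_fn(model, objects_t, objects_t1, sparsity_weight=1e-3):
    pred_t1, messages = model(objects_t)
    prediction_loss = ((pred_t1 - objects_t1) ** 2).mean()
    # L1 penalty on messages: a plausible stand-in for the paper's regularizer,
    # encouraging sparse, deictic-style interactions between objects.
    return prediction_loss + sparsity_weight * messages.abs().mean()

# Example: one training loss for a batch of 4 scenes with 3 objects each (toy data).
model = RelationalTransitionModel(obj_dim=8)
x_t, x_t1 = torch.randn(4, 3, 8), torch.randn(4, 3, 8)
loss = loss_fn(model, x_t, x_t1)
```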
Related papers
- Predicting and explaining nonlinear material response using deep
Physically Guided Neural Networks with Internal Variables [0.0]
We use the concept of Physically Guided Neural Networks with Internal Variables (PGNNIV) to discover the laws governing nonlinear material response.
PGNNIVs make a particular use of the physics of the problem to enforce constraints on specific hidden layers.
We demonstrate that PGNNIVs are capable of predicting both internal and external variables under unseen load scenarios.
arXiv Detail & Related papers (2023-08-07T21:20:24Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Learning Neural Constitutive Laws From Motion Observations for
Generalizable PDE Dynamics [97.38308257547186]
Many NN approaches learn an end-to-end model that implicitly models both the governing PDE and material models.
We argue that the governing PDEs are often well-known and should be explicitly enforced rather than learned.
We introduce a new framework termed "Neural Constitutive Laws" (NCLaw) which utilizes a network architecture that strictly guarantees standard constitutive priors.
arXiv Detail & Related papers (2023-04-27T17:42:24Z) - ConCerNet: A Contrastive Learning Based Framework for Automated
Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of DNN-based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z) - What Can the Neural Tangent Kernel Tell Us About Adversarial Robustness? [0.0]
We study adversarial examples of trained neural networks through analytical tools afforded by recent theory advances connecting neural networks and kernel methods.
We show how NTKs can be used to generate adversarial examples in a "training-free" fashion, and demonstrate that they transfer to fool their finite-width neural net counterparts in the "lazy" regime.
arXiv Detail & Related papers (2022-10-11T16:11:48Z) - Physics Embedded Neural Network Vehicle Model and Applications in
Risk-Aware Autonomous Driving Using Latent Features [6.33280703577189]
Non-holonomic vehicle motion has been studied extensively using physics-based models.
In this paper, we seamlessly combine deep learning with a fully differentiable physics model to endow the neural network with available prior knowledge.
arXiv Detail & Related papers (2022-07-16T12:06:55Z) - Standalone Neural ODEs with Sensitivity Analysis [5.565364597145569]
This paper presents a continuous-depth neural ODE model capable of describing a full deep neural network.
We present a general formulation of the neural sensitivity problem and show how it is used in the NCG training.
Our evaluations demonstrate that our novel formulations lead to increased robustness and performance as compared to ResNet models.
arXiv Detail & Related papers (2022-05-27T12:16:53Z) - EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce EINNs, a new class of physics-informed neural networks crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressibility afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z) - Developing Constrained Neural Units Over Time [81.19349325749037]
This paper focuses on a way of defining Neural Networks that differs from the majority of existing approaches.
The structure of the neural architecture is defined by means of a special class of constraints, which also extend to the interaction with data.
The proposed theory is cast into the time domain, in which data are presented to the network in an ordered manner.
arXiv Detail & Related papers (2020-09-01T09:07:25Z) - Model-Based Robust Deep Learning: Generalizing to Natural,
Out-of-Distribution Data [104.69689574851724]
We propose a paradigm shift from perturbation-based adversarial robustness toward model-based robust deep learning.
Our objective is to provide general training algorithms that can be used to train deep neural networks to be robust against natural variation in data.
arXiv Detail & Related papers (2020-05-20T13:46:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.