Neural Relational Inference with Efficient Message Passing Mechanisms
- URL: http://arxiv.org/abs/2101.09486v1
- Date: Sat, 23 Jan 2021 11:27:31 GMT
- Title: Neural Relational Inference with Efficient Message Passing Mechanisms
- Authors: Siyuan Chen and Jiahai Wang and Guoqing Li
- Abstract summary: This paper introduces efficient message passing mechanisms into graph neural networks with structural prior knowledge to address these problems.
A relation interaction mechanism is proposed to capture the coexistence of all relations, and a spatio-temporal message passing mechanism is proposed to use historical information to alleviate error accumulation.
- Score: 10.329082213561785
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many complex processes can be viewed as dynamical systems of interacting
agents. In many cases, only the state sequences of individual agents are
observed, while the interacting relations and the dynamical rules are unknown.
The neural relational inference (NRI) model adopts graph neural networks that
pass messages over a latent graph to jointly learn the relations and the
dynamics based on the observed data. However, NRI infers the relations
independently and suffers from error accumulation in multi-step prediction during
the dynamics learning procedure. Moreover, relation reconstruction without prior
knowledge becomes more difficult in more complex systems. This paper introduces
efficient message passing mechanisms into graph neural networks with
structural prior knowledge to address these problems. A relation interaction
mechanism is proposed to capture the coexistence of all relations, and a
spatio-temporal message passing mechanism is proposed to use historical
information to alleviate error accumulation. Additionally, the structural prior
knowledge, symmetry as a special case, is introduced for better relation
prediction in more complex systems. The experimental results on simulated
physics systems show that the proposed method outperforms existing
state-of-the-art methods.
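To make the abstract's two mechanisms more concrete, the sketch below shows one decoding step of message passing over a latent relation graph: edge messages are weighted by the inferred relation types, and a recurrent per-node state carries historical information across prediction steps. This is a minimal illustration under assumed names and shapes (`LatentGraphDecoder`, `rel_types`, `senders`, `receivers`), not the authors' implementation of the relation interaction or spatio-temporal mechanisms.

```python
import torch
import torch.nn as nn

class LatentGraphDecoder(nn.Module):
    """Illustrative sketch of message passing over a latent relation graph.

    Edge messages are conditioned on inferred relation types, and a GRU cell
    keeps a per-node hidden state so that each prediction step can draw on
    historical information rather than on the current observation alone
    (a simplified stand-in for the paper's spatio-temporal message passing).
    """

    def __init__(self, n_state, n_hidden, n_rel_types):
        super().__init__()
        # one message function per relation type
        self.edge_mlps = nn.ModuleList(
            nn.Sequential(nn.Linear(2 * n_hidden, n_hidden), nn.ReLU())
            for _ in range(n_rel_types)
        )
        self.node_gru = nn.GRUCell(n_hidden + n_state, n_hidden)
        self.out = nn.Linear(n_hidden, n_state)  # predicts the state change

    def forward(self, x, hidden, rel_types, senders, receivers):
        # x:         [n_nodes, n_state]      current observed states
        # hidden:    [n_nodes, n_hidden]     recurrent node states (history)
        # rel_types: [n_edges, n_rel_types]  soft relation assignments per edge
        # senders/receivers: [n_edges] index tensors defining the latent graph
        h_edge = torch.cat([hidden[senders], hidden[receivers]], dim=-1)
        msgs = sum(rel_types[:, k:k + 1] * mlp(h_edge)
                   for k, mlp in enumerate(self.edge_mlps))
        # aggregate incoming messages at each receiver node
        agg = torch.zeros_like(hidden).index_add_(0, receivers, msgs)
        hidden = self.node_gru(torch.cat([agg, x], dim=-1), hidden)
        return x + self.out(hidden), hidden  # next-state prediction, new history
```

Rolling the recurrent hidden state forward instead of re-encoding only the current observation is the kind of design the abstract credits with alleviating error accumulation in multi-step prediction; a symmetry prior could, for example, additionally tie the relation prediction for edge (i, j) to that of (j, i).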
Related papers
- Online Multi-modal Root Cause Analysis [61.94987309148539]
Root Cause Analysis (RCA) is essential for pinpointing the root causes of failures in microservice systems.
Existing online RCA methods handle only single-modal data, overlooking complex interactions in multi-modal systems.
We introduce OCEAN, a novel online multi-modal causal structure learning method for root cause localization.
arXiv Detail & Related papers (2024-10-13T21:47:36Z) - Response Estimation and System Identification of Dynamical Systems via Physics-Informed Neural Networks [0.0]
This paper explores the use of Physics-Informed Neural Networks (PINNs) for the identification and estimation of dynamical systems.
PINNs offer a unique advantage by embedding known physical laws directly into the neural network's loss function, making it straightforward to incorporate complex phenomena (a minimal loss sketch is given after this list).
The results demonstrate that PINNs deliver an efficient tool across all aforementioned tasks, even in the presence of modelling errors.
arXiv Detail & Related papers (2024-10-02T08:58:30Z) - Inferring the time-varying coupling of dynamical systems with temporal convolutional autoencoders [0.0]
We introduce Temporal Autoencoders for Causal Inference (TACI)
TACI combines a new surrogate data metric for assessing causal interactions with a novel two-headed machine learning architecture.
We demonstrate TACI's ability to accurately quantify dynamic causal interactions across a variety of systems.
arXiv Detail & Related papers (2024-06-05T12:51:20Z) - Inferring Relational Potentials in Interacting Systems [56.498417950856904]
We propose Neural Interaction Inference with Potentials (NIIP) as an alternative approach to discover such interactions.
NIIP assigns low energy to the subset of trajectories which respect the relational constraints observed.
It allows trajectory manipulation, such as interchanging interaction types across separately trained models, as well as trajectory forecasting.
arXiv Detail & Related papers (2023-10-23T00:44:17Z) - Inferring dynamic regulatory interaction graphs from time series data
with perturbations [14.935318448625718]
We propose Regulatory Temporal Interaction Network Inference (RiTINI) for inferring time-varying interaction graphs in complex systems.
RiTINI uses a novel combination of space-and-time graph attentions and graph neural ordinary differential equations (ODEs).
We evaluate RiTINI's performance on various simulated and real-world datasets.
arXiv Detail & Related papers (2023-06-13T14:25:26Z) - GDBN: a Graph Neural Network Approach to Dynamic Bayesian Network [7.876789380671075]
We propose a graph neural network approach with a score-based method aiming at learning a sparse DAG.
We demonstrate that methods based on graph neural networks significantly outperform other state-of-the-art methods in dynamic Bayesian network inference.
arXiv Detail & Related papers (2023-01-28T02:49:13Z) - Learning Heterogeneous Interaction Strengths by Trajectory Prediction
with Graph Neural Network [0.0]
We propose the relational attentive inference network (RAIN) to infer continuously weighted interaction graphs without any ground-truth interaction strengths.
We show that our RAIN model with the PA mechanism accurately infers continuous interaction strengths for simulated physical systems in an unsupervised manner.
arXiv Detail & Related papers (2022-08-28T09:13:33Z) - Leveraging the structure of dynamical systems for data-driven modeling [111.45324708884813]
We consider the impact of the training set and its structure on the quality of the long-term prediction.
We show how an informed design of the training set, based on invariants of the system and the structure of the underlying attractor, significantly improves the resulting models.
arXiv Detail & Related papers (2021-12-15T20:09:20Z) - Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z) - Learning Contact Dynamics using Physically Structured Neural Networks [81.73947303886753]
We use connections between deep neural networks and differential equations to design a family of deep network architectures for representing contact dynamics between objects.
We show that these networks can learn discontinuous contact events in a data-efficient manner from noisy observations.
Our results indicate that an idealised form of touch feedback is a key component of making this learning problem tractable.
arXiv Detail & Related papers (2021-02-22T17:33:51Z) - Developing Constrained Neural Units Over Time [81.19349325749037]
This paper focuses on an alternative way of defining Neural Networks that is different from the majority of existing approaches.
The structure of the neural architecture is defined by means of a special class of constraints that also extend to the interaction with data.
The proposed theory is cast into the time domain, in which data are presented to the network in an ordered manner.
arXiv Detail & Related papers (2020-09-01T09:07:25Z)
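As a concrete illustration of the Physics-Informed Neural Network entry above (the sketch referenced there), the snippet below combines a data-fitting term with the residual of an assumed physical law in one loss function. The damped-oscillator ODE, the network size, and the parameter values are illustrative assumptions, not the setup of the cited paper.

```python
import torch
import torch.nn as nn

# Minimal PINN loss sketch: the network u(t) is trained both to fit observed
# data and to satisfy an assumed physical law, here a damped oscillator
#   u'' + c*u' + k*u = 0   (illustrative choice, not the cited paper's system).
net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
c, k = 0.1, 1.0  # assumed known physical parameters

def pinn_loss(t_data, u_data, t_colloc):
    # data term: match the (possibly noisy) observations at measured times
    data_loss = ((net(t_data) - u_data) ** 2).mean()

    # physics term: ODE residual at collocation points with no measurements
    t = t_colloc.clone().requires_grad_(True)
    u = net(t)
    du = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, t, torch.ones_like(du), create_graph=True)[0]
    physics_loss = ((d2u + c * du + k * u) ** 2).mean()

    return data_loss + physics_loss  # relative weighting omitted for brevity
```

Because the physics residual is evaluated at collocation points where no measurements exist, it acts as a regulariser that keeps the fit consistent with the governing equation, which helps explain the robustness to noise and modelling error reported in the entry above.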
This list is automatically generated from the titles and abstracts of the papers on this site.