Application of hypercomplex number system in the dynamic network model
- URL: http://arxiv.org/abs/2108.02645v1
- Date: Sat, 31 Jul 2021 14:43:55 GMT
- Title: Application of hypercomplex number system in the dynamic network model
- Authors: Yuliia Boiarinova, Yakov Kalinovskiy, Dmitriy Lande
- Abstract summary: The paper proposes to use hypercomplex number systems, which make it possible to model certain network problems.
It is proposed to match the number of properties in each node to the dimension of a hypercomplex number system with the same number of basis elements.
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: In recent years, the direction of the study of networks in which connections
correspond to the mutual influences of nodes has been developed. Many works
have been devoted to the study of such complex networks, but most often they
relate to the spread of one type of activity (influence). In the process of
development of the newest technologies various mathematical models are
developed and investigated: models with thresholds, models of independent
cascades, models of distribution of epidemics, models of Markov processes.
The paper proposes to use hypercomplex number systems, a
mathematical apparatus that makes it possible to model certain network problems and solve
them at a new level, i.e., to consider a complex network with several properties
in each node. In this paper, we consider networks where the edges correspond to
the mutual influences of the nodes. It is proposed to match the number of
properties in each node to the dimension of a hypercomplex number
system (HNS) with the same number of basis elements. Each HNS corresponds to a
Cayley table, which encodes the law of multiplication of its basis elements. The
properties of the HNS allow an isomorphic transition from a densely filled
Cayley table to a sparser one, which simplifies the calculations.
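A minimal sketch (not the authors' Maple package) of the idea described above: multiplication in an HNS is fully determined by the Cayley table of its basis elements. The quaternions are used here as a concrete 4-dimensional HNS; the table layout and function names are illustrative assumptions.

```python
# Cayley table for the quaternion basis {1, i, j, k}: entry [a][b] is a pair
# (sign, index) meaning e_a * e_b = sign * e_index.
CAYLEY = [
    [(1, 0), (1, 1), (1, 2), (1, 3)],    # 1 * {1, i, j, k}
    [(1, 1), (-1, 0), (1, 3), (-1, 2)],  # i * {1, i, j, k}
    [(1, 2), (-1, 3), (-1, 0), (1, 1)],  # j * {1, i, j, k}
    [(1, 3), (1, 2), (-1, 1), (-1, 0)],  # k * {1, i, j, k}
]

def hmul(x, y, table=CAYLEY):
    """Multiply two coefficient vectors according to a Cayley table."""
    n = len(table)
    out = [0.0] * n
    for a in range(n):
        for b in range(n):
            sign, idx = table[a][b]
            out[idx] += sign * x[a] * y[b]
    return out

# i * j = k
print(hmul([0, 1, 0, 0], [0, 0, 1, 0]))  # [0.0, 0.0, 0.0, 1.0]
```

A sparser Cayley table (more zero or single-term products) directly reduces the number of additions performed in the double loop, which is the computational saving the isomorphic transition aims at.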
To model the problem using hypercomplex number systems, we offer a
specialized software package for hypercomplex computations based on the Maple
computer algebra system. Together, these tools make it easy to model a complex system with
several kinds of influence.
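To connect the pieces above, here is a minimal illustration (an assumed setup, not the paper's model or its Maple package) of a network whose nodes each carry a multi-property state encoded as an HNS element, with directed edges applying hypercomplex weights. A 2-dimensional HNS (the complex numbers, basis {1, e} with e*e = -1) keeps the example small; all names are hypothetical.

```python
# Cayley table for the 2-dimensional HNS {1, e} with e * e = -1.
CAYLEY = [[(1, 0), (1, 1)],
          [(1, 1), (-1, 0)]]

def hmul(x, y, table=CAYLEY):
    """Multiply two coefficient vectors according to the Cayley table."""
    n = len(table)
    out = [0.0] * n
    for a in range(n):
        for b in range(n):
            sign, idx = table[a][b]
            out[idx] += sign * x[a] * y[b]
    return out

def step(states, edges):
    """One synchronous update: each node accumulates the hypercomplex
    products of its in-neighbors' states with the edge weights."""
    nxt = {v: list(s) for v, s in states.items()}
    for (src, dst), w in edges.items():
        infl = hmul(w, states[src])
        nxt[dst] = [a + b for a, b in zip(nxt[dst], infl)]
    return nxt

states = {0: [1.0, 0.0], 1: [0.0, 1.0]}           # two nodes, two properties each
edges = {(0, 1): [0.5, 0.0], (1, 0): [0.0, 0.5]}  # mutual influences
print(step(states, edges))
```

Because the edge weights are themselves HNS elements, a single update mixes the several properties of a node according to the multiplication law, rather than propagating each property independently.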
Related papers
- Towards Explaining Hypercomplex Neural Networks [6.543091030789653]
Hypercomplex neural networks are gaining increasing interest in the deep learning community.
In this paper, we propose inherently interpretable PHNNs and quaternion-like networks.
We draw insights into how this unique branch of neural models operates.
arXiv Detail & Related papers (2024-03-26T17:58:07Z)
- Dealing with Collinearity in Large-Scale Linear System Identification Using Gaussian Regression [3.04585143845864]
We consider estimation of networks consisting of several interconnected dynamic systems.
We develop a strategy cast in a Bayesian regularization framework where any impulse response is seen as realization of a zero-mean Gaussian process.
We design a novel Markov chain Monte Carlo scheme able to reconstruct the impulse responses posterior by efficiently dealing with collinearity.
arXiv Detail & Related papers (2023-02-21T19:35:47Z)
- Accelerated Solutions of Coupled Phase-Field Problems using Generative Adversarial Networks [0.0]
We develop a new neural network based framework that uses encoder-decoder based conditional GeneLSTM layers to solve a system of Cahn-Hilliard microstructural equations.
We show that the trained models are mesh and scale-independent, thereby warranting application as effective neural operators.
arXiv Detail & Related papers (2022-11-22T08:32:22Z)
- Scaling up the self-optimization model by means of on-the-fly computation of weights [0.8057006406834467]
This work introduces a novel implementation of the Self-Optimization (SO) model that scales as $\mathcal{O}(N^2)$ with respect to the number of nodes $N$.
Our on-the-fly computation paves the way for investigating substantially larger system sizes, allowing for more variety and complexity in future studies.
arXiv Detail & Related papers (2022-11-03T10:51:25Z)
- Constructing Neural Network-Based Models for Simulating Dynamical Systems [59.0861954179401]
Data-driven modeling is an alternative paradigm that seeks to learn an approximation of the dynamics of a system using observations of the true system.
This paper provides a survey of the different ways to construct models of dynamical systems using neural networks.
In addition to the basic overview, we review the related literature and outline the most significant challenges from numerical simulations that this modeling paradigm must overcome.
arXiv Detail & Related papers (2021-11-02T10:51:42Z)
- Conditionally Parameterized, Discretization-Aware Neural Networks for Mesh-Based Modeling of Physical Systems [0.0]
We generalize the idea of conditional parametrization -- using trainable functions of input parameters.
We show that conditionally parameterized networks provide superior performance compared to their traditional counterparts.
A network architecture named CP-GNet is also proposed as the first deep learning model capable of standalone prediction of flows on meshes.
arXiv Detail & Related papers (2021-09-15T20:21:13Z)
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for MFC.
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
- Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDE) is an indispensable part of many branches of science as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, namely physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions -- as well as state-of-the-art numerical solvers, such as spectral solvers.
arXiv Detail & Related papers (2020-09-08T13:26:51Z)
- Continuous-in-Depth Neural Networks [107.47887213490134]
We first show that ResNets fail to be meaningful dynamical models in this richer sense.
We then demonstrate that neural network models can learn to represent continuous dynamical systems.
We introduce ContinuousNet as a continuous-in-depth generalization of ResNet architectures.
arXiv Detail & Related papers (2020-08-05T22:54:09Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Kernel and Rich Regimes in Overparametrized Models [69.40899443842443]
We show that gradient descent on overparametrized multilayer networks can induce rich implicit biases that are not RKHS norms.
We also demonstrate this transition empirically for more complex matrix factorization models and multilayer non-linear networks.
arXiv Detail & Related papers (2020-02-20T15:43:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.