A synchronization-capturing multi-scale solver to the noisy
integrate-and-fire neuron networks
- URL: http://arxiv.org/abs/2305.05915v1
- Date: Wed, 10 May 2023 05:58:57 GMT
- Title: A synchronization-capturing multi-scale solver to the noisy
integrate-and-fire neuron networks
- Authors: Ziyu Du, Yantong Xie and Zhennan Zhou
- Abstract summary: The noisy leaky integrate-and-fire (NLIF) model describes the voltage configurations of neuron networks as an interacting many-particle system at a microscopic level.
The macroscopic model fails to yield valid results when simulating highly synchronous networks with active firing events.
We propose a multi-scale solver for the NLIF networks, which inherits the low cost of the macroscopic solver and the high reliability of the microscopic solver.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The noisy leaky integrate-and-fire (NLIF) model describes the voltage
configurations of neuron networks as an interacting many-particle system at a
microscopic level. When simulating large neuron networks, solving a
coarse-grained mean-field Fokker-Planck equation for the voltage densities of
the networks at a macroscopic level serves as a feasible alternative owing to
its high efficiency and credible accuracy. However, the macroscopic model fails
to yield valid results when simulating highly synchronous networks with active
firing events. In this paper, we propose a multi-scale solver for NLIF
networks, which inherits the low cost of the macroscopic solver and the high
reliability of the microscopic solver. At each time step, the multi-scale
solver uses the macroscopic solver when the firing rate of the simulated
network is low, and switches to the microscopic solver when the firing rate
tends to blow up. Moreover, the macroscopic and microscopic solvers are
integrated with a high-precision switching algorithm to ensure the accuracy of
the multi-scale solver. The validity of the multi-scale solver is analyzed from
two perspectives: first, we provide practically sufficient conditions that
guarantee the mean-field approximation of the macroscopic model and present a
rigorous numerical analysis of the simulation errors incurred when coupling the
two solvers; second, the numerical performance of the multi-scale solver is
validated by simulating several large neuron networks, including networks with
either instantaneous or periodic input currents that prompt active firing
events over a period of time.
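To make the per-step switching rule concrete, here is a minimal, illustrative Python sketch under stated assumptions; it is not the authors' code. The callables macro_step, micro_step and estimate_firing_rate, and the threshold rate_threshold, are hypothetical placeholders standing in for the macroscopic Fokker-Planck solver, the microscopic particle solver, the firing-rate estimate, and the switching criterion described in the abstract.

    def multiscale_step(state, dt, macro_step, micro_step,
                        estimate_firing_rate, rate_threshold):
        # One time step of a hypothetical multi-scale driver. `state` holds
        # either a voltage density (macroscopic) or particle voltages
        # (microscopic); converting between the two representations is
        # delegated to the solver callables.
        rate = estimate_firing_rate(state)
        if rate < rate_threshold:
            # Low firing rate: the mean-field Fokker-Planck solver is cheap
            # and remains accurate.
            return macro_step(state, dt)
        # Firing rate tending to blow up (near-synchronous regime): fall back
        # to the reliable but expensive microscopic solver.
        return micro_step(state, dt)

    def simulate(state0, t_final, dt, **solvers):
        # Advance the network from t = 0 to t = t_final, deciding per step.
        state, t = state0, 0.0
        while t < t_final:
            state = multiscale_step(state, dt, **solvers)
            t += dt
        return state

In the paper the switch additionally involves a high-precision conversion between the density and particle representations; the sketch above folds that conversion into the solver callables.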
Related papers
- Enhancing Multiscale Simulations with Constitutive Relations-Aware Deep Operator Networks [0.7946947383637114]
Multiscale finite element computations are commended for their ability to integrate micro-structural properties into macroscopic computational analyses.
We propose a hybrid method in which we utilize deep operator networks for surrogate modeling of the microscale physics.
arXiv Detail & Related papers (2024-05-22T15:40:05Z)
- Learning-based Multi-continuum Model for Multiscale Flow Problems [24.93423649301792]
We propose a learning-based multi-continuum model to enrich the homogenized equation and improve the accuracy of the single model for multiscale problems.
Our proposed learning-based multi-continuum model can resolve multiple interacted media within each coarse grid block and describe the mass transfer among them.
arXiv Detail & Related papers (2024-03-21T02:30:56Z)
- Green, Quantized Federated Learning over Wireless Networks: An Energy-Efficient Design [68.86220939532373]
The finite precision level is captured through the use of quantized neural networks (QNNs) that quantize weights and activations in fixed-precision format.
The proposed FL framework can reduce energy consumption until convergence by up to 70% compared to a baseline FL algorithm.
arXiv Detail & Related papers (2022-07-19T16:37:24Z)
- Multi-fidelity Hierarchical Neural Processes [79.0284780825048]
Multi-fidelity surrogate modeling reduces the computational cost by fusing different simulation outputs.
We propose Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling.
We evaluate MF-HNP on epidemiology and climate modeling tasks, achieving competitive performance in terms of accuracy and uncertainty estimation.
arXiv Detail & Related papers (2022-06-10T04:54:13Z)
- Interfacing Finite Elements with Deep Neural Operators for Fast Multiscale Modeling of Mechanics Problems [4.280301926296439]
In this work, we explore the idea of multiscale modeling with machine learning and employ DeepONet, a neural operator, as an efficient surrogate of the expensive solver.
DeepONet is trained offline using data acquired from the fine solver for learning the underlying and possibly unknown fine-scale dynamics.
We present various benchmarks to assess accuracy and speedup, and in particular we develop a coupling algorithm for a time-dependent problem.
arXiv Detail & Related papers (2022-02-25T20:46:08Z)
- Using neural networks to solve the 2D Poisson equation for electric field computation in plasma fluid simulations [0.0]
The Poisson equation is critical to obtaining a self-consistent solution in plasma fluid simulations used for Hall effect thrusters and streamer discharges.
Solving the 2D Poisson equation with zero Dirichlet boundary conditions using a deep neural network is investigated.
A CNN is built to solve the same Poisson equation but in cylindrical coordinates.
Results reveal good CNN predictions with significant speedup.
arXiv Detail & Related papers (2021-09-27T14:25:10Z)
- Performance and accuracy assessments of an incompressible fluid solver coupled with a deep Convolutional Neural Network [0.0]
The resolution of the Poisson equation is usually one of the most computationally intensive steps for incompressible fluid solvers.
A CNN has been introduced to solve this equation, leading to a significant reduction in inference time.
A hybrid strategy is developed, which couples a CNN with a traditional iterative solver to ensure a user-defined accuracy level (a generic sketch of this hybrid pattern is given after this list).
arXiv Detail & Related papers (2021-09-20T08:30:29Z)
- Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDE) is an indispensable part of many branches of science as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, namely physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions -- as well as state-of-the-art numerical solvers, such as spectral solvers.
arXiv Detail & Related papers (2020-09-08T13:26:51Z)
- Combining Differentiable PDE Solvers and Graph Neural Networks for Fluid Flow Prediction [79.81193813215872]
We develop a hybrid (graph) neural network that combines a traditional graph convolutional network with an embedded differentiable fluid dynamics simulator inside the network itself.
We show that we can both generalize well to new situations and benefit from the substantial speedup of neural network CFD predictions.
arXiv Detail & Related papers (2020-07-08T21:23:19Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study distributed algorithms for large-scale AUC maximization with a deep neural network as the predictive model.
Our method requires far fewer communication rounds in theory.
Experiments on several datasets demonstrate the effectiveness of our method and corroborate the theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
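The hybrid strategy mentioned in the "Performance and accuracy assessments of an incompressible fluid solver coupled with a deep Convolutional Neural Network" entry above echoes the same idea of pairing a cheap approximate model with a reliable classical solver. Below is a minimal, generic sketch of that CNN-plus-iterative pattern, not the authors' implementation: a hypothetical cnn_guess network supplies an initial solution of a discrete 2D Poisson problem with zero Dirichlet boundary conditions, and plain Jacobi sweeps refine it until the update falls below a user-defined tolerance.

    import numpy as np

    def hybrid_poisson_solve(rhs, cnn_guess, h, tol=1e-6, max_iters=10_000):
        # Approximately solve -Laplace(u) = rhs on a uniform grid of spacing h
        # with zero Dirichlet boundary conditions. `cnn_guess(rhs)` is a
        # hypothetical network returning an initial guess; Jacobi sweeps then
        # refine it until the maximum update falls below `tol`.
        u = cnn_guess(rhs).copy()                        # network prediction
        u[0, :] = u[-1, :] = u[:, 0] = u[:, -1] = 0.0    # enforce boundaries
        for _ in range(max_iters):
            u_new = u.copy()
            # One Jacobi sweep over interior points of the 5-point stencil.
            u_new[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                                        + u[1:-1, 2:] + u[1:-1, :-2]
                                        + h * h * rhs[1:-1, 1:-1])
            if np.max(np.abs(u_new - u)) < tol:
                return u_new
            u = u_new
        return u

The better the network's initial guess, the fewer classical sweeps are needed, which is where the speedup comes from, while the iterative stage still enforces the requested accuracy level.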