Simulating Structural Plasticity of the Brain more Scalable than Expected
- URL: http://arxiv.org/abs/2210.05267v1
- Date: Tue, 11 Oct 2022 09:02:43 GMT
- Title: Simulating Structural Plasticity of the Brain more Scalable than Expected
- Authors: Fabian Czappa, Alexander Geiß and Felix Wolf
- Abstract summary: Rinke et al. introduced a scalable algorithm that simulates structural plasticity for up to one billion neurons on current hardware using a variant of the Barnes-Hut algorithm.
We show that with careful consideration of the algorithm, the theoretical runtime can even be classified as $O(n \log n)$.
- Score: 69.39201743340747
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Structural plasticity of the brain describes the creation of new and the
deletion of old synapses over time. Rinke et al. (JPDC 2018) introduced a
scalable algorithm that simulates structural plasticity for up to one billion
neurons on current hardware using a variant of the Barnes--Hut algorithm. They
demonstrate good scalability and prove a runtime complexity of $O(n \log^2 n)$.
In this comment paper, we show that with careful consideration of the
algorithm, the theoretical runtime can even be classified as $O(n \log n)$.
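To make the idea concrete, here is a minimal Python sketch of the Barnes-Hut step at the heart of the algorithm (an illustrative simplification, not the authors' MPI implementation; the names and the Gaussian kernel are assumptions): a neuron searching for a synapse partner walks a spatial tree of candidate neurons and treats distant subtrees as single aggregates, so each search touches only $O(\log n)$ cells.

```python
import math
from dataclasses import dataclass, field

THETA = 0.5  # opening criterion: descend into a cell when size/distance >= THETA

@dataclass
class Cell:
    """One node of the spatial tree over candidate neurons."""
    size: float                    # side length of the cell
    centroid: tuple                # average position of contained neurons
    n_vacant: int = 0              # vacant synaptic elements inside the cell
    children: list = field(default_factory=list)

def attraction(pos, target, sigma=1.0):
    """Gaussian kernel as an illustrative connection-probability weight."""
    return math.exp(-math.dist(pos, target) ** 2 / (2 * sigma ** 2))

def collect_weights(cell, pos, out):
    """Barnes-Hut traversal: aggregate far cells, descend into near ones,
    so one search touches O(log n) cells instead of all n candidates."""
    if cell.n_vacant == 0:
        return
    d = math.dist(pos, cell.centroid)
    if not cell.children or (d > 0 and cell.size / d < THETA):
        # Far away (or a leaf): treat the whole cell as one aggregate partner.
        out.append((cell, cell.n_vacant * attraction(pos, cell.centroid)))
    else:
        for child in cell.children:
            collect_weights(child, pos, out)
```

The comment paper's contribution is a tighter accounting of exactly this kind of traversal: the total work for all $n$ searching neurons is bounded by $O(n \log n)$ rather than $O(n \log^2 n)$.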
Related papers
- On the Complexity of Neural Computation in Superposition [3.9803704378699103]
Recent advances in the understanding of neural networks suggest that superposition is a key mechanism underlying the computational efficiency of large-scale networks.
We show a nearly tight upper bound: logical operations like pairwise AND can be computed using $O(\sqrt{m'} \log m')$ neurons and $O(m' \log^2 m')$ parameters.
Our results open a path for using complexity-theoretic techniques in neural network interpretability research.
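As a toy illustration only (this is not the paper's superposition construction), pairwise AND on Boolean inputs, the kind of logical operation whose neuron and parameter cost the bound counts, is a single ReLU neuron:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def pairwise_and(x, y):
    """AND of two {0,1} inputs as one ReLU neuron: relu(x + y - 1)."""
    return relu(x + y - 1.0)

for x in (0.0, 1.0):
    for y in (0.0, 1.0):
        print(int(x), int(y), "->", int(pairwise_and(x, y)))
```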
arXiv Detail & Related papers (2024-09-05T18:58:59Z)
- An Improved Finite-time Analysis of Temporal Difference Learning with Deep Neural Networks [11.925232472331494]
We develop an improved non-asymptotic analysis of the neural TD method with a general $L$-layer neural network.
New proof techniques are developed, and an improved sample complexity of $\tilde{\mathcal{O}}(\epsilon^{-1})$ is derived.
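A minimal numpy sketch of the method under analysis, semi-gradient neural TD(0), follows; the network shape, step size, and variable names are illustrative assumptions, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 4, 32                                  # state dimension, hidden width
W1 = rng.normal(0, 1 / np.sqrt(D), (H, D))    # first-layer weights
w2 = rng.normal(0, 1 / np.sqrt(H), H)         # output weights

def value(s):
    """Two-layer ReLU value network V(s); returns value and hidden activations."""
    h = np.maximum(W1 @ s, 0.0)
    return w2 @ h, h

def td0_step(s, r, s_next, gamma=0.99, lr=1e-2):
    """Semi-gradient TD(0): nudge V(s) toward the target r + gamma * V(s')."""
    global W1, w2
    v, h = value(s)
    v_next, _ = value(s_next)
    delta = r + gamma * v_next - v            # TD error
    grad_w2 = h                               # dV(s)/dw2
    grad_W1 = np.outer(w2 * (h > 0), s)       # dV(s)/dW1 (ReLU mask)
    w2 += lr * delta * grad_w2
    W1 += lr * delta * grad_W1
```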
arXiv Detail & Related papers (2024-05-07T05:29:55Z)
- Matching the Statistical Query Lower Bound for k-sparse Parity Problems with Stochastic Gradient Descent [83.85536329832722]
We show that stochastic gradient descent (SGD) can efficiently solve the $k$-sparse parity problem on the $d$-dimensional hypercube.
We then demonstrate that a neural network trained with SGD solves the $k$-parity problem with small statistical error.
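The learning problem is easy to state in code. Below is a hedged numpy sketch, with all hyperparameters (dimension, width, step size, batch size) chosen arbitrarily for illustration; whether and how fast plain SGD succeeds here is exactly what the paper analyzes.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, width, steps, lr = 20, 3, 256, 20_000, 0.05
support = rng.choice(d, size=k, replace=False)   # hidden parity coordinates

def sample(n):
    X = rng.choice([-1.0, 1.0], size=(n, d))     # uniform on the hypercube
    y = np.prod(X[:, support], axis=1)           # k-sparse parity label
    return X, y

# One-hidden-layer ReLU network trained with plain minibatch SGD (square loss).
W = rng.normal(0, 1 / np.sqrt(d), (width, d))
b = rng.uniform(-1, 1, width)
a = rng.choice([-1.0, 1.0], width) / width

for _ in range(steps):
    X, y = sample(32)
    Hh = np.maximum(X @ W.T + b, 0.0)            # hidden activations
    g = 2 * (Hh @ a - y) / len(y)                # square-loss gradient
    GH = (g[:, None] * a[None, :]) * (Hh > 0)    # backprop through ReLU
    W -= lr * GH.T @ X
    b -= lr * GH.sum(axis=0)
    a -= lr * Hh.T @ g

Xt, yt = sample(1000)
print("test accuracy:", np.mean(np.sign(np.maximum(Xt @ W.T + b, 0.0) @ a) == yt))
```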
arXiv Detail & Related papers (2024-04-18T17:57:53Z)
- Scalable network reconstruction in subquadratic time [0.0]
We present a general algorithm applicable to a broad range of reconstruction problems that significantly outperforms the quadratic baseline of evaluating all node pairs.
Our algorithm relies on a second neighbor search that produces the best edge candidates with high probability.
Empirically, our algorithm is many orders of magnitude faster than the quadratic baseline.
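The flavor of the candidate-generation step can be sketched in a few lines; this is an illustrative simplification in the spirit of the summary (propose only pairs that share a current neighbor), not the paper's exact algorithm.

```python
def second_neighbor_candidates(adj):
    """Propose edge candidates (u, w) where w is a neighbor of a neighbor
    of u; only pairs sharing a current neighbor are scored, avoiding
    the O(n^2) sweep over all pairs."""
    cands = set()
    for u, nbrs in adj.items():
        for v in nbrs:
            for w in adj[v]:
                if w != u and w not in nbrs:
                    cands.add((min(u, w), max(u, w)))
    return cands

# Toy usage on a path graph given as an adjacency dict.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(second_neighbor_candidates(adj))   # {(0, 2), (1, 3)}
```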
arXiv Detail & Related papers (2024-01-02T19:00:13Z)
- End-to-end complexity for simulating the Schwinger model on quantum computers [0.6449786007855248]
We propose an efficient implementation of a block-encoding of the Schwinger model Hamiltonian.
As an end-to-end application, we compute the vacuum persistence amplitude.
Our results provide insight into the expected performance of quantum computers in the fault-tolerant quantum computing (FTQC) era.
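For readers unfamiliar with the primitive: a block-encoding embeds a (sub-normalized) Hamiltonian as the top-left block of a unitary. The numpy sketch below shows a standard textbook construction on a toy matrix; it is not the paper's Schwinger-model circuit.

```python
import numpy as np

def block_encode(H):
    """Embed Hermitian H as the top-left block of a unitary:
    U = [[H/a, S], [S, -H/a]] with a = ||H|| and S = sqrt(I - (H/a)^2)."""
    a = np.linalg.norm(H, 2)                     # spectral norm (subnormalization)
    Hn = H / a
    w, V = np.linalg.eigh(np.eye(len(H)) - Hn @ Hn)
    S = (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T
    return np.block([[Hn, S], [S, -Hn]]), a

H = np.array([[1.0, 0.5], [0.5, -1.0]])          # toy Hermitian "Hamiltonian"
U, a = block_encode(H)
assert np.allclose(U @ U.conj().T, np.eye(4), atol=1e-6)   # U is unitary
assert np.allclose(a * U[:2, :2], H)                       # block recovers H
```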
arXiv Detail & Related papers (2023-11-29T06:36:11Z)
- Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon > 0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z)
- Unbiased random circuit compiler for time-dependent Hamiltonian simulation [8.694056486825318]
Time-dependent Hamiltonian simulation (TDHS) is a critical task in quantum computing.
We develop an unbiased random compiler for TDHS.
We perform numerical simulations for a spin model under the interaction picture and the adiabatic ground state preparation for molecular systems.
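Randomized compilation is easiest to see in the well-known qDRIFT protocol for a time-independent Hamiltonian; the numpy sketch below is that simpler baseline (terms sampled in proportion to their weights), not the paper's unbiased time-dependent compiler.

```python
import numpy as np

# Pauli terms of a toy one-qubit Hamiltonian H = sum_j h_j * P_j.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
terms = [(0.8, Z), (0.3, X)]

def qdrift(terms, t, n_steps, rng):
    """qDRIFT-style compilation: sample each step's generator with
    probability |h_j| / lambda and evolve it for tau = lambda * t / N."""
    lam = sum(abs(h) for h, _ in terms)
    probs = [abs(h) / lam for h, _ in terms]
    tau = lam * t / n_steps
    U = np.eye(2, dtype=complex)
    for _ in range(n_steps):
        h, P = terms[rng.choice(len(terms), p=probs)]
        theta = np.sign(h) * tau
        # exp(-i*theta*P) for a Pauli P equals cos(theta) I - i sin(theta) P.
        U = (np.cos(theta) * np.eye(2) - 1j * np.sin(theta) * P) @ U
    return U

rng = np.random.default_rng(0)
U = qdrift(terms, t=1.0, n_steps=500, rng=rng)
H = sum(h * P for h, P in terms)
w, V = np.linalg.eigh(H)
U_exact = (V * np.exp(-1j * w)) @ V.conj().T     # exact exp(-i H t) at t = 1
print(abs(np.trace(U.conj().T @ U_exact)) / 2)   # overlap with exact evolution
```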
arXiv Detail & Related papers (2022-12-19T13:40:05Z)
- Transformers Learn Shortcuts to Automata [52.015990420075944]
We find that a low-depth Transformer can represent the computations of any finite-state automaton.
We show that a Transformer with $O(\log T)$ layers can exactly replicate the computation of an automaton on an input sequence of length $T$.
We further investigate the brittleness of these solutions and propose potential mitigations.
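The $O(\log T)$ depth rests on associativity: the $T$ per-symbol transition maps can be combined by a balanced tree of compositions in $\lceil \log_2 T \rceil$ rounds. A short Python sketch of that prefix-composition idea (with transition matrices standing in for transformer layers; not the paper's construction) follows.

```python
import numpy as np

def compose_all(mats):
    """Combine T transition matrices with a balanced tree of compositions:
    depth ceil(log2 T), mirroring the O(log T) layers in the result."""
    while len(mats) > 1:
        paired = [mats[i + 1] @ mats[i] for i in range(0, len(mats) - 1, 2)]
        if len(mats) % 2:                       # odd element carries over
            paired.append(mats[-1])
        mats = paired
    return mats[0]

# Toy two-state automaton: symbol 'a' swaps the states, 'b' holds them.
M = {'a': np.array([[0, 1], [1, 0]]), 'b': np.eye(2, dtype=int)}
word = "abaab"
T = compose_all([M[c] for c in word])
state = T @ np.array([1, 0])                    # start in state 0
print(int(np.argmax(state)))                    # final state after the word
```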
arXiv Detail & Related papers (2022-10-19T17:45:48Z)
- Mixability made efficient: Fast online multiclass logistic regression [68.8204255655161]
We show that mixability can be a powerful tool to obtain algorithms with optimal regret.
However, the resulting methods often suffer from high computational complexity, which has limited their practical applicability.
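For contrast, here is the computationally cheap baseline the efficiency question is measured against: online multiclass (softmax) logistic regression updated by online gradient descent. This is an illustrative sketch, not the paper's mixability-based method.

```python
import numpy as np

class OnlineSoftmaxRegression:
    """Online multiclass logistic regression: incur log-loss on each
    arriving example, then take one gradient step."""
    def __init__(self, n_features, n_classes, lr=0.1):
        self.W = np.zeros((n_classes, n_features))
        self.lr = lr

    def predict_proba(self, x):
        z = self.W @ x
        z -= z.max()                       # numerical stability
        p = np.exp(z)
        return p / p.sum()

    def update(self, x, y):
        p = self.predict_proba(x)
        loss = -np.log(p[y] + 1e-12)       # log-loss on this round
        grad = np.outer(p, x)
        grad[y] -= x                       # gradient of the log-loss in W
        self.W -= self.lr * grad
        return loss

rng = np.random.default_rng(0)
learner = OnlineSoftmaxRegression(n_features=5, n_classes=3)
for _ in range(1000):
    x = rng.normal(size=5)
    learner.update(x, int(np.argmax(x[:3])))   # toy concept to track
```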
arXiv Detail & Related papers (2021-10-08T08:22:05Z)
- On Function Approximation in Reinforcement Learning: Optimism in the Face of Large State Spaces [208.67848059021915]
We study the exploration-exploitation tradeoff at the core of reinforcement learning.
In particular, we prove that the complexity of the function class $\mathcal{F}$ characterizes the complexity of the learning problem.
Our regret bounds are independent of the number of episodes.
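The principle of optimism is easiest to see in its simplest instance, the UCB1 rule for multi-armed bandits, sketched below; this is a deliberate simplification (bandits rather than RL with function approximation) and not the paper's algorithm.

```python
import numpy as np

def ucb1(pull, n_arms, horizon):
    """UCB1: play the arm with the highest optimistic estimate
    mean + sqrt(2 ln t / n); optimism drives exploration."""
    counts = np.zeros(n_arms)
    sums = np.zeros(n_arms)
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= n_arms:
            a = t - 1                            # play each arm once first
        else:
            a = int(np.argmax(sums / counts + np.sqrt(2 * np.log(t) / counts)))
        r = pull(a)
        counts[a] += 1
        sums[a] += r
        total += r
    return total / horizon

rng = np.random.default_rng(0)
means = [0.2, 0.5, 0.7]
print("average reward:", ucb1(lambda a: rng.binomial(1, means[a]), 3, 5000))
```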
arXiv Detail & Related papers (2020-11-09T18:32:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.