Scaling up the self-optimization model by means of on-the-fly
computation of weights
- URL: http://arxiv.org/abs/2211.01698v1
- Date: Thu, 3 Nov 2022 10:51:25 GMT
- Title: Scaling up the self-optimization model by means of on-the-fly
computation of weights
- Authors: Natalya Weber, Werner Koch, Tom Froese
- Abstract summary: This work introduces a novel implementation of the Self-Optimization (SO) model that scales as $\mathcal{O}\left(N^{2}\right)$ with respect to the number of nodes $N$.
Our on-the-fly computation paves the way for investigating substantially larger system sizes, allowing for more variety and complexity in future studies.
- Score: 0.8057006406834467
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Self-Optimization (SO) model is a useful computational model for
investigating self-organization in "soft" Artificial life (ALife) as it has
been shown to be general enough to model various complex adaptive systems. So
far, existing work has been done on relatively small network sizes, precluding
the investigation of novel phenomena that might emerge from the complexity
arising from large numbers of nodes interacting in interconnected networks.
This work introduces a novel implementation of the SO model that scales as
$\mathcal{O}\left(N^{2}\right)$ with respect to the number of nodes $N$, and
demonstrates the applicability of the SO model to networks with system sizes
several orders of magnitude larger than was previously investigated. Removing
the prohibitive computational cost of the naive $\mathcal{O}\left(N^{3}\right)$
algorithm, our on-the-fly computation paves the way for investigating
substantially larger system sizes, allowing for more variety and complexity in
future studies.
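Code sketch: the abstract does not spell out the algorithm, so the snippet below is only a rough, hypothetical illustration of what "on-the-fly computation of weights" can look like in a Hopfield-style SO model. Hebbian increments to the weights are rank-1 outer products of network states, so the local field they induce can be evaluated directly from the stored states instead of from an explicitly updated $N \times N$ matrix. All names, parameter values, and the update schedule below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# A minimal, hypothetical sketch (not the paper's algorithm): avoid storing the
# Hebbian-learned weight matrix by keeping the states it was built from.
rng = np.random.default_rng(0)

N = 1000                       # number of nodes (illustrative)
alpha = 1e-4                   # Hebbian learning rate (assumed value)
W0 = rng.normal(size=(N, N))   # fixed "original" couplings (illustrative)
W0 = (W0 + W0.T) / 2           # symmetric, Hopfield-style
np.fill_diagonal(W0, 0.0)

attractors = []                # converged states stored across resets

def field_at(s, i):
    """Local field h_i under W = W0 + alpha * sum_k outer(s_k, s_k),
    evaluated without ever materializing the learned N x N matrix."""
    h = W0[i] @ s                      # contribution of the fixed couplings
    for sk in attractors:              # each Hebbian increment is rank-1 ...
        h += alpha * sk[i] * (sk @ s)  # ... so its field at i is sk_i * (sk . s)
    return h

def relax(s, sweeps=10):
    """Asynchronous relaxation from state s (illustrative schedule)."""
    for _ in range(sweeps * N):
        i = rng.integers(N)
        s[i] = 1 if field_at(s, i) >= 0 else -1
    return s

# One reset of the SO loop: relax from a random state, then "learn" the attractor.
s = rng.choice([-1, 1], size=N).astype(float)
s = relax(s)
attractors.append(s.copy())    # implicit W += alpha * outer(s, s)
```

In this sketch, each stored state contributes $O(N)$ work per field evaluation (one dot product), whereas explicitly forming and multiplying the dense learned matrix would cost $O(N^{2})$; the exact bookkeeping, learning schedule, and complexity analysis in the paper may differ.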
Related papers
- Learning Multi-Index Models with Neural Networks via Mean-Field Langevin Dynamics [21.55547541297847]
We study the problem of learning multi-index models in high-dimensions using a two-layer neural network trained with the mean-field Langevin algorithm.
Under mild distributional assumptions, we characterize the effective dimension $d_{\mathrm{eff}}$ that controls both sample and computational complexity.
arXiv Detail & Related papers (2024-08-14T02:13:35Z) - Beyond Closure Models: Learning Chaotic-Systems via Physics-Informed Neural Operators [78.64101336150419]
Predicting the long-term behavior of chaotic systems is crucial for various applications such as climate modeling.
An alternative approach to such a fully-resolved simulation is using a coarse grid and then correcting its errors through a closure model.
We propose an alternative end-to-end learning approach using a physics-informed neural operator (PINO) that overcomes this limitation.
arXiv Detail & Related papers (2024-08-09T17:05:45Z) - Computational-Statistical Gaps in Gaussian Single-Index Models [77.1473134227844]
Single-Index Models are high-dimensional regression problems with planted structure.
We show that computationally efficient algorithms, both within the Statistical Query (SQ) and the Low-Degree Polynomial (LDP) framework, necessarily require $\Omega(d^{k^{\star}/2})$ samples.
arXiv Detail & Related papers (2024-03-08T18:50:19Z) - Universal Neural Functionals [67.80283995795985]
A challenging problem in many modern machine learning tasks is to process weight-space features.
Recent works have developed promising weight-space models that are equivariant to the permutation symmetries of simple feedforward networks.
This work proposes an algorithm that automatically constructs permutation equivariant models for any weight space.
arXiv Detail & Related papers (2024-02-07T20:12:27Z) - Interfacing Finite Elements with Deep Neural Operators for Fast
Multiscale Modeling of Mechanics Problems [4.280301926296439]
In this work, we explore the idea of multiscale modeling with machine learning and employ DeepONet, a neural operator, as an efficient surrogate of the expensive solver.
DeepONet is trained offline using data acquired from the fine solver for learning the underlying and possibly unknown fine-scale dynamics.
We present various benchmarks to assess accuracy and speedup, and in particular we develop a coupling algorithm for a time-dependent problem.
arXiv Detail & Related papers (2022-02-25T20:46:08Z) - On Function Approximation in Reinforcement Learning: Optimism in the
Face of Large State Spaces [208.67848059021915]
We study the exploration-exploitation tradeoff at the core of reinforcement learning.
In particular, we prove that the complexity of the function class $\mathcal{F}$ characterizes the complexity of the learning problem.
Our regret bounds are independent of the number of episodes.
arXiv Detail & Related papers (2020-11-09T18:32:22Z) - Measuring Model Complexity of Neural Networks with Curve Activation
Functions [100.98319505253797]
We propose the linear approximation neural network (LANN) to approximate a given deep model with curve activation function.
We experimentally explore the training process of neural networks and detect overfitting.
We find that the $L^{1}$ and $L^{2}$ regularizations suppress the increase of model complexity.
arXiv Detail & Related papers (2020-06-16T07:38:06Z) - Learning Gaussian Graphical Models via Multiplicative Weights [54.252053139374205]
We adapt an algorithm of Klivans and Meka based on the method of multiplicative weight updates.
The algorithm enjoys a sample complexity bound that is qualitatively similar to others in the literature.
It has a low runtime $O(mp^{2})$ in the case of $m$ samples and $p$ nodes, and can trivially be implemented in an online manner.
arXiv Detail & Related papers (2020-02-20T10:50:58Z)