Collective Learning by Ensembles of Altruistic Diversifying Neural
Networks
- URL: http://arxiv.org/abs/2006.11671v1
- Date: Sat, 20 Jun 2020 22:53:32 GMT
- Title: Collective Learning by Ensembles of Altruistic Diversifying Neural
Networks
- Authors: Benjamin Brazowski and Elad Schneidman
- Abstract summary: We present a model for co-learning by ensembles of interacting neural networks that aim to maximize their own performance but also their functional relations to other networks.
We show that ensembles of interacting networks outperform independent ones, and that optimal ensemble performance is reached when the coupling between networks increases diversity and degrades the performance of individual networks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Combining the predictions of collections of neural networks often outperforms
the best single network. Such ensembles are typically trained independently,
and their superior `wisdom of the crowd' originates from the differences
between networks. Collective foraging and decision making in socially
interacting animal groups is often improved or even optimal thanks to local
information sharing between conspecifics. We therefore present a model for
co-learning by ensembles of interacting neural networks that aim to maximize
their own performance but also their functional relations to other networks. We
show that ensembles of interacting networks outperform independent ones, and
that optimal ensemble performance is reached when the coupling between networks
increases diversity and degrades the performance of individual networks. Thus,
even without a global goal for the ensemble, optimal collective behavior
emerges from local interactions between networks. We show the scaling of
optimal coupling strength with ensemble size, and that networks in these
ensembles specialize functionally and become more `confident' in their
assessments. Moreover, optimal co-learning networks differ structurally,
relying on sparser activity, a wider range of synaptic weights, and higher
firing rates - compared to independently trained networks. Finally, we explore
interactions-based co-learning as a framework for expanding and boosting
ensembles.
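As a rough illustration of the co-learning objective described above, here is a minimal sketch in which each ensemble member minimizes its own task loss plus a coupling term on its similarity to the other members. The function name, the cosine-similarity coupling, and the sign convention are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative co-learning losses for an interacting ensemble (hypothetical
# names and coupling form; not the paper's exact objective).
import torch
import torch.nn.functional as F

def co_learning_losses(logits, targets, coupling=0.1):
    """Return one loss per ensemble member.

    logits   -- list of [batch, n_classes] tensors, one per member
    coupling -- > 0 penalizes similarity to other members, promoting diversity
    """
    probs = [F.softmax(l, dim=1) for l in logits]
    losses = []
    for i, logit in enumerate(logits):
        task = F.cross_entropy(logit, targets)  # each member's own objective
        # mean similarity of member i's predictions to every other member's
        sims = [F.cosine_similarity(probs[i], probs[j], dim=1).mean()
                for j in range(len(logits)) if j != i]
        losses.append(task + coupling * torch.stack(sims).mean())
    return losses
```

With `coupling > 0`, each member is rewarded for disagreeing with the rest, mirroring the paper's finding that the best ensembles arise when the interaction increases diversity at some cost to individual accuracy.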
Related papers
- Neural Subnetwork Ensembles [2.44755919161855]
This dissertation introduces and formalizes a low-cost framework for constructing Subnetwork Ensembles.
Child networks are formed by sampling, perturbing, and optimizing subnetworks from a trained parent model.
Our findings reveal that this approach can greatly improve training efficiency, parametric utilization, and generalization performance.
arXiv Detail & Related papers (2023-11-23T17:01:16Z)
- Hierarchical Multi-Marginal Optimal Transport for Network Alignment [52.206006379563306]
Multi-network alignment is an essential prerequisite for joint learning on multiple networks.
We propose a hierarchical multi-marginal optimal transport framework named HOT for multi-network alignment.
Our proposed HOT achieves significant improvements over the state-of-the-art in both effectiveness and scalability.
arXiv Detail & Related papers (2023-10-06T02:35:35Z)
- Joint Training of Deep Ensembles Fails Due to Learner Collusion [61.557412796012535]
Ensembles of machine learning models have been well established as a powerful method of improving performance over a single model.
Traditionally, ensembling algorithms train their base learners independently or sequentially with the goal of optimizing their joint performance.
Surprisingly, however, directly minimizing the loss of the ensemble appears to be rarely applied in practice.
arXiv Detail & Related papers (2023-01-26T18:58:07Z)
- Reward-Sharing Relational Networks in Multi-Agent Reinforcement Learning as a Framework for Emergent Behavior [0.0]
We integrate `social' interactions into the MARL setup through a user-defined relational network.
We examine the effects of agent-agent relations on the rise of emergent behaviors.
arXiv Detail & Related papers (2022-07-12T23:27:42Z)
- Learning distinct features helps, provably [98.78384185493624]
We study the diversity of the features learned by a two-layer neural network trained with the least squares loss.
We measure the diversity by the average $L_2$-distance between the hidden-layer features (see the sketch after this entry).
arXiv Detail & Related papers (2021-06-10T19:14:45Z)
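A small sketch of the diversity measure mentioned in the entry above, assuming the hidden-layer features are stacked into a single tensor (the function name is hypothetical):

```python
# Hypothetical helper computing the diversity measure described above.
import torch

def feature_diversity(hidden):
    """Average pairwise L2 distance between hidden-layer feature vectors.

    hidden -- [n_units, dim] tensor, one row per feature vector
    """
    dists = torch.cdist(hidden, hidden, p=2)  # all pairwise L2 distances
    n = hidden.shape[0]
    return dists.sum() / (n * (n - 1))        # mean over distinct pairs
```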
- The Impact of Network Connectivity on Collective Learning [1.370633147306388]
In decentralised autonomous systems, it is the interactions between individual agents which govern the collective behaviours of the system.
In this paper we investigate the impact that the underlying network has on performance in the context of collective learning.
arXiv Detail & Related papers (2021-06-01T17:39:26Z)
- Competing Adaptive Networks [56.56653763124104]
We develop an algorithm for decentralized competition among teams of adaptive agents.
We present an application in the decentralized training of generative adversarial neural networks.
arXiv Detail & Related papers (2021-03-29T14:42:15Z)
- Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks [50.684661759340145]
Firefly neural architecture descent is a general framework for progressively and dynamically growing neural networks.
We show that firefly descent can flexibly grow networks both wider and deeper, and can be applied to learn accurate but resource-efficient neural architectures.
In particular, it learns networks that are smaller in size but have higher average accuracy than those learned by the state-of-the-art methods.
arXiv Detail & Related papers (2021-02-17T04:47:18Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks (see the sketch after this entry).
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
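A rough sketch of the edge-weighting idea from the entry above: a learnable scalar per edge of a complete graph over node outputs, optimized by gradient descent together with the rest of the network. The class and its parameterization are assumptions for illustration, not the paper's code.

```python
# Illustrative sketch only: learnable edge weights over a complete DAG of
# blocks, trained jointly with the block parameters (not the paper's code).
import torch
import torch.nn as nn

class LearnableConnectivity(nn.Module):
    def __init__(self, num_nodes, dim):
        super().__init__()
        self.blocks = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_nodes))
        # edge[i, j] gates the connection from node j's output into block i
        self.edge = nn.Parameter(torch.ones(num_nodes, num_nodes))

    def forward(self, x):
        outs = [x]  # node 0 is the network input
        for i, block in enumerate(self.blocks):
            # weighted sum over all earlier node outputs (complete DAG)
            agg = sum(torch.sigmoid(self.edge[i, j]) * outs[j]
                      for j in range(len(outs)))
            outs.append(torch.relu(block(agg)))
        return outs[-1]
```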
- A Comparative Study of Social Network Classifiers for Predicting Churn in the Telecommunication Industry [8.592714155264613]
Networked learning has been shown to be effective in a number of studies.
These methods have been adapted to predict customer churn in telecommunication companies.
arXiv Detail & Related papers (2020-01-18T17:05:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.