Collective Intelligence for Deep Learning: A Survey of Recent Developments
- URL: http://arxiv.org/abs/2111.14377v3
- Date: Thu, 10 Mar 2022 14:25:15 GMT
- Title: Collective Intelligence for Deep Learning: A Survey of Recent Developments
- Authors: David Ha, Yujin Tang
- Abstract summary: We will provide a historical context of neural network research's involvement with complex systems.
We will highlight several active areas in modern deep learning research that incorporate the principles of collective intelligence.
- Score: 11.247894240593691
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In the past decade, we have witnessed the rise of deep learning to dominate
the field of artificial intelligence. Advances in artificial neural networks,
alongside corresponding advances in hardware accelerators with large memory
capacity, together with the availability of large datasets, have enabled
practitioners to train and deploy sophisticated neural network models that
achieve state-of-the-art performance on tasks across several fields spanning
computer vision, natural language processing, and reinforcement learning.
However, as these neural networks become bigger, more complex, and more widely
used, fundamental problems with current deep learning models become more
apparent. State-of-the-art deep learning models are known to suffer from issues
ranging from poor robustness and an inability to adapt to novel task settings
to rigid and inflexible configuration assumptions. Collective behavior,
commonly observed in nature, tends to produce systems that are robust,
adaptable, and have less rigid assumptions about the environment configuration.
Collective intelligence, as a field, studies the group intelligence that
emerges from the interactions of many individuals. Within this field, ideas
such as self-organization, emergent behavior, swarm optimization, and cellular
automata were developed to model and explain complex systems. It is therefore
natural to see these ideas incorporated into newer deep learning methods. In
this review, we will provide a historical context of neural network research's
involvement with complex systems, and highlight several active areas in modern
deep learning research that incorporate the principles of collective
intelligence to advance its current capabilities. We hope this review can serve
as a bridge between the complex systems and deep learning communities.
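The mechanisms the abstract names, such as cellular automata, can be made concrete with a minimal sketch (illustrative only, not code from the survey): an elementary cellular automaton in which each cell updates from purely local neighbor information, yet a complex global pattern emerges.

```python
# Minimal illustration (not from the survey): an elementary cellular
# automaton. Each cell sees only its immediate neighbors, yet the rule
# produces complex emergent global structure -- the core idea behind
# collective intelligence and self-organization.

def step(cells, rule=110):
    """Apply one update of an elementary CA given a Wolfram rule number."""
    n = len(cells)
    out = []
    for i in range(n):
        # Local neighborhood with wrap-around boundaries.
        left = cells[(i - 1) % n]
        center = cells[i]
        right = cells[(i + 1) % n]
        idx = (left << 2) | (center << 1) | right
        # The rule number's bits encode the output for each neighborhood.
        out.append((rule >> idx) & 1)
    return out

def run(width=31, steps=15, rule=110):
    """Evolve a single seeded cell and return the row-by-row history."""
    cells = [0] * width
    cells[width // 2] = 1
    history = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        history.append(cells)
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

Rule 110 is a standard example because it is computationally universal despite each cell following a trivial local lookup table; swarm optimization and neural cellular automata apply the same local-interaction principle to optimization and learned update rules.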
Related papers
- Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically-informed framework for enhancing artificial neural networks (ANNs).
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balancing bioinspiration and complexity.
arXiv Detail & Related papers (2024-07-05T14:11:28Z)
- A Survey on State-of-the-art Deep Learning Applications and Challenges [0.0]
Building a deep learning model is challenging due to the algorithm's complexity and the dynamic nature of real-world problems.
This study aims to comprehensively review the state-of-the-art deep learning models in computer vision, natural language processing, time series analysis and pervasive computing.
arXiv Detail & Related papers (2024-03-26T10:10:53Z)
- Reasoning Algorithmically in Graph Neural Networks [1.8130068086063336]
We aim to integrate the structured and rule-based reasoning of algorithms with adaptive learning capabilities of neural networks.
This dissertation provides theoretical and practical contributions to this area of research.
arXiv Detail & Related papers (2024-02-21T12:16:51Z)
- A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, one such architecture that casts the Common Model of Cognition.
arXiv Detail & Related papers (2023-10-14T23:28:48Z)
- Neuronal Auditory Machine Intelligence (NEURO-AMI) In Perspective [0.0]
We present an overview of a new competing bio-inspired continual learning neural tool, Neuronal Auditory Machine Intelligence (Neuro-AMI).
arXiv Detail & Related papers (2023-10-14T13:17:58Z)
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Ten Quick Tips for Deep Learning in Biology [116.78436313026478]
Machine learning is concerned with the development and applications of algorithms that can recognize patterns in data and use them for predictive modeling.
Deep learning has become its own subfield of machine learning.
In the context of biological research, deep learning has been increasingly used to derive novel insights from high-dimensional biological data.
arXiv Detail & Related papers (2021-05-29T21:02:44Z)
- A brain basis of dynamical intelligence for AI and computational neuroscience [0.0]
More brain-like capacities may demand new theories, models, and methods for designing artificial learning systems.
This article was inspired by our symposium on dynamical neuroscience and machine learning at the 6th Annual US/NIH BRAIN Initiative Investigators Meeting.
arXiv Detail & Related papers (2021-05-15T19:49:32Z)
- Learning Contact Dynamics using Physically Structured Neural Networks [81.73947303886753]
We use connections between deep neural networks and differential equations to design a family of deep network architectures for representing contact dynamics between objects.
We show that these networks can learn discontinuous contact events in a data-efficient manner from noisy observations.
Our results indicate that an idealised form of touch feedback is a key component of making this learning problem tractable.
arXiv Detail & Related papers (2021-02-22T17:33:51Z)
- Neurosymbolic AI for Situated Language Understanding [13.249453757295083]
We argue that computational situated grounding provides a solution to some of these learning challenges.
Our model reincorporates some ideas of classic AI into a framework of neurosymbolic intelligence.
We discuss how situated grounding provides diverse data and multiple levels of modeling for a variety of AI learning challenges.
arXiv Detail & Related papers (2020-12-05T05:03:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.