Towards evolution of Deep Neural Networks through contrastive Self-Supervised learning
- URL: http://arxiv.org/abs/2406.14525v1
- Date: Thu, 20 Jun 2024 17:38:16 GMT
- Title: Towards evolution of Deep Neural Networks through contrastive Self-Supervised learning
- Authors: Adriano Vinhas, João Correia, Penousal Machado
- Abstract summary: We propose a framework that is able to evolve deep neural networks using self-supervised learning.
Our results show that it is possible to evolve adequate neural networks while reducing the reliance on labelled data.
- Score: 0.49157446832511503
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Deep Neural Networks (DNNs) have been successfully applied to a wide range of problems. However, two main limitations are commonly pointed out. The first is that they take a long time to design. The second is that they rely heavily on labelled data, which can be costly and hard to obtain. To address the first problem, neuroevolution has proven to be a plausible option for automating the design of DNNs. As for the second problem, self-supervised learning has been used to leverage unlabelled data to learn representations. Our goal is to study how neuroevolution can help self-supervised learning bridge the gap to supervised learning in terms of performance. In this work, we propose a framework that evolves deep neural networks using self-supervised learning. Our results on the CIFAR-10 dataset show that it is possible to evolve adequate neural networks while reducing the reliance on labelled data. Moreover, an analysis of the structure of the evolved networks suggests that the amount of labelled data fed to them has less effect on the structure of networks that learned via self-supervised learning than on individuals that relied on supervised learning.
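To make the setting concrete, below is a minimal sketch of the two ingredients the abstract combines: an NT-Xent-style contrastive loss (the objective used by methods such as SimCLR) and a generic evolutionary loop that uses it as a fitness signal. The positive-pair layout, the truncation-selection scheme, and all names and hyperparameters are illustrative assumptions; the paper's actual genome encoding and evolutionary operators are not described in this summary.

```python
import numpy as np

def nt_xent_loss(z, tau=0.5):
    """NT-Xent contrastive loss over 2N embeddings, assuming rows i and
    i + N hold the two augmented views of the same image."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarities
    sim = z @ z.T / tau
    n2 = sim.shape[0]
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    pos = np.roll(np.arange(n2), n2 // 2)              # each row's positive
    log_den = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(log_den - sim[np.arange(n2), pos]))

def evolve(population, fitness, mutate, generations=20, seed=0):
    """Generic truncation-selection loop; lower fitness (SSL loss) is better."""
    rng = np.random.default_rng(seed)
    pop = list(population)
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: max(1, len(pop) // 2)]         # keep the best half
        children = [mutate(parents[rng.integers(len(parents))], rng)
                    for _ in range(len(pop) - len(parents))]
        pop = parents + children
    return min(pop, key=fitness)
```

In the paper's setting, `fitness` would train a candidate network with a contrastive objective like the one above before scoring it; here both `fitness` and `mutate` are left abstract.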
Related papers
- Breaching the Bottleneck: Evolutionary Transition from Reward-Driven Learning to Reward-Agnostic Domain-Adapted Learning in Neuromodulated Neural Nets [0.3428444467046466]
AI learning algorithms rely on explicit externally provided measures of behaviour quality to acquire fit behaviour.
This imposes an information bottleneck that precludes learning from diverse non-reward stimulus information.
We propose that species first evolve the ability to learn from reward signals, providing inefficient (bottlenecked) but broad adaptivity.
arXiv Detail & Related papers (2024-04-19T05:14:47Z)
- Neuro-mimetic Task-free Unsupervised Online Learning with Continual Self-Organizing Maps [56.827895559823126]
The self-organizing map (SOM) is a neural model often used for clustering and dimensionality reduction.
We propose a generalization of the SOM, the continual SOM, which is capable of online unsupervised learning under a low memory budget.
Our results, on benchmarks including MNIST, Kuzushiji-MNIST, and Fashion-MNIST, show an almost two-fold increase in accuracy (a sketch of the classic SOM update follows this entry).
arXiv Detail & Related papers (2024-02-19T19:11:22Z)
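As referenced above, here is a minimal NumPy sketch of the classic online SOM update that the continual SOM generalizes. The grid shape, learning rate, and neighbourhood width are illustrative assumptions; the paper's memory-bounded continual mechanism is not shown.

```python
import numpy as np

def som_step(weights, x, lr=0.1, sigma=1.0):
    """One online SOM update: find the best-matching unit (BMU), then pull
    every unit toward the input, weighted by a Gaussian over grid distance.

    weights: (rows, cols, d) grid of prototype vectors; x: (d,) input."""
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    rr, cc = np.indices(dists.shape)
    grid_d2 = (rr - bmu[0]) ** 2 + (cc - bmu[1]) ** 2
    h = np.exp(-grid_d2 / (2.0 * sigma ** 2))          # neighbourhood kernel
    weights += lr * h[..., None] * (x - weights)
    return weights

# Usage: one update of a 10x10 map of 3-dimensional prototypes.
grid = np.random.default_rng(0).normal(size=(10, 10, 3))
grid = som_step(grid, np.array([0.5, -0.2, 0.1]))
```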
- Curriculum Design Helps Spiking Neural Networks to Classify Time Series [16.402675046686834]
Spiking Neural Networks (SNNs) have greater potential for modeling time series data than Artificial Neural Networks (ANNs).
In this work, drawing on brain-inspired science, we find that not only the structure but also the learning process should be human-like.
arXiv Detail & Related papers (2023-12-26T02:04:53Z)
- Graph Neural Networks Provably Benefit from Structural Information: A Feature Learning Perspective [53.999128831324576]
Graph neural networks (GNNs) have pioneered advancements in graph representation learning.
This study investigates the role of graph convolution within the context of feature learning theory (a sketch of the graph convolution operation follows this entry).
arXiv Detail & Related papers (2023-06-24T10:21:11Z)
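For context, this is the graph convolution the study analyzes, shown in its common GCN form (symmetric normalization with self-loops, then a linear map and a ReLU). This is a hedged NumPy sketch of the standard operation, not the paper's exact theoretical setting; the function and variable names are illustrative.

```python
import numpy as np

def graph_conv(adj, h, w):
    """One GCN-style layer: symmetrically normalize the adjacency (with
    self-loops), aggregate neighbour features, apply a linear map + ReLU."""
    a_hat = adj + np.eye(adj.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(a_norm @ h @ w, 0.0)             # ReLU nonlinearity
```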
- Models Developed for Spiking Neural Networks [0.5801044612920815]
Spiking neural networks (SNNs) have been around for a long time, and they have been investigated to understand the dynamics of the brain.
In this work, we reviewed the structures and performances of SNNs on image classification tasks.
The comparisons illustrate that these networks show great capabilities for more complicated problems.
arXiv Detail & Related papers (2022-12-08T16:18:53Z)
- Making a Spiking Net Work: Robust brain-like unsupervised machine learning [0.0]
Spiking Neural Networks (SNNs) are an alternative to Artificial Neural Networks (ANNs).
SNNs struggle with dynamical stability and cannot match the accuracy of ANNs.
We show how an SNN can overcome many of the shortcomings that have been identified in the literature.
arXiv Detail & Related papers (2022-08-02T02:10:00Z)
- An Unsupervised STDP-based Spiking Neural Network Inspired By Biologically Plausible Learning Rules and Connections [10.188771327458651]
Spike-timing-dependent plasticity (STDP) is a general learning rule in the brain, but spiking neural networks (SNNs) trained with STDP alone are inefficient and perform poorly.
We design an adaptive synaptic filter and introduce an adaptive spiking threshold to enrich the representation ability of SNNs.
Our model achieves the current state-of-the-art performance of unsupervised STDP-based SNNs on the MNIST and FashionMNIST datasets (a sketch of the basic pair-based STDP rule follows this entry).
arXiv Detail & Related papers (2022-07-06T14:53:32Z)
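As referenced above, here is a minimal pair-based STDP update in Python: the synapse is potentiated when the presynaptic spike precedes the postsynaptic one and depressed otherwise. The amplitudes and time constants are illustrative defaults; the paper's adaptive synaptic filter and adaptive spiking threshold are not modeled here.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_max=1.0):
    """Pair-based STDP: exponential weight change as a function of the
    pre/post spike-time difference (times in milliseconds)."""
    dt = t_post - t_pre
    if dt > 0:                                   # pre before post -> LTP
        w += a_plus * np.exp(-dt / tau_plus)
    else:                                        # post before pre -> LTD
        w -= a_minus * np.exp(dt / tau_minus)
    return float(np.clip(w, 0.0, w_max))

# Usage: a pre-spike at 10 ms followed by a post-spike at 15 ms potentiates.
print(stdp_update(0.5, t_pre=10.0, t_post=15.0))
```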
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Reducing Catastrophic Forgetting in Self Organizing Maps with Internally-Induced Generative Replay [67.50637511633212]
A lifelong learning agent is able to continually learn from potentially infinite streams of pattern sensory data.
One major historic difficulty in building agents that adapt is that neural systems struggle to retain previously-acquired knowledge when learning from new samples.
This problem is known as catastrophic forgetting (interference) and remains an unsolved problem in the domain of machine learning to this day.
arXiv Detail & Related papers (2021-12-09T07:11:14Z)
- Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting [135.0863818867184]
Artificial neural variability (ANV) helps artificial neural networks learn some advantages from "natural" neural networks.
ANV acts as an implicit regularizer of the mutual information between the training data and the learned model.
It can effectively relieve overfitting, label-noise memorization, and catastrophic forgetting at negligible cost (a toy noise-injection sketch follows this entry).
arXiv Detail & Related papers (2020-11-12T06:06:33Z)
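As referenced above, a toy stand-in for the variability idea: re-sampling small Gaussian weight perturbations at every forward pass, so training never sees exactly the same network twice. This is a generic noise-injection sketch under assumed names and values, not the paper's specific ANV formulation.

```python
import numpy as np

def noisy_forward(weights, x, sigma=0.01, rng=None):
    """Forward pass through a small tanh MLP with fresh Gaussian weight
    noise per call, acting like an implicit regularizer."""
    rng = rng or np.random.default_rng(0)
    for w in weights:
        w_noisy = w + rng.normal(0.0, sigma, size=w.shape)  # per-call noise
        x = np.tanh(x @ w_noisy)
    return x

# Usage: two-layer net on a random input (shapes are illustrative).
layers = [np.random.randn(8, 16), np.random.randn(16, 4)]
print(noisy_forward(layers, np.random.randn(8)))
```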
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising direction, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.