The Next Big Thing(s) in Unsupervised Machine Learning: Five Lessons from Infant Learning
- URL: http://arxiv.org/abs/2009.08497v1
- Date: Thu, 17 Sep 2020 18:47:06 GMT
- Title: The Next Big Thing(s) in Unsupervised Machine Learning: Five Lessons from Infant Learning
- Authors: Lorijn Zaadnoordijk, Tarek R. Besold, Rhodri Cusack
- Abstract summary: We argue that the developmental science of infant cognition might hold the key to unlocking the next generation of unsupervised learning approaches.
Human infant learning is the closest biological parallel to artificial unsupervised learning, as infants too must learn useful representations from unlabelled data.
We identify five crucial factors enabling infants' quality and speed of learning, assess the extent to which these have already been exploited in machine learning, and propose how further adoption of these factors can give rise to previously unseen performance levels in unsupervised learning.
- Score: 2.9005223064604078
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: After a surge in popularity of supervised Deep Learning, the desire to reduce the dependence on curated, labelled data sets and to leverage the vast quantities of unlabelled data available recently triggered renewed interest in unsupervised learning algorithms. Despite significantly improved performance due to approaches such as the identification of disentangled latent representations, contrastive learning, and clustering optimisations, the performance of unsupervised machine learning still falls short of its hypothesised potential. Machine learning has previously taken inspiration from neuroscience and cognitive science with great success. However, this has mostly been based on adult learners with access to labels and a vast amount of prior knowledge. In order to push unsupervised machine learning forward, we argue that the developmental science of infant cognition might hold the key to unlocking the next generation of unsupervised learning approaches. Conceptually, human infant learning is the closest biological parallel to artificial unsupervised learning, as infants too must learn useful representations from unlabelled data. In contrast to machine learning, these new representations are learned rapidly and from relatively few examples. Moreover, infants learn robust representations that can be used flexibly and efficiently in a number of different tasks and contexts. We identify five crucial factors enabling infants' quality and speed of learning, assess the extent to which these have already been exploited in machine learning, and propose how further adoption of these factors can give rise to previously unseen performance levels in unsupervised learning.
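The abstract credits part of unsupervised learning's recent progress to contrastive learning. Purely for orientation, and not as anything from this paper, here is a minimal numpy sketch of one standard contrastive objective, the NT-Xent loss popularised by SimCLR; the function name, batch size, and temperature are illustrative assumptions:

```python
import numpy as np

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent (normalised temperature-scaled cross-entropy) loss.

    Minimal sketch of the contrastive objective used by methods such
    as SimCLR; z1 and z2 are embeddings of two augmented views of the
    same batch of inputs (shape: N x D).
    """
    z = np.concatenate([z1, z2], axis=0)               # 2N x D
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalise
    sim = z @ z.T / temperature                        # pairwise cosine similarities
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    # The positive pair for row i is its other view, at index (i + n) mod 2n.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

# Toy usage: random "embeddings" standing in for two augmented views.
rng = np.random.default_rng(0)
z1, z2 = rng.normal(size=(8, 32)), rng.normal(size=(8, 32))
print(nt_xent(z1, z2))
```

The objective pulls the two augmented views of each input together in embedding space while pushing all other pairs in the batch apart, which is one way useful representations can emerge from unlabelled data.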
Related papers
- Enhancing Generative Class Incremental Learning Performance with Model Forgetting Approach [50.36650300087987]
This study presents a novel approach to Generative Class Incremental Learning (GCIL) by introducing a forgetting mechanism.
We found that integrating the forgetting mechanism significantly enhances the model's performance in acquiring new knowledge.
arXiv Detail & Related papers (2024-03-27T05:10:38Z)
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics (a minimal sketch of the predictive-coding idea appears after this list).
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- A newborn embodied Turing test for view-invariant object recognition [0.0]
We present a "newborn embodied Turing test" that allows newborn animals and machines to be raised in the same environments and tested with the same tasks.
To build this platform, we first collected controlled-rearing data from newborn chicks, then performed "digital twin" experiments in which machines were raised in virtual environments that mimicked the rearing conditions of the chicks.
arXiv Detail & Related papers (2023-06-08T22:46:31Z)
- Responsible Active Learning via Human-in-the-loop Peer Study [88.01358655203441]
We propose a responsible active learning method, namely Peer Study Learning (PSL), to simultaneously preserve data privacy and improve model stability.
We first introduce a human-in-the-loop teacher-student architecture to isolate unlabelled data from the task learner (teacher) on the cloud side.
During training, the task learner instructs the lightweight active learner, which then provides feedback on the active sampling criterion.
arXiv Detail & Related papers (2022-11-24T13:18:27Z)
- Activation Learning by Local Competitions [4.441866681085516]
We develop a biology-inspired learning rule that discovers features through local competition among neurons (a toy version of such a rule is sketched after this list).
It is demonstrated that the unsupervised features learned by this local learning rule can serve as a pre-training model.
arXiv Detail & Related papers (2022-09-26T10:43:29Z)
- Continual Learning with Bayesian Model based on a Fixed Pre-trained Feature Extractor [55.9023096444383]
Current deep learning models are characterised by catastrophic forgetting of old knowledge when learning new classes.
Inspired by the process of learning new knowledge in human brains, we propose a Bayesian generative model for continual learning (a simplified sketch of this idea appears after this list).
arXiv Detail & Related papers (2022-04-28T08:41:51Z)
- Understand and Improve Contrastive Learning Methods for Visual Representation: A Review [1.4650545418986058]
A promising alternative, self-supervised learning, has gained popularity because of its potential to learn effective data representations without manual labeling.
This literature review aims to provide an up-to-date analysis of researchers' efforts to understand the key components and limitations of self-supervised learning.
arXiv Detail & Related papers (2021-06-06T21:59:49Z)
- Ten Quick Tips for Deep Learning in Biology [116.78436313026478]
Machine learning is concerned with the development and application of algorithms that can recognize patterns in data and use them for predictive modeling.
Deep learning has become its own subfield of machine learning.
In the context of biological research, deep learning has been increasingly used to derive novel insights from high-dimensional biological data.
arXiv Detail & Related papers (2021-05-29T21:02:44Z)
- Self-supervised learning through the eyes of a child [35.235974685889396]
We show the emergence of powerful, high-level visual representations from developmentally realistic natural videos using generic self-supervised learning objectives.
arXiv Detail & Related papers (2020-07-31T17:33:45Z)
- A Developmental Neuro-Robotics Approach for Boosting the Recognition of Handwritten Digits [91.3755431537592]
Recent evidence shows that a simulation of children's embodied strategies can improve machine intelligence too.
This article explores the application of embodied strategies to convolutional neural network models in the context of developmental neuro-robotics.
arXiv Detail & Related papers (2020-03-23T14:55:00Z)
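For the predictive-coding entry above, a minimal single-layer sketch of the core idea, prediction errors driving both inference and learning, may help. The linear model, names, and learning rates are illustrative assumptions, not that paper's architecture:

```python
import numpy as np

def predictive_coding_step(x, W, z, lr_z=0.1, lr_w=0.01):
    """One inference/learning step of a single-layer linear
    predictive-coding model: the latent z is adjusted to reduce the
    prediction error, and the weights W are updated from the same
    error signal. Illustrative sketch only.
    """
    x_hat = W @ z               # top-down prediction of the input
    error = x - x_hat           # prediction error
    z = z + lr_z * (W.T @ error)        # inference: move latent to explain input
    W = W + lr_w * np.outer(error, z)   # learning: reduce future errors
    return W, z, float(np.sum(error ** 2))

rng = np.random.default_rng(0)
x = rng.normal(size=16)                  # one "sensory" input
W = rng.normal(scale=0.1, size=(16, 4))  # generative weights
z = np.zeros(4)                          # latent cause
for _ in range(50):
    W, z, err = predictive_coding_step(x, W, z)
print(err)  # the squared prediction error shrinks over iterations
```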
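For the local-competition entry above, a toy winner-take-all Hebbian rule conveys the flavour of learning by local competition; this is a generic competitive-learning sketch, not necessarily the rule used in that paper:

```python
import numpy as np

def competitive_hebbian_update(x, W, lr=0.05):
    """Toy local-competition rule: neurons compete for each input, and
    only the winner strengthens its weights toward that input
    (Hebbian, no labels, no backpropagated error). Sketch only.
    """
    activations = W @ x
    winner = int(np.argmax(activations))   # local competition
    W[winner] += lr * (x - W[winner])      # move winner toward the input
    return W

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))              # unlabelled inputs
W = rng.normal(scale=0.1, size=(4, 8))     # 4 competing feature detectors
for x in X:
    W = competitive_hebbian_update(x, W)
# The rows of W now act as unsupervised features for downstream tasks.
```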
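For the Bayesian continual-learning entry above, a heavily simplified sketch of the general idea: keep a pre-trained feature extractor frozen and fit a class-conditional Gaussian classifier whose per-class statistics can be added incrementally, so new classes do not overwrite old ones. The class and method names here are hypothetical, and that paper's Bayesian model is more elaborate:

```python
import numpy as np

class IncrementalGaussianClassifier:
    """Class-conditional Gaussians (shared unit covariance) over frozen
    pre-trained features. A new class only adds a mean vector, so old
    knowledge is never overwritten. Illustrative sketch only.
    """
    def __init__(self):
        self.means, self.counts = {}, {}

    def add_examples(self, features, label):
        # Running mean update: works one task (or batch) at a time.
        mu = self.means.get(label, np.zeros(features.shape[1]))
        n = self.counts.get(label, 0)
        total = mu * n + features.sum(axis=0)
        self.counts[label] = n + len(features)
        self.means[label] = total / self.counts[label]

    def predict(self, feature):
        # Maximum likelihood under unit-variance Gaussians = nearest mean.
        return min(self.means, key=lambda c: np.linalg.norm(feature - self.means[c]))

rng = np.random.default_rng(0)
clf = IncrementalGaussianClassifier()
clf.add_examples(rng.normal(0.0, 1.0, (50, 16)), "old_class")
clf.add_examples(rng.normal(3.0, 1.0, (50, 16)), "new_class")  # learned later
print(clf.predict(np.full(16, 3.0)))  # -> "new_class", old class intact
```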
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.