Human-Like Geometric Abstraction in Large Pre-trained Neural Networks
- URL: http://arxiv.org/abs/2402.04203v1
- Date: Tue, 6 Feb 2024 17:59:46 GMT
- Title: Human-Like Geometric Abstraction in Large Pre-trained Neural Networks
- Authors: Declan Campbell, Sreejan Kumar, Tyler Giallanza, Thomas L. Griffiths,
Jonathan D. Cohen
- Abstract summary: We revisit empirical results in cognitive science on geometric visual processing.
We identify three key biases in geometric visual processing.
We test tasks from the literature that probe these biases in humans and find that large pre-trained neural network models used in AI demonstrate more human-like abstract geometric processing.
- Score: 6.650735854030166
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Humans possess a remarkable capacity to recognize and manipulate abstract
structure, which is especially apparent in the domain of geometry. Recent
research in cognitive science suggests neural networks do not share this
capacity, concluding that human geometric abilities come from discrete symbolic
structure in human mental representations. However, progress in artificial
intelligence (AI) suggests that neural networks begin to demonstrate more
human-like reasoning after scaling up standard architectures in both model size
and amount of training data. In this study, we revisit empirical results in
cognitive science on geometric visual processing and identify three key biases
in geometric visual processing: a sensitivity towards complexity, regularity,
and the perception of parts and relations. We test tasks from the literature
that probe these biases in humans and find that large pre-trained neural
network models used in AI demonstrate more human-like abstract geometric
processing.
Related papers
- Post-hoc and manifold explanations analysis of facial expression data based on deep learning [4.586134147113211]
This paper investigates how neural networks process and store facial expression data and associate these data with a range of psychological attributes produced by humans.
Researchers utilized the deep learning model VGG16, demonstrating that neural networks can learn and reproduce key features of facial data.
The experimental results reveal the potential of deep learning models in understanding human emotions and cognitive processes.
arXiv Detail & Related papers (2024-04-29T01:19:17Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Brain-inspired Graph Spiking Neural Networks for Commonsense Knowledge Representation and Reasoning [11.048601659933249]
How neural networks in the human brain represent commonsense knowledge is an important research topic in neuroscience, cognitive science, psychology, and artificial intelligence.
This work investigates how population encoding and spike timing-dependent plasticity (STDP) mechanisms can be integrated into the learning of spiking neural networks.
The neuron populations of different communities together constitute the entire commonsense knowledge graph, forming a giant graph spiking neural network.
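A minimal sketch of the pairwise STDP rule mentioned above (the parameters and function are illustrative, not the paper's actual model): a synapse is strengthened when the presynaptic spike precedes the postsynaptic spike, and weakened otherwise, with an exponential dependence on the spike-time difference.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for a spike-time difference dt = t_post - t_pre (ms).

    Hypothetical learning-rate and time-constant values.
    """
    if dt > 0:
        # Pre fires before post: potentiation, decaying with the delay.
        return a_plus * np.exp(-dt / tau)
    # Post fires before (or with) pre: depression.
    return -a_minus * np.exp(dt / tau)

# Apply the rule to a synapse over two pre/post spike pairings.
w = 0.5
for t_pre, t_post in [(10.0, 15.0), (30.0, 28.0)]:
    w += stdp_dw(t_post - t_pre)
```

The first pairing (pre before post) increases the weight slightly; the second (post before pre) decreases it.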
arXiv Detail & Related papers (2022-07-11T05:22:38Z)
- Guiding Visual Attention in Deep Convolutional Neural Networks Based on Human Eye Movements [0.0]
Deep Convolutional Neural Networks (DCNNs) were originally inspired by principles of biological vision.
Recent advances in deep learning seem to decrease this similarity.
We investigate a purely data-driven approach to obtain useful models.
arXiv Detail & Related papers (2022-06-21T17:59:23Z)
- Interpretability of Neural Network With Physiological Mechanisms [5.1971653175509145]
Deep learning continues to be a powerful state-of-the-art technique that has achieved extraordinary accuracy in various regression and classification tasks.
The original goal of proposing the neural network model is to improve the understanding of complex human brains using a mathematical expression approach.
Recent deep learning techniques are increasingly treated as black-box approximators, losing the interpretability of their functional processes.
arXiv Detail & Related papers (2022-03-24T21:40:04Z)
- The world seems different in a social context: a neural network analysis of human experimental data [57.729312306803955]
We show that it is possible to replicate human behavioral data in both individual and social task settings by modifying the precision of prior and sensory signals.
An analysis of the neural activation traces of the trained networks provides evidence that information is coded in fundamentally different ways in the network in the individual and in the social conditions.
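The precision manipulation described above follows standard Gaussian cue combination: the posterior estimate is a precision-weighted average of the prior and the sensory signal. A minimal sketch (parameter names are illustrative, not taken from the paper):

```python
def integrate(mu_prior, pi_prior, mu_sensory, pi_sensory):
    """Posterior mean and precision for a Gaussian prior times a
    Gaussian likelihood, each specified by (mean, precision)."""
    pi_post = pi_prior + pi_sensory
    mu_post = (pi_prior * mu_prior + pi_sensory * mu_sensory) / pi_post
    return mu_post, pi_post

# Raising the sensory precision pulls the estimate toward the sensory
# signal; lowering it pulls the estimate back toward the prior.
mu_lo, _ = integrate(0.0, 1.0, 1.0, 0.5)  # weak sensory evidence
mu_hi, _ = integrate(0.0, 1.0, 1.0, 4.0)  # strong sensory evidence
```

Modulating these precisions is one way a single model can reproduce different behavior in individual versus social conditions.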
arXiv Detail & Related papers (2022-03-03T17:19:12Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to address the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z)
- Neural population geometry: An approach for understanding biological and artificial neural networks [3.4809730725241605]
We review examples of geometrical approaches providing insight into the function of biological and artificial neural networks.
Neural population geometry has the potential to unify our understanding of structure and function in biological and artificial neural networks.
arXiv Detail & Related papers (2021-04-14T18:10:34Z)
- Continuous Emotion Recognition with Spatiotemporal Convolutional Neural Networks [82.54695985117783]
We investigate the suitability of state-of-the-art deep learning architectures for continuous emotion recognition using long video sequences captured in-the-wild.
We have developed and evaluated convolutional recurrent neural networks combining 2D-CNNs and long short-term memory units, and inflated 3D-CNN models, which are built by inflating the weights of a pre-trained 2D-CNN model during fine-tuning.
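The weight inflation mentioned above can be sketched as follows (a minimal NumPy illustration of the standard I3D recipe; shapes and names are assumptions, not the paper's code): a 2D kernel of shape (C_out, C_in, H, W) is tiled T times along a new temporal axis and rescaled by 1/T, so the 3D filter's response to a static video matches the 2D filter's response to a single frame.

```python
import numpy as np

def inflate_2d_to_3d(w2d, t):
    """Inflate a 2D conv kernel (C_out, C_in, H, W) into a 3D kernel
    (C_out, C_in, T, H, W) by tiling along time and rescaling by 1/T."""
    w3d = np.repeat(w2d[:, :, None, :, :], t, axis=2)  # add temporal axis
    return w3d / t                                     # preserve activation scale

# Example: inflate a 7x7 kernel bank over 5 frames.
w2d = np.random.randn(64, 3, 7, 7)
w3d = inflate_2d_to_3d(w2d, t=5)  # shape (64, 3, 5, 7, 7)
```

Summing the inflated kernel over its temporal axis recovers the original 2D kernel, which is what makes the 2D pre-training transferable.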
arXiv Detail & Related papers (2020-11-18T13:42:05Z)
- DeepRetinotopy: Predicting the Functional Organization of Human Visual Cortex from Structural MRI Data using Geometric Deep Learning [125.99533416395765]
We developed a deep learning model capable of exploiting the structure of the cortex to learn the complex relationship between brain function and anatomy from structural and functional MRI data.
Our model was able to predict the functional organization of human visual cortex from anatomical properties alone, and it was also able to predict nuanced variations across individuals.
arXiv Detail & Related papers (2020-05-26T04:54:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.