A neural network walks into a lab: towards using deep nets as models for human behavior
- URL: http://arxiv.org/abs/2005.02181v1
- Date: Sat, 2 May 2020 11:17:36 GMT
- Title: A neural network walks into a lab: towards using deep nets as models for human behavior
- Authors: Wei Ji Ma and Benjamin Peters
- Abstract summary: We argue why deep neural network models have the potential to be interesting models of human behavior.
We discuss how that potential can be more fully realized.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: What might sound like the beginning of a joke has become an attractive
prospect for many cognitive scientists: the use of deep neural network models
(DNNs) as models of human behavior in perceptual and cognitive tasks. Although
DNNs have taken over machine learning, attempts to use them as models of human
behavior are still in the early stages. Can they become a versatile model class
in the cognitive scientist's toolbox? We first argue why DNNs have the
potential to be interesting models of human behavior. We then discuss how that
potential can be more fully realized. On the one hand, we argue that the cycle
of training, testing, and revising DNNs needs to be revisited through the lens
of the cognitive scientist's goals. Specifically, we argue that methods for
assessing the goodness of fit between DNN models and human behavior have to
date been impoverished. On the other hand, cognitive science might have to
start using more complex tasks (including richer stimulus spaces), but doing so
might be beneficial for DNN-independent reasons as well. Finally, we highlight
avenues where traditional cognitive process models and DNNs may show productive
synergy.
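One way to make goodness-of-fit assessment less impoverished, borrowed from standard cognitive-modeling practice rather than from this paper specifically, is to score a DNN by the trial-level log-likelihood it assigns to each human response instead of by aggregate accuracy. A minimal sketch with hypothetical placeholder data:

```python
import numpy as np

def human_log_likelihood(model_probs, human_choices):
    """Log-likelihood of each human choice under the model's predicted
    response distribution: model_probs[i, k] = P(response k | stimulus i).
    Data layout is a hypothetical placeholder, not from the paper."""
    eps = 1e-12  # guard against log(0)
    return np.log(model_probs[np.arange(len(human_choices)), human_choices] + eps)

rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(4), size=100)  # fake 4-alternative task, 100 trials
choices = rng.integers(0, 4, size=100)       # fake human responses

# Unlike aggregate accuracy, this score rewards a model for predicting
# *which* responses (including errors) a human makes on each trial.
print("mean log-likelihood per trial:", human_log_likelihood(probs, choices).mean())
```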
Related papers
- Performance-optimized deep neural networks are evolving into worse models of inferotemporal visual cortex [8.45100792118802]
We show that the object recognition accuracy of deep neural networks (DNNs) correlates with their ability to predict neural responses to natural images in the inferotemporal (IT) cortex, yet more recent, higher-accuracy DNNs have become progressively worse predictors of IT responses.
Our results suggest that harmonized DNNs break this trade-off between ImageNet accuracy and neural prediction accuracy.
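For context, "ability to predict neural responses" is commonly operationalized as a cross-validated linear encoding model fit from DNN features to recorded responses. The sketch below illustrates that generic recipe with random placeholder data; it is not the paper's actual pipeline:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical shapes: features[i] = DNN activations for image i,
# it_responses[i] = recorded IT firing rates for the same image.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 512))
it_responses = rng.normal(size=(200, 50))

# Fit a ridge regression on a training split, predict held-out images.
fit = Ridge(alpha=1.0).fit(features[:150], it_responses[:150])
pred = fit.predict(features[150:])

# Predictivity: per-site Pearson correlation on the held-out images.
r = [np.corrcoef(pred[:, s], it_responses[150:, s])[0, 1] for s in range(50)]
print("median neural predictivity:", np.median(r))
```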
arXiv Detail & Related papers (2023-06-06T15:34:45Z)
- Are Deep Neural Networks Adequate Behavioural Models of Human Visual Perception? [8.370048099732573]
Deep neural networks (DNNs) are machine learning algorithms that have revolutionised computer vision.
We argue that it is important to distinguish between statistical tools and computational models.
We dispel a number of myths surrounding DNNs in vision science.
arXiv Detail & Related papers (2023-05-26T15:31:06Z)
- Is it conceivable that neurogenesis, neural Darwinism, and species evolution could all serve as inspiration for the creation of evolutionary deep neural networks? [0.0]
Deep Neural Networks (DNNs) are built by composing many layers of artificial neural networks.
This paper emphasizes the importance of what we call two-dimensional brain evolution.
We also highlight the connection between the dropout method, which is widely used for regularizing DNNs, and neurogenesis in the brain.
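For readers unfamiliar with the mechanism, this is the dropout operation in question; a minimal NumPy sketch of the standard (inverted) formulation, not code from the paper:

```python
import numpy as np

def dropout(x, p=0.5, training=True, rng=None):
    """Inverted dropout: silence each unit with probability p during
    training and rescale survivors so the expected activation matches
    test time, when the mask is turned off."""
    if not training or p == 0.0:
        return x
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= p  # keep each unit with probability 1 - p
    return x * mask / (1.0 - p)

h = np.ones(8)            # a toy layer activation
print(dropout(h, p=0.5))  # roughly half the units zeroed, survivors scaled to 2.0
```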
arXiv Detail & Related papers (2023-04-06T14:51:20Z)
- Models Developed for Spiking Neural Networks [0.5801044612920815]
Spiking neural networks (SNNs) have been studied for a long time as a way to understand the dynamics of the brain.
In this work, we review the structures and performance of SNNs on image classification tasks.
The comparisons illustrate that these networks show strong capabilities on more complicated problems.
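As a reminder of what makes SNNs distinctive, the basic unit is often a leaky integrate-and-fire neuron; a toy sketch of its dynamics follows (parameter values are arbitrary, not from the paper):

```python
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: the membrane potential leaks toward rest,
    integrates input current, and emits a spike on crossing threshold."""
    v, spikes = 0.0, []
    for i_t in input_current:
        v += dt / tau * (-v + i_t)  # leaky integration step
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset             # reset after a spike
        else:
            spikes.append(0)
    return spikes

# A constant suprathreshold drive produces a regular spike train.
print(sum(lif_neuron(np.full(200, 1.5))), "spikes over 200 steps")
```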
arXiv Detail & Related papers (2022-12-08T16:18:53Z)
- Harmonizing the object recognition strategies of deep neural networks with humans [10.495114898741205]
We show that state-of-the-art deep neural networks (DNNs) are becoming less aligned with humans as their accuracy improves.
Our work represents the first demonstration that the scaling laws that are guiding the design of DNNs today have also produced worse models of human vision.
arXiv Detail & Related papers (2022-11-08T20:03:49Z)
- Explainability Tools Enabling Deep Learning in Future In-Situ Real-Time Planetary Explorations [58.720142291102135]
Deep learning (DL) has proven to be an effective machine learning and computer vision technique.
Most Deep Neural Network (DNN) architectures are so complex that they are considered a 'black box'.
In this paper, we used integrated gradients to describe the attributions of each neuron to the output classes.
It provides a set of explainability tools (ET) that open the black box of a DNN so that the individual contributions of neurons to category classification can be ranked and visualized.
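Integrated gradients itself is a published, well-specified algorithm (Sundararajan et al., 2017); below is a compact PyTorch sketch with a toy stand-in classifier, not the paper's implementation:

```python
import torch

def integrated_gradients(model, x, baseline, target, steps=50):
    """Approximate integrated gradients: average the gradient of the target
    logit along a straight path from baseline to input, then scale by the
    input-baseline difference."""
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * x.dim()))
    path = baseline + alphas * (x - baseline)  # interpolated inputs
    path.requires_grad_(True)
    logits = model(path)[:, target]
    grads = torch.autograd.grad(logits.sum(), path)[0]
    return (x - baseline) * grads.mean(dim=0)

model = torch.nn.Sequential(torch.nn.Linear(4, 3))  # toy stand-in classifier
x, baseline = torch.randn(4), torch.zeros(4)
print(integrated_gradients(model, x, baseline, target=0))
```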
arXiv Detail & Related papers (2022-01-15T07:10:00Z)
- Neuroevolution of a Recurrent Neural Network for Spatial and Working Memory in a Simulated Robotic Environment [57.91534223695695]
We evolved weights in a biologically plausible recurrent neural network (RNN) using an evolutionary algorithm to replicate the behavior and neural activity observed in rats.
Our method demonstrates how the dynamic activity in evolved RNNs can capture interesting and complex cognitive behavior.
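The general recipe, mutate weights, evaluate fitness in the environment, and keep the best performers, can be sketched in a few lines; the fitness function below is a hypothetical stand-in for a task rollout, not the paper's simulated robotic environment:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(w):
    """Hypothetical stand-in for the task score of a network with weight
    vector w; a real setup would roll out the RNN in the environment."""
    return -np.sum((w - 0.5) ** 2)

# Minimal truncation-selection evolution: rank, keep the top 5, mutate.
pop = [rng.normal(size=16) for _ in range(20)]
for gen in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:5]
    pop = [p + 0.1 * rng.normal(size=16) for p in parents for _ in range(4)]

print("best fitness:", fitness(max(pop, key=fitness)))
```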
arXiv Detail & Related papers (2021-02-25T02:13:52Z)
- What Do Deep Nets Learn? Class-wise Patterns Revealed in the Input Space [88.37185513453758]
We propose a method to visualize and understand the class-wise knowledge learned by deep neural networks (DNNs) under different settings.
Our method searches for a single predictive pattern in the pixel space to represent the knowledge learned by the model for each class.
In the adversarial setting, we show that adversarially trained models tend to learn more simplified shape patterns.
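A rough analogue of searching the pixel space for a class-wise pattern is gradient ascent on a class logit with respect to the input; a hedged sketch with a toy model follows (the paper's actual search procedure may differ):

```python
import torch

def class_pattern(model, target, shape=(1, 3, 32, 32), steps=200, lr=0.1):
    """Gradient ascent in the input (pixel) space: find a single input
    pattern that maximizes the model's logit for one class."""
    x = torch.zeros(shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -model(x)[0, target]  # maximize the target logit
        loss.backward()
        opt.step()
    return x.detach()

toy = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
pattern = class_pattern(toy, target=3)
print(pattern.shape)  # torch.Size([1, 3, 32, 32])
```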
arXiv Detail & Related papers (2021-01-18T06:38:41Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
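The survey covers architectures that build such symmetries into the network itself; as a much cruder illustration of the same prior, one can simply average a model's predictions over a symmetry group, which makes the combined predictor invariant to that group by construction:

```python
import torch

def rotation_symmetrized(model, x):
    """Impose a geometric prior post hoc: average predictions over the
    four 90-degree rotations of the input, yielding a predictor that is
    exactly invariant to those rotations."""
    outs = [model(torch.rot90(x, k, dims=(2, 3))) for k in range(4)]
    return torch.stack(outs).mean(dim=0)

toy = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, 10))
x = torch.randn(1, 3, 8, 8)
print(rotation_symmetrized(toy, x).shape)  # torch.Size([1, 10])
```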
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- Neural Additive Models: Interpretable Machine Learning with Neural Nets [77.66871378302774]
Deep neural networks (DNNs) are powerful black-box predictors that have achieved impressive performance on a wide variety of tasks.
We propose Neural Additive Models (NAMs) which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models.
NAMs learn a linear combination of neural networks that each attend to a single input feature.
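That one-net-per-feature structure is easy to sketch; layer sizes and names below are made up, but the additive form follows the description above:

```python
import torch

class TinyNAM(torch.nn.Module):
    """Neural Additive Model sketch: one small net per input feature,
    with outputs summed, so each feature's contribution to the prediction
    can be read off directly."""
    def __init__(self, n_features, hidden=16):
        super().__init__()
        self.feature_nets = torch.nn.ModuleList(
            torch.nn.Sequential(
                torch.nn.Linear(1, hidden), torch.nn.ReLU(),
                torch.nn.Linear(hidden, 1),
            )
            for _ in range(n_features)
        )
        self.bias = torch.nn.Parameter(torch.zeros(1))

    def forward(self, x):  # x: (batch, n_features)
        contribs = [net(x[:, i:i + 1]) for i, net in enumerate(self.feature_nets)]
        return self.bias + torch.stack(contribs).sum(dim=0)  # (batch, 1)

print(TinyNAM(n_features=5)(torch.randn(8, 5)).shape)  # torch.Size([8, 1])
```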
arXiv Detail & Related papers (2020-04-29T01:28:32Z)
- Architecture Disentanglement for Deep Neural Networks [174.16176919145377]
We introduce neural architecture disentanglement (NAD) to explain the inner workings of deep neural networks (DNNs).
NAD learns to disentangle a pre-trained DNN into sub-architectures according to independent tasks, forming information flows that describe the inference processes.
Results show that misclassified images have a high probability of being assigned to task sub-architectures similar to the correct ones.
arXiv Detail & Related papers (2020-03-30T08:34:33Z)