Are Deep Neural Networks Adequate Behavioural Models of Human Visual
Perception?
- URL: http://arxiv.org/abs/2305.17023v1
- Date: Fri, 26 May 2023 15:31:06 GMT
- Title: Are Deep Neural Networks Adequate Behavioural Models of Human Visual
Perception?
- Authors: Felix A. Wichmann and Robert Geirhos
- Abstract summary: Deep neural networks (DNNs) are machine learning algorithms that have revolutionised computer vision.
We argue that it is important to distinguish between statistical tools and computational models.
We dispel a number of myths surrounding DNNs in vision science.
- Score: 8.370048099732573
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) are machine learning algorithms that have
revolutionised computer vision due to their remarkable successes in tasks like
object classification and segmentation. The success of DNNs as computer vision
algorithms has led to the suggestion that DNNs may also be good models of human
visual perception. We here review evidence regarding current DNNs as adequate
behavioural models of human core object recognition. To this end, we argue that
it is important to distinguish between statistical tools and computational
models, and to understand model quality as a multidimensional concept where
clarity about modelling goals is key. Reviewing a large number of
psychophysical and computational explorations of core object recognition
performance in humans and DNNs, we argue that DNNs are highly valuable
scientific tools but that as of today DNNs should only be regarded as promising
-- but not yet adequate -- computational models of human core object
recognition behaviour. On the way we dispel a number of myths surrounding DNNs
in vision science.
Related papers
- NAS-BNN: Neural Architecture Search for Binary Neural Networks [55.058512316210056]
We propose a novel neural architecture search scheme for binary neural networks, named NAS-BNN.
Our discovered binary model family outperforms previous BNNs for a wide range of operations (OPs) from 20M to 200M.
In addition, we validate the transferability of these searched BNNs on the object detection task, and our binary detectors with the searched BNNs achieve a new state-of-the-art result, e.g., 31.6% mAP with 370M OPs, on the MS COCO dataset.
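The summary above centres on binary neural networks. As background, the core idea of a BNN is that real-valued weights are binarised to {-1, +1}, so multiply-accumulates reduce to sign flips and additions. A minimal sketch of weight binarisation follows; the NAS search procedure of NAS-BNN itself is not reproduced here, and the function names are illustrative, not from the paper.

```python
# Hedged sketch of the weight binarisation at the heart of BNNs.
# Only the inference-time idea is shown; training tricks (e.g. the
# straight-through estimator) and the NAS-BNN search are omitted.

def binarize(w):
    """Map a real weight to -1.0 or +1.0 (with sign(0) := +1)."""
    return 1.0 if w >= 0 else -1.0

def binary_dot(weights, x):
    """Dot product with binarised weights: only adds/subtracts inputs."""
    return sum(binarize(w) * xi for w, xi in zip(weights, x))

w = [0.7, -0.2, 0.05, -1.3]
x = [1.0, 2.0, 3.0, 4.0]
print(binary_dot(w, x))  # 1 - 2 + 3 - 4 = -2.0
```

Because every weight is a single bit, memory and operation counts (the "OPs" budget the summary mentions) shrink dramatically relative to full-precision layers.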
arXiv Detail & Related papers (2024-08-28T02:17:58Z)
- Dimensions underlying the representational alignment of deep neural networks with humans [3.1668470116181817]
We propose a generic framework for yielding comparable representations in humans and deep neural networks (DNNs).
Applying this framework to humans and a DNN model of natural images revealed a low-dimensional DNN embedding of both visual and semantic dimensions.
In contrast to humans, DNNs exhibited a clear dominance of visual over semantic features, indicating divergent strategies for representing images.
arXiv Detail & Related papers (2024-06-27T11:14:14Z)
- Unveiling and Mitigating Generalized Biases of DNNs through the Intrinsic Dimensions of Perceptual Manifolds [46.47992213722412]
Building fair deep neural networks (DNNs) is a crucial step towards achieving trustworthy artificial intelligence.
We propose Intrinsic Dimension Regularization (IDR), which enhances the fairness and performance of models.
In various image recognition benchmark tests, IDR significantly mitigates model bias while improving its performance.
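The entry above hinges on measuring the intrinsic dimension of feature ("perceptual") manifolds. The paper's exact estimator is not reproduced here; as a hedged stand-in, the sketch below uses a simple PCA-based proxy, the number of principal components needed to explain 90% of the variance, which captures the same intuition that features near a low-dimensional manifold have low intrinsic dimension.

```python
# Hedged sketch: PCA-based proxy for the intrinsic dimension of a
# feature manifold. This is an illustrative stand-in, not the
# estimator used by the IDR paper.
import numpy as np

def intrinsic_dim_pca(features, var_threshold=0.9):
    """Rough intrinsic-dimension proxy for an (n_samples, n_dims) array."""
    centred = features - features.mean(axis=0)
    cov = np.cov(centred, rowvar=False)
    # Eigenvalues of the covariance matrix, largest first.
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    ratios = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(ratios, var_threshold) + 1)

rng = np.random.default_rng(0)
# Points on a 2-D plane embedded in 10-D, plus tiny noise:
coeffs = rng.normal(size=(500, 2))
pts = np.hstack([coeffs, 0.01 * rng.normal(size=(500, 8))])
print(intrinsic_dim_pca(pts))  # → 2 (the planted planar dimension)
```

A regulariser in the spirit of IDR would then penalise high values of such an estimate during training; the exact penalty is specified in the paper, not here.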
arXiv Detail & Related papers (2024-04-22T04:16:40Z)
- Fixing the problems of deep neural networks will require better training data and learning algorithms [20.414456664907316]
We argue that DNNs are poor models of biological vision because they rely on strategies that differ markedly from those of humans.
We show that this problem is worsening as DNNs become larger-scale and more accurate.
arXiv Detail & Related papers (2023-09-26T03:09:00Z)
- Performance-optimized deep neural networks are evolving into worse models of inferotemporal visual cortex [8.45100792118802]
We show that object recognition accuracy of deep neural networks (DNNs) correlates with their ability to predict neural responses to natural images in the inferotemporal (IT) cortex.
Our results suggest that harmonized DNNs break the trade-off between ImageNet accuracy and neural prediction accuracy.
arXiv Detail & Related papers (2023-06-06T15:34:45Z)
- Transferability of coVariance Neural Networks and Application to Interpretable Brain Age Prediction using Anatomical Features [119.45320143101381]
Graph convolutional networks (GCN) leverage topology-driven graph convolutional operations to combine information across the graph for inference tasks.
We have studied GCNs with covariance matrices as graphs in the form of coVariance neural networks (VNNs).
VNNs inherit the scale-free data processing architecture of GCNs, and we show that VNNs exhibit transferability of performance across datasets whose covariance matrices converge to a limit object.
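The defining operation of a VNN, as described in the summary above, is a graph filter in which the sample covariance matrix plays the role of the graph shift operator: a polynomial in C applied to the input signal, z = Σ_k h_k C^k x. A minimal sketch follows; the filter coefficients are illustrative, not taken from the paper.

```python
# Hedged sketch: a single coVariance filter, the building block of a
# VNN. The covariance matrix C replaces the graph shift operator of a
# GCN, and the filter computes z = sum_k taps[k] * C^k @ x.
import numpy as np

def covariance_filter(C, x, taps):
    """Apply z = sum_k taps[k] * C^k @ x for k = 0 .. len(taps)-1."""
    z = np.zeros_like(x)
    Ck_x = x.copy()          # C^0 @ x
    for h in taps:
        z += h * Ck_x
        Ck_x = C @ Ck_x      # advance to the next power: C^(k+1) @ x
    return z

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 4))     # 200 samples, 4 features
C = np.cov(data, rowvar=False)       # sample covariance as the "graph"
x = np.ones(4)                       # input signal on the 4 features
print(covariance_filter(C, x, taps=[0.5, 0.25, 0.125]))
```

The transferability claim then rests on the fact that if two datasets' covariance matrices converge to a common limit object, filters like this one produce converging outputs on both.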
arXiv Detail & Related papers (2023-05-02T22:15:54Z)
- Harmonizing the object recognition strategies of deep neural networks with humans [10.495114898741205]
We show that state-of-the-art deep neural networks (DNNs) are becoming less aligned with humans as their accuracy improves.
Our work represents the first demonstration that the scaling laws that are guiding the design of DNNs today have also produced worse models of human vision.
arXiv Detail & Related papers (2022-11-08T20:03:49Z)
- Deep Reinforcement Learning Guided Graph Neural Networks for Brain Network Analysis [61.53545734991802]
We propose a novel brain network representation framework, namely BN-GNN, which searches for the optimal GNN architecture for each brain network.
Our proposed BN-GNN improves the performance of traditional GNNs on different brain network analysis tasks.
arXiv Detail & Related papers (2022-03-18T07:05:27Z)
- Attentive Graph Neural Networks for Few-Shot Learning [74.01069516079379]
Graph Neural Networks (GNNs) have demonstrated superior performance in many challenging applications, including few-shot learning tasks.
Despite their powerful capacity to learn and generalise from few samples, GNNs usually suffer from severe over-fitting and over-smoothing as the model becomes deep.
We propose a novel Attentive GNN to tackle these challenges, by incorporating a triple-attention mechanism.
arXiv Detail & Related papers (2020-07-14T07:43:09Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- A neural network walks into a lab: towards using deep nets as models for human behavior [0.0]
We argue why deep neural network models have the potential to be interesting models of human behavior.
We discuss how that potential can be more fully realized.
arXiv Detail & Related papers (2020-05-02T11:17:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.