Human alignment of neural network representations
- URL: http://arxiv.org/abs/2211.01201v4
- Date: Mon, 3 Apr 2023 09:02:13 GMT
- Title: Human alignment of neural network representations
- Authors: Lukas Muttenthaler, Jonas Dippel, Lorenz Linhardt, Robert A.
Vandermeulen, Simon Kornblith
- Abstract summary: We investigate the factors that affect the alignment between the representations learned by neural networks and human mental representations inferred from behavioral responses.
We find that model scale and architecture have essentially no effect on the alignment with human behavioral responses.
We find that some human concepts such as food and animals are well-represented by neural networks whereas others such as royal or sports-related objects are not.
- Score: 22.671101285994013
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Today's computer vision models achieve human or near-human level performance
across a wide variety of vision tasks. However, their architectures, data, and
learning algorithms differ in numerous ways from those that give rise to human
vision. In this paper, we investigate the factors that affect the alignment
between the representations learned by neural networks and human mental
representations inferred from behavioral responses. We find that model scale
and architecture have essentially no effect on the alignment with human
behavioral responses, whereas the training dataset and objective function both
have a much larger impact. These findings are consistent across three datasets
of human similarity judgments collected using two different tasks. Linear
transformations of neural network representations learned from behavioral
responses from one dataset substantially improve alignment with human
similarity judgments on the other two datasets. In addition, we find that some
human concepts such as food and animals are well-represented by neural networks
whereas others such as royal or sports-related objects are not. Overall,
although models trained on larger, more diverse datasets achieve better
alignment with humans than models trained on ImageNet alone, our results
indicate that scaling alone is unlikely to be sufficient to train neural
networks with conceptual representations that match those used by humans.
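As a rough illustration of the kind of procedure the abstract alludes to (not the authors' exact pipeline), the sketch below learns a linear transformation of frozen network representations so that dot-product similarities better predict human odd-one-out triplet choices. All data, array names (`features`, `triplets`), and dimensions are synthetic assumptions for illustration only.

```python
import torch

torch.manual_seed(0)

n_items, feat_dim, out_dim = 100, 512, 64
features = torch.randn(n_items, feat_dim)        # frozen network representations (synthetic)
triplets = torch.randint(0, n_items, (1000, 3))  # (i, j, k): humans judged k the odd one out,
                                                 # so (i, j) is the most similar pair (synthetic)

# Linear transformation to be learned on top of the frozen features.
W = (0.01 * torch.randn(feat_dim, out_dim)).requires_grad_()
optimizer = torch.optim.Adam([W], lr=1e-3)

for step in range(200):
    z = features @ W                             # transformed representations
    i, j, k = triplets[:, 0], triplets[:, 1], triplets[:, 2]
    sim_ij = (z[i] * z[j]).sum(-1)               # similarity of the human-chosen pair
    sim_ik = (z[i] * z[k]).sum(-1)
    sim_jk = (z[j] * z[k]).sum(-1)
    # Softmax over the three pairwise similarities; maximize the probability
    # that the transformed space ranks the human-chosen pair (i, j) highest.
    logits = torch.stack([sim_ij, sim_ik, sim_jk], dim=-1)
    targets = torch.zeros(len(triplets), dtype=torch.long)
    loss = torch.nn.functional.cross_entropy(logits, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

A transformation fit this way on one behavioral dataset could then be applied to the same network's representations when evaluating alignment on held-out similarity judgments, which is the kind of transfer the abstract reports.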