Language Aligned Visual Representations Predict Human Behavior in
Naturalistic Learning Tasks
- URL: http://arxiv.org/abs/2306.09377v1
- Date: Thu, 15 Jun 2023 08:18:29 GMT
- Title: Language Aligned Visual Representations Predict Human Behavior in
Naturalistic Learning Tasks
- Authors: Can Demircan, Tankred Saanum, Leonardo Pettini, Marcel Binz, Blazej M
Baczkowski, Paula Kaanders, Christian F Doeller, Mona M Garvert, Eric Schulz
- Abstract summary: Humans possess the ability to identify and generalize relevant features of natural objects.
We conducted two experiments involving category learning and reward learning.
Participants successfully identified the relevant stimulus features within a few trials.
We performed an extensive model comparison, evaluating the trial-by-trial predictive accuracy of diverse deep learning models.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Humans possess the ability to identify and generalize relevant features of
natural objects, which aids them in various situations. To investigate this
phenomenon and determine the most effective representations for predicting
human behavior, we conducted two experiments involving category learning and
reward learning. Our experiments used realistic images as stimuli, and
participants were tasked with making accurate decisions based on novel stimuli
for all trials, thereby necessitating generalization. In both tasks, the
underlying rules were generated as simple linear functions using stimulus
dimensions extracted from human similarity judgments. Notably, participants
successfully identified the relevant stimulus features within a few trials,
demonstrating effective generalization. We performed an extensive model
comparison, evaluating how accurately the representations of diverse deep
learning models predicted human choices on a trial-by-trial basis. Intriguingly,
representations from models trained on both text and image data consistently
outperformed models trained solely on images, even surpassing models using the
features that generated the task itself. These findings suggest that
language-aligned visual representations possess sufficient richness to describe
human generalization in naturalistic settings and emphasize the role of
language in shaping human cognition.
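
For intuition, here is a minimal sketch (in Python) of the kind of pipeline the abstract describes: a ground-truth rule defined as a simple linear function over stimulus dimensions, and a trial-by-trial readout that fits a linear classifier on a model's stimulus representations after each trial and scores its prediction for the next response. All names (`stimuli`, `embeddings`, `labels`) and the use of scikit-learn's `LogisticRegression` are illustrative assumptions, not the authors' implementation; real analyses would use embeddings from image-only and image+text models and actual human choices.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical stimulus dimensions (stand-ins for dimensions derived from
# human similarity judgments) and a simple linear rule over them, echoing
# how the tasks' ground-truth rules are described in the abstract.
n_trials, n_dims = 100, 8
stimuli = rng.normal(size=(n_trials, n_dims))
w = np.zeros(n_dims)
w[0] = 1.0                              # a single relevant dimension
labels = (stimuli @ w > 0).astype(int)  # ground-truth category per trial

# Stand-in for a deep model's representation of each stimulus: here the
# true dimensions plus irrelevant noise dimensions. In the paper, these
# would be embeddings from pretrained vision or vision-language models.
embeddings = np.hstack([stimuli, rng.normal(size=(n_trials, 56))])

# Sequential readout: after each trial, fit a linear classifier on all
# past trials and predict the next response. Representations that support
# fast, human-like generalization reach high accuracy within a few trials.
correct = []
for t in range(1, n_trials):
    if np.unique(labels[:t]).size < 2:
        continue  # logistic regression needs both classes seen
    clf = LogisticRegression(max_iter=1000)
    clf.fit(embeddings[:t], labels[:t])
    correct.append(clf.predict(embeddings[t:t + 1])[0] == labels[t])

print(f"mean trial-by-trial accuracy: {np.mean(correct):.2f}")
```

In the study itself the target being predicted is the participant's choice rather than the ground-truth label; this sketch only illustrates the sequential evaluation logic under those stated assumptions.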
Related papers
- Concept Probing: Where to Find Human-Defined Concepts (Extended Version) [3.2443914909457594]
We propose a method to automatically identify which layer's representations in a neural network model should be considered when probing for a given human-defined concept of interest.
We validate our findings through an exhaustive empirical analysis over different neural network models and datasets.
arXiv Detail & Related papers (2025-07-24T16:30:10Z)
- Concept-Guided Interpretability via Neural Chunking [54.73787666584143]
We show that neural networks exhibit patterns in their raw population activity that mirror regularities in the training data.
We propose three methods to extract these emerging entities, complementing each other based on label availability and dimensionality.
Our work points to a new direction for interpretability, one that harnesses both cognitive principles and the structure of naturalistic data.
arXiv Detail & Related papers (2025-05-16T13:49:43Z)
- Aligning Machine and Human Visual Representations across Abstraction Levels [42.86478924838503]
Deep neural networks have achieved success across a wide range of applications, including as models of human behavior in vision tasks.
However, neural network training and human learning differ in fundamental ways, and neural networks often fail to generalize as robustly as humans do.
We highlight a key misalignment between vision models and humans: whereas human conceptual knowledge is hierarchically organized from fine- to coarse-scale distinctions, model representations do not accurately capture all these levels of abstraction.
To address this misalignment, we first train a teacher model to imitate human judgments, then transfer human-like structure from its representations into pretrained state-of-the-art vision foundation models.
arXiv Detail & Related papers (2024-09-10T13:41:08Z)
- Towards Scalable and Versatile Weight Space Learning [51.78426981947659]
This paper introduces the SANE approach to weight-space learning.
Our method extends the idea of hyper-representations towards sequential processing of subsets of neural network weights.
arXiv Detail & Related papers (2024-06-14T13:12:07Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Human-Like Geometric Abstraction in Large Pre-trained Neural Networks [6.650735854030166]
We revisit empirical results in cognitive science on geometric visual processing.
We identify three key biases in geometric visual processing.
We test tasks from the literature that probe these biases in humans and find that large pre-trained neural network models used in AI demonstrate more human-like abstract geometric processing.
arXiv Detail & Related papers (2024-02-06T17:59:46Z)
- On Modifying a Neural Network's Perception [3.42658286826597]
We propose a method which allows one to modify what an artificial neural network is perceiving regarding specific human-defined concepts.
We test the proposed method on different models, assessing whether the performed manipulations are well interpreted by the models, and analyzing how they react to them.
arXiv Detail & Related papers (2023-03-05T12:09:37Z)
- Human alignment of neural network representations [22.671101285994013]
We investigate the factors that affect the alignment between the representations learned by neural networks and human mental representations inferred from behavioral responses.
We find that model scale and architecture have essentially no effect on the alignment with human behavioral responses.
We find that some human concepts such as food and animals are well-represented by neural networks whereas others such as royal or sports-related objects are not.
arXiv Detail & Related papers (2022-11-02T15:23:16Z)
- Neural Novel Actor: Learning a Generalized Animatable Neural Representation for Human Actors [98.24047528960406]
We propose a new method for learning a generalized animatable neural representation from a sparse set of multi-view imagery of multiple persons.
The learned representation can be used to synthesize novel view images of an arbitrary person from a sparse set of cameras, and further animate them with the user's pose control.
arXiv Detail & Related papers (2022-08-25T07:36:46Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Overcoming the Domain Gap in Neural Action Representations [60.47807856873544]
3D pose data can now be reliably extracted from multi-view video sequences without manual intervention.
We propose to use it to guide the encoding of neural action representations together with a set of neural and behavioral augmentations.
To reduce the domain gap, during training, we swap neural and behavioral data across animals that seem to be performing similar actions.
arXiv Detail & Related papers (2021-12-02T12:45:46Z)
- Seeing eye-to-eye? A comparison of object recognition performance in humans and deep convolutional neural networks under image manipulation [0.0]
This study offers a behavioral comparison of visual core object recognition performance between humans and feedforward deep convolutional neural networks (DCNNs).
Analyses of accuracy revealed that humans not only outperform DCNNs under all conditions, but also display significantly greater robustness to shape and, most notably, color alterations.
arXiv Detail & Related papers (2020-07-13T10:26:30Z)
- Adversarially-Trained Deep Nets Transfer Better: Illustration on Image Classification [53.735029033681435]
Transfer learning is a powerful methodology for adapting pre-trained deep neural networks on image recognition tasks to new domains.
In this work, we demonstrate that adversarially-trained models transfer better than non-adversarially-trained models.
arXiv Detail & Related papers (2020-07-11T22:48:42Z)