Distinguishing representational geometries with controversial stimuli:
Bayesian experimental design and its application to face dissimilarity
judgments
- URL: http://arxiv.org/abs/2211.15053v1
- Date: Mon, 28 Nov 2022 04:17:35 GMT
- Title: Distinguishing representational geometries with controversial stimuli:
Bayesian experimental design and its application to face dissimilarity
judgments
- Authors: Tal Golan, Wenxuan Guo, Heiko H. Schütt, Nikolaus Kriegeskorte
- Abstract summary: We show that a neural network trained to invert a 3D-face-model graphics renderer is more human-aligned than the same architecture trained on identification, classification, or autoencoding.
- Score: 0.5735035463793008
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Comparing representations of complex stimuli in neural network layers to
human brain representations or behavioral judgments can guide model
development. However, even qualitatively distinct neural network models often
predict similar representational geometries of typical stimulus sets. We
propose a Bayesian experimental design approach to synthesizing stimulus sets
for adjudicating among representational models efficiently. We apply our method
to discriminate among candidate neural network models of behavioral face
dissimilarity judgments. Our results indicate that a neural network trained to
invert a 3D-face-model graphics renderer is more human-aligned than the same
architecture trained on identification, classification, or autoencoding. Our
proposed stimulus synthesis objective is generally applicable to designing
experiments to be analyzed by representational similarity analysis for model
comparison.
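The core idea of the stimulus synthesis objective can be illustrated with a minimal sketch: given two candidate models, search for "controversial" stimulus pairs whose predicted dissimilarities differ most between the models, so that human judgments on those pairs are maximally diagnostic. The linear embeddings, candidate pool, and objective below are illustrative stand-ins, not the paper's actual networks or Bayesian criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy "models": random linear embeddings standing in for network layers.
# In the paper's setting these would be deep face-image representations.
W_a = rng.normal(size=(8, 16))
W_b = rng.normal(size=(8, 16))

def dissimilarity(W, x, y):
    """Euclidean distance between two stimuli in a model's representation."""
    return np.linalg.norm(W @ x - W @ y)

def disagreement(x, y):
    """How differently the two models rate the same stimulus pair.
    Pairs maximizing this are 'controversial': human judgments on them
    discriminate best between the candidate models."""
    return abs(dissimilarity(W_a, x, y) - dissimilarity(W_b, x, y))

# Pick the most controversial pair from a random candidate pool
# (gradient-based synthesis over pixels would replace this in practice).
pool = rng.normal(size=(64, 16))
best = max(
    ((i, j) for i in range(len(pool)) for j in range(i + 1, len(pool))),
    key=lambda ij: disagreement(pool[ij[0]], pool[ij[1]]),
)
print(best, disagreement(pool[best[0]], pool[best[1]]))
```

Collecting human dissimilarity judgments for the selected pairs then favors whichever model's predictions they match.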
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z) - Computing a human-like reaction time metric from stable recurrent vision models [11.87006916768365]
We sketch a general-purpose methodology to construct computational accounts of reaction times from a stimulus-computable, task-optimized model.
We demonstrate that our metric aligns with patterns of human reaction times for stimulus manipulations across four disparate visual decision-making tasks.
This work paves the way for exploring the temporal alignment of model and human visual strategies in the context of various other cognitive tasks.
arXiv Detail & Related papers (2023-06-20T14:56:02Z) - Evaluating alignment between humans and neural network representations in image-based learning tasks [5.657101730705275]
We tested how well the representations of 86 pretrained neural network models mapped to human learning trajectories.
We found that while training dataset size was a core determinant of alignment with human choices, contrastive training on multimodal data (text and imagery) was a common feature of the publicly available models that best predicted human generalisation.
In conclusion, pretrained neural networks can serve to extract representations for cognitive models, as they appear to capture some fundamental aspects of cognition that are transferable across tasks.
arXiv Detail & Related papers (2023-06-15T08:18:29Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - On Modifying a Neural Network's Perception [3.42658286826597]
We propose a method which allows one to modify what an artificial neural network is perceiving regarding specific human-defined concepts.
We test the proposed method on different models, assessing whether the performed manipulations are well interpreted by the models, and analyzing how they react to them.
arXiv Detail & Related papers (2023-03-05T12:09:37Z) - Online simulator-based experimental design for cognitive model selection [74.76661199843284]
We propose BOSMOS: an approach to experimental design that can select between computational models without tractable likelihoods.
In simulated experiments, we demonstrate that the proposed BOSMOS technique can accurately select models in up to 2 orders of magnitude less time than existing LFI alternatives.
arXiv Detail & Related papers (2023-03-03T21:41:01Z) - Low-Light Image Restoration Based on Retina Model using Neural Networks [0.0]
The proposed neural network model reduces computational overhead compared with traditional signal-processing models, and produces results subjectively comparable with those of complex deep learning models.
This work shows that directly simulating the functionality of retinal neurons with neural networks not only avoids manually searching for optimal parameters, but also paves the way toward building artificial counterparts of certain neurobiological organizations.
arXiv Detail & Related papers (2022-10-04T08:14:49Z) - The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions matched reality.
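The prediction-and-correction idea described above can be sketched in a few lines: one layer predicts the activity of the layer below, and both the weights and the latent state are nudged to shrink the local prediction error. This is an illustrative toy, not the paper's actual neural coding framework; all sizes and the learning rate are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

x = rng.normal(size=4)          # activity of the lower layer (the "reality")
z = rng.normal(size=3)          # latent activity of the higher layer
W = rng.normal(size=(4, 3))     # generative (top-down) prediction weights
lr = 0.02                       # small step size for stable descent

e0 = np.linalg.norm(x - W @ z)  # initial prediction error
for _ in range(500):
    pred = W @ z                # higher layer's prediction of lower activity
    err = x - pred              # local prediction error signal
    W += lr * np.outer(err, z)  # Hebbian-like update driven by the error
    z += lr * (W.T @ err)       # latent state also descends the error

print(np.linalg.norm(x - W @ z))  # prediction error after learning
```

Both updates are local in the sense that each uses only the error signal and the activities adjacent to the connection being adjusted.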
arXiv Detail & Related papers (2020-12-07T01:20:38Z) - A Deep Drift-Diffusion Model for Image Aesthetic Score Distribution Prediction [68.76594695163386]
We propose a Deep Drift-Diffusion model inspired by psychologists to predict aesthetic score distribution from images.
The DDD model describes the psychological process of aesthetic perception, rather than modeling only the final assessment results as traditional approaches do.
Our novel DDD model is simple but efficient, which outperforms the state-of-the-art methods in aesthetic score distribution prediction.
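For intuition about the drift-diffusion process underlying this family of models, here is a minimal simulation: noisy evidence accumulates at a drift rate until it hits a decision bound, and the crossing times over many trials form a distribution. The parameter values and function name are illustrative assumptions, not the paper's DDD network.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_ddm(drift, noise=1.0, bound=2.0, dt=0.01,
                 n_trials=200, max_steps=5000):
    """Euler-Maruyama simulation of a symmetric drift-diffusion process.
    Returns the first bound-crossing time (in seconds) for each trial."""
    times = []
    for _ in range(n_trials):
        x, t = 0.0, 0
        while abs(x) < bound and t < max_steps:
            x += drift * dt + noise * np.sqrt(dt) * rng.normal()
            t += 1
        times.append(t * dt)
    return np.array(times)

fast = simulate_ddm(drift=2.0)   # strong evidence -> earlier crossings
slow = simulate_ddm(drift=0.5)   # weak evidence   -> later crossings
print(fast.mean(), slow.mean())
```

Mapping image features to the drift (and other diffusion parameters) is what turns such a process into a predictor of score distributions.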
arXiv Detail & Related papers (2020-10-15T11:01:46Z) - Can you tell? SSNet -- a Sagittal Stratum-inspired Neural Network Framework for Sentiment Analysis [1.0312968200748118]
We propose a neural network architecture that combines predictions of different models on the same text to construct robust, accurate and computationally efficient classifiers for sentiment analysis.
In particular, we propose a systematic new approach to combining multiple predictions via a dedicated neural network, and develop a mathematical analysis of it along with state-of-the-art experimental results.
arXiv Detail & Related papers (2020-06-23T12:55:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.