Investigating Emotion-Color Association in Deep Neural Networks
- URL: http://arxiv.org/abs/2011.11058v1
- Date: Sun, 22 Nov 2020 16:48:02 GMT
- Title: Investigating Emotion-Color Association in Deep Neural Networks
- Authors: Shivi Gupta, Shashi Kant Gupta
- Abstract summary: We show that representations learned by deep neural networks can indeed show an emotion-color association.
We also show that this method can help us in the emotion classification task, specifically when there are very few examples to train the model.
- Score: 6.85316573653194
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It has been found that representations learned by Deep Neural Networks (DNNs)
correlate very well with neural responses measured in primates' brains and
psychological representations exhibited by human similarity judgment. On
the other hand, past studies have shown that particular colors can be associated
with specific emotion arousal in humans. Do deep neural networks also learn
this behavior? In this study, we investigate whether DNNs can learn implicit
associations in stimuli, in particular an emotion-color association between
image stimuli. Our study was conducted in two parts. First, we collected human
responses on a forced-choice decision task in which subjects were asked to
select a color for a specified emotion-inducing image. Next, we modeled this
decision task on neural networks using the similarity between the deep
representations (extracted using DNNs trained on object classification tasks) of
the images and of the color images used in the task. We found that our model
showed a fuzzy linear relationship between the two decision probabilities. This
leads to two interesting findings: (1) the representations learned by deep
neural networks can indeed show an emotion-color association, and (2) the
emotion-color association is not just random but involves some cognitive
phenomena. Finally, we also show that this method can help us in the emotion
classification task, specifically when there are very few examples to train the
model. This analysis can be relevant to psychologists studying emotion-color
associations and artificial intelligence researchers modeling emotional
intelligence in machines or studying representations learned by deep neural
networks.
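A minimal sketch of this similarity-based modeling step is given below. It is not the authors' code: the feature extractor (torchvision's ResNet-50 pretrained on ImageNet object classification), the file names, the color set, and the softmax temperature are all assumptions made for illustration.

```python
# Minimal sketch (not the authors' code): model the forced-choice color decision
# as similarity between deep features of an emotion-inducing image and of color
# patches, using a CNN pretrained on object classification.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()   # keep penultimate-layer representations
backbone.eval()

@torch.no_grad()
def deep_features(path: str) -> torch.Tensor:
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return backbone(img).squeeze(0)

# Hypothetical stimuli: one emotion-inducing image and the candidate color patches.
emotion_feat = deep_features("emotion_image.jpg")
colors = ["red", "green", "blue", "yellow"]
color_feats = torch.stack([deep_features(f"color_{c}.png") for c in colors])

# Cosine similarity between image and each color, turned into choice probabilities.
sims = F.cosine_similarity(emotion_feat.unsqueeze(0), color_feats, dim=1)
choice_probs = F.softmax(sims / 0.05, dim=0)   # temperature is an assumption
print(dict(zip(colors, choice_probs.tolist())))
```

The resulting model choice probabilities can then be compared against the human choice probabilities from the forced-choice task, which is the comparison behind the fuzzy linear relationship reported above.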
Related papers
- Parallel Backpropagation for Shared-Feature Visualization [36.31730251757713]
Recent work has shown that some out-of-category stimuli also activate neurons in high-level visual brain regions.
This may be due to visual features common among the preferred class also being present in other images.
Here, we propose a deep-learning-based approach for visualizing these features.
arXiv Detail & Related papers (2024-05-16T05:56:03Z)
- Identifying Interpretable Visual Features in Artificial and Biological Neural Systems [3.604033202771937]
Single neurons in neural networks are often interpretable in that they represent individual, intuitively meaningful features.
Many neurons exhibit mixed selectivity, i.e., they represent multiple unrelated features.
We propose an automated method for quantifying visual interpretability and an approach for finding meaningful directions in network activation space.
arXiv Detail & Related papers (2023-10-17T17:41:28Z)
- Searching for the Essence of Adversarial Perturbations [73.96215665913797]
We show that adversarial perturbations contain human-recognizable information, which is the key conspirator responsible for a neural network's erroneous prediction.
This concept of human-recognizable information allows us to explain key features related to adversarial perturbations.
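As a concrete way to obtain a perturbation to inspect, the sketch below uses the standard FGSM attack; this is a generic example, not necessarily the setup used in the paper, and the model, image path, and perturbation budget are assumptions.

```python
# Minimal sketch: generate an FGSM adversarial perturbation and isolate it for
# inspection. A standard attack, not necessarily the paper's exact setup.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1).eval()
preprocess = transforms.Compose([transforms.Resize(256),
                                 transforms.CenterCrop(224),
                                 transforms.ToTensor()])  # normalization omitted for simplicity

x = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)  # assumed input
x.requires_grad_(True)

logits = model(x)
label = logits.argmax(dim=1)
loss = F.cross_entropy(logits, label)
loss.backward()

epsilon = 8 / 255                                  # assumed perturbation budget
perturbation = epsilon * x.grad.sign()             # FGSM step
x_adv = (x + perturbation).clamp(0, 1).detach()

# The perturbation tensor can now be saved or plotted to look for
# human-recognizable structure, which is what the paper investigates.
print(perturbation.abs().mean().item(), model(x_adv).argmax(dim=1).item())
```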
arXiv Detail & Related papers (2022-05-30T18:04:57Z)
- Overcoming the Domain Gap in Neural Action Representations [60.47807856873544]
3D pose data can now be reliably extracted from multi-view video sequences without manual intervention.
We propose to use it to guide the encoding of neural action representations together with a set of neural and behavioral augmentations.
To reduce the domain gap, during training, we swap neural and behavioral data across animals that seem to be performing similar actions.
arXiv Detail & Related papers (2021-12-02T12:45:46Z)
- Deep Auto-encoder with Neural Response [8.797970797884023]
We propose a hybrid model, called deep auto-encoder with the neural response (DAE-NR).
The DAE-NR incorporates the information from the visual cortex into ANNs to achieve better image reconstruction and higher neural representation similarity between biological and artificial neurons.
Our experiments demonstrate that if and only if joint learning is used, DAE-NRs can (i) improve the performance of image reconstruction and (ii) increase the representational similarity between biological and artificial neurons.
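A rough sketch of the joint-learning idea follows; it is an illustration in the spirit of DAE-NR, not the authors' implementation, and the toy encoder/decoder, the linear readout to neural responses, and the loss weighting are all assumptions.

```python
# Rough sketch of joint learning: reconstruction loss plus a term aligning
# hidden features with recorded neural responses (illustrative only).
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, in_dim=784, hidden=64, neural_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden, in_dim), nn.Sigmoid())
        self.readout = nn.Linear(hidden, neural_dim)  # maps features to neural responses

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.readout(z)

model = TinyAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(16, 784)   # placeholder image batch (flattened)
r = torch.rand(16, 32)    # placeholder recorded neural responses
alpha = 0.5               # assumed weighting between the two objectives

recon, pred_r = model(x)
loss = nn.functional.mse_loss(recon, x) + alpha * nn.functional.mse_loss(pred_r, r)
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```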
arXiv Detail & Related papers (2021-11-30T11:44:17Z)
- Emotion recognition in talking-face videos using persistent entropy and neural networks [0.5156484100374059]
We use persistent entropy and neural networks as main tools to recognise and classify emotions from talking-face videos.
We prove that small changes in the video produce small changes in the signature.
These topological signatures are used to feed a neural network to distinguish between the following emotions: neutral, calm, happy, sad, angry, fearful, disgust, and surprised.
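Persistent entropy itself has a simple closed form: normalize the bar lifetimes of a persistence diagram into a probability distribution and take its Shannon entropy. The sketch below shows only that final computation; the diagram values are placeholders, and the pipeline from the talking-face video to the persistence diagram is not shown.

```python
# Minimal sketch: persistent entropy of a persistence diagram.
# E = -sum_i p_i * log(p_i), where p_i = l_i / sum_j l_j and l_i = death_i - birth_i.
import numpy as np

def persistent_entropy(diagram: np.ndarray) -> float:
    """diagram: array of shape (n, 2) with finite (birth, death) pairs."""
    lifetimes = diagram[:, 1] - diagram[:, 0]
    lifetimes = lifetimes[lifetimes > 0]
    p = lifetimes / lifetimes.sum()
    return float(-(p * np.log(p)).sum())

# Placeholder diagram; in the paper these come from the video-derived signal.
example = np.array([[0.0, 0.8], [0.1, 0.4], [0.2, 0.3]])
print(persistent_entropy(example))
```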
arXiv Detail & Related papers (2021-10-26T11:08:56Z)
- SOLVER: Scene-Object Interrelated Visual Emotion Reasoning Network [83.27291945217424]
We propose a novel Scene-Object interreLated Visual Emotion Reasoning network (SOLVER) to predict emotions from images.
To mine the emotional relationships between distinct objects, we first build up an Emotion Graph based on semantic concepts and visual features.
We also design a Scene-Object Fusion Module to integrate scenes and objects, which exploits scene features to guide the fusion process of object features with the proposed scene-based attention mechanism.
arXiv Detail & Related papers (2021-10-24T02:41:41Z)
- Stimuli-Aware Visual Emotion Analysis [75.68305830514007]
We propose a stimuli-aware visual emotion analysis (VEA) method consisting of three stages, namely stimuli selection, feature extraction and emotion prediction.
To the best of our knowledge, this is the first time a stimuli selection process has been introduced into VEA in an end-to-end network.
Experiments demonstrate that the proposed method consistently outperforms the state-of-the-art approaches on four public visual emotion datasets.
arXiv Detail & Related papers (2021-09-04T08:14:52Z)
- Continuous Emotion Recognition with Spatiotemporal Convolutional Neural Networks [82.54695985117783]
We investigate the suitability of state-of-the-art deep learning architectures for continuous emotion recognition using long video sequences captured in-the-wild.
We have developed and evaluated convolutional recurrent neural networks combining 2D-CNNs and long short-term memory units, and inflated 3D-CNN models, which are built by inflating the weights of a pre-trained 2D-CNN model during fine-tuning.
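A minimal sketch of the 2D-CNN plus LSTM idea is shown below; it is not the authors' architecture, and the layer sizes, frame resolution, and two-dimensional (e.g., valence/arousal) output are assumptions for illustration.

```python
# Minimal sketch of a convolutional recurrent network for continuous emotion
# recognition: a 2D-CNN encodes each frame, an LSTM models the sequence.
import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, feat_dim=128, hidden=64, out_dim=2):  # e.g. valence/arousal
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, out_dim)

    def forward(self, video):                     # video: (B, T, 3, H, W)
        b, t = video.shape[:2]
        feats = self.cnn(video.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out)                     # per-frame predictions: (B, T, out_dim)

clip = torch.rand(2, 8, 3, 112, 112)              # placeholder video batch
print(CRNN()(clip).shape)                         # torch.Size([2, 8, 2])
```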
arXiv Detail & Related papers (2020-11-18T13:42:05Z)
- A Genetic Feature Selection Based Two-stream Neural Network for Anger Veracity Recognition [3.885779089924737]
We use Genetic-based Feature Selection (GFS) methods to select time-series pupillary features of observers who observe acted and genuine anger of the video stimuli.
We then use the selected features to train a simple fully connected neural network and a two-stream neural network.
Our results show that the two-stream architecture is able to achieve a promising recognition result with an accuracy of 93.58% when the pupillary responses from both eyes are available.
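The two-stream idea can be sketched roughly as follows; this is an illustration rather than the paper's network, and the number of GFS-selected features per eye and the layer sizes are assumptions.

```python
# Rough sketch of a two-stream fully connected network: one branch per eye's
# selected pupillary features, fused for the genuine-vs-acted anger decision.
import torch
import torch.nn as nn

class TwoStreamNet(nn.Module):
    def __init__(self, feat_dim=20, hidden=32):   # feat_dim: GFS-selected features (assumed)
        super().__init__()
        self.left = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.right = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.classifier = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                        nn.Linear(hidden, 2))  # genuine vs. acted

    def forward(self, left_feats, right_feats):
        fused = torch.cat([self.left(left_feats), self.right(right_feats)], dim=1)
        return self.classifier(fused)

x_left, x_right = torch.rand(4, 20), torch.rand(4, 20)   # placeholder feature batches
print(TwoStreamNet()(x_left, x_right).shape)              # torch.Size([4, 2])
```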
arXiv Detail & Related papers (2020-09-06T05:52:41Z)
- Compositional Explanations of Neurons [52.71742655312625]
We describe a procedure for explaining neurons in deep representations by identifying compositional logical concepts.
We use this procedure to answer several questions on interpretability in models for vision and natural language processing.
arXiv Detail & Related papers (2020-06-24T20:37:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.