Face processing emerges from object-trained convolutional neural networks
- URL: http://arxiv.org/abs/2405.18800v1
- Date: Wed, 29 May 2024 06:35:33 GMT
- Title: Face processing emerges from object-trained convolutional neural networks
- Authors: Zhenhua Zhao, Ji Chen, Zhicheng Lin, Haojiang Ying
- Abstract summary: Domain-general mechanism accounts posit that face processing can emerge from a neural network without specialized pre-training on faces.
We trained CNNs solely on objects and tested their ability to recognize and represent faces as well as objects that look like faces.
- Score: 3.0186977359501492
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Whether face processing depends on unique, domain-specific neurocognitive mechanisms or domain-general object recognition mechanisms has long been debated. Directly testing these competing hypotheses in humans has proven challenging due to extensive exposure to both faces and objects. Here, we systematically test these hypotheses by capitalizing on recent progress in convolutional neural networks (CNNs) that can be trained without face exposure (i.e., pre-trained weights). Domain-general mechanism accounts posit that face processing can emerge from a neural network without specialized pre-training on faces. Consequently, we trained CNNs solely on objects and tested their ability to recognize and represent faces as well as objects that look like faces (face pareidolia stimuli)... Due to character limits, see the attached PDF for more details.
Related papers
- Modeling biological face recognition with deep convolutional neural networks [0.0]
Deep convolutional neural networks (DCNNs) have become the state-of-the-art computational models of biological object recognition.
Recent efforts have started to transfer this achievement to research on biological face recognition.
In this review, we summarize the first studies that use DCNNs to model biological face recognition.
arXiv Detail & Related papers (2022-08-13T16:45:30Z)
- Searching for the Essence of Adversarial Perturbations [73.96215665913797]
We show that adversarial perturbations contain human-recognizable information, which is the key conspirator responsible for a neural network's erroneous prediction.
This concept of human-recognizable information allows us to explain key features related to adversarial perturbations.
arXiv Detail & Related papers (2022-05-30T18:04:57Z)
- TANet: A new Paradigm for Global Face Super-resolution via Transformer-CNN Aggregation Network [72.41798177302175]
We propose a novel paradigm based on the self-attention mechanism (i.e., the core of Transformer) to fully explore the representation capacity of the facial structure feature.
Specifically, we design a Transformer-CNN aggregation network (TANet) consisting of two paths, in which one path uses CNNs responsible for restoring fine-grained facial details.
By aggregating the features from the above two paths, the consistency of global facial structure and fidelity of local facial detail restoration are strengthened simultaneously.
arXiv Detail & Related papers (2021-09-16T18:15:07Z)
- End2End Occluded Face Recognition by Masking Corrupted Features [82.27588990277192]
State-of-the-art general face recognition models do not generalize well to occluded face images.
This paper presents a novel face recognition method that is robust to occlusions based on a single end-to-end deep neural network.
Our approach, named FROM (Face Recognition with Occlusion Masks), learns to discover the corrupted features from the deep convolutional neural networks, and clean them by the dynamically learned masks.
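The core idea of masking corrupted features can be illustrated with a minimal numpy sketch. This is not the authors' FROM implementation; the function name and shapes are illustrative assumptions. A mask-prediction branch outputs raw scores, which are squashed to (0, 1) and applied element-wise to suppress occlusion-corrupted feature entries:

```python
import numpy as np

def apply_feature_mask(features, mask_logits):
    """Suppress corrupted feature elements with a learned soft mask.

    features:    (C, H, W) deep feature map
    mask_logits: (C, H, W) raw scores from a (hypothetical) mask branch
    """
    mask = 1.0 / (1.0 + np.exp(-mask_logits))  # sigmoid -> values in (0, 1)
    return features * mask                      # element-wise cleaning

feats = np.ones((2, 2, 2))
logits = np.full((2, 2, 2), -50.0)  # strongly "corrupted" -> mask near 0
cleaned = apply_feature_mask(feats, logits)
```

In the paper the mask is learned dynamically per image; here the logits are fixed constants purely to show the masking arithmetic.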
arXiv Detail & Related papers (2021-08-21T09:08:41Z)
- Facial Expressions Recognition with Convolutional Neural Networks [0.0]
We implement a system for facial expression recognition (FER) using neural networks.
We demonstrate a state-of-the-art single-network-accuracy of 70.10% on the FER2013 dataset without using any additional training data.
arXiv Detail & Related papers (2021-07-19T06:41:00Z)
- Continuous Emotion Recognition with Spatiotemporal Convolutional Neural Networks [82.54695985117783]
We investigate the suitability of state-of-the-art deep learning architectures for continuous emotion recognition using long video sequences captured in-the-wild.
We have developed and evaluated convolutional recurrent neural networks combining 2D-CNNs and long short-term memory units, and inflated 3D-CNN models, which are built by inflating the weights of a pre-trained 2D-CNN model during fine-tuning.
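Weight inflation of the kind described can be sketched in a few lines of numpy; this is a generic illustration of the usual I3D-style bootstrap, not the paper's code, and the function name and tensor layout are assumptions. The 2D kernel is repeated along a new temporal axis and divided by the depth, so a 3D convolution over a clip of identical frames reproduces the 2D network's response:

```python
import numpy as np

def inflate_conv_weights(w2d, time_depth):
    """Inflate 2D conv weights (out, in, kH, kW) into 3D (out, in, T, kH, kW).

    Repeating the kernel T times along a new temporal axis and dividing
    by T preserves the 2D response on a video of identical frames.
    """
    w3d = np.repeat(w2d[:, :, None, :, :], time_depth, axis=2)
    return w3d / time_depth

w2d = np.ones((4, 3, 3, 3))           # toy 2D conv weights
w3d = inflate_conv_weights(w2d, 5)    # inflated to temporal depth 5
```

Summing the inflated kernel over the temporal axis recovers the original 2D kernel exactly, which is why the pre-trained features transfer before fine-tuning.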
arXiv Detail & Related papers (2020-11-18T13:42:05Z)
- Face Hallucination via Split-Attention in Split-Attention Network [58.30436379218425]
Convolutional neural networks (CNNs) have been widely employed for face hallucination.
We propose a novel external-internal split attention group (ESAG) to take into account the overall facial profile and fine texture details simultaneously.
By fusing the features from these two paths, the consistency of facial structure and the fidelity of facial details are strengthened.
arXiv Detail & Related papers (2020-10-22T10:09:31Z)
- The FaceChannel: A Fast & Furious Deep Neural Network for Facial Expression Recognition [71.24825724518847]
Current state-of-the-art models for automatic Facial Expression Recognition (FER) are based on very deep neural networks that are effective but rather expensive to train.
We formalize the FaceChannel, a lightweight neural network with far fewer parameters than common deep neural networks.
We demonstrate that our model achieves comparable, if not better, performance than the current state-of-the-art in FER.
arXiv Detail & Related papers (2020-09-15T09:25:37Z)
- Salient Facial Features from Humans and Deep Neural Networks [2.5211876507510724]
We explore the features that are used by humans and by convolutional neural networks (ConvNets) to classify faces.
We use Guided Backpropagation (GB) to visualize the facial features that influence the output of a ConvNet the most when identifying specific individuals.
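The Guided Backpropagation rule itself is simple to state; the numpy sketch below shows the backward rule for a single ReLU, not the paper's full visualization pipeline (which applies this at every ReLU of a trained ConvNet). A gradient is passed back only where the forward input was positive (the plain ReLU derivative) and the incoming gradient is also positive, keeping only evidence that supports the target activation:

```python
import numpy as np

def guided_relu_backward(x, grad_out):
    """Guided Backpropagation through one ReLU.

    x:        forward-pass input to the ReLU
    grad_out: gradient arriving from the layer above
    Returns the gradient passed to the layer below: nonzero only where
    both the input and the incoming gradient are positive.
    """
    return grad_out * (x > 0) * (grad_out > 0)

g = guided_relu_backward(np.array([1.0, -1.0, 2.0]),
                         np.array([0.5, 0.5, -0.3]))
```

In a real implementation this rule is typically installed as a backward hook on each ReLU, and the resulting input-space gradient is rendered as the saliency image.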
arXiv Detail & Related papers (2020-03-08T22:41:04Z) - Verifying Deep Learning-based Decisions for Facial Expression
Recognition [0.8137198664755597]
We classify facial expressions with a neural network and create pixel-based explanations.
We quantify these visual explanations based on a bounding-box method with respect to facial regions.
Although our results show that the neural network achieves state-of-the-art results, the evaluation of the visual explanations reveals that relevant facial regions may not be considered.
arXiv Detail & Related papers (2020-02-14T15:59:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.