Detection of Pitt-Hopkins Syndrome based on morphological facial
features
- URL: http://arxiv.org/abs/2003.08229v2
- Date: Thu, 19 Mar 2020 08:43:41 GMT
- Title: Detection of Pitt-Hopkins Syndrome based on morphological facial
features
- Authors: Elena D'Amato, Constantino Carlos Reyes-Aldasoro, Maria Felicia
Faienza, Marcella Zollino
- Abstract summary: This work describes an automatic methodology to discriminate between individuals with Pitt-Hopkins syndrome (PTHS) and healthy individuals.
The methodology was tested on 71 individuals with PTHS and 55 healthy controls.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This work describes an automatic methodology to discriminate between
individuals with the genetic disorder Pitt-Hopkins syndrome (PTHS) and healthy
individuals. As input data, the methodology accepts unconstrained frontal
facial photographs, from which faces are located with Histograms of Oriented
Gradients feature descriptors. Pre-processing steps of the methodology consist
of colour normalisation, scaling down, rotation, and cropping in order to
produce a series of images of faces with consistent dimensions. Sixty-eight
facial landmarks are automatically located on each face through a cascade of
regression functions learnt via gradient boosting to estimate the shape from an
initial approximation. The intensities of a sparse set of pixels indexed
relative to this initial estimate are used to determine the landmarks. A set of
carefully selected geometric features, for example the relative width of the
mouth or the angle of the nose, is extracted from the landmarks. These features are
used to investigate the statistical differences between the two populations of
PTHS and healthy controls. The methodology was tested on 71 individuals with
PTHS and 55 healthy controls. Two geometric features related to the nose and
mouth showed a statistically significant difference between the two populations.
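The pipeline described above maps closely onto widely available components. The sketch below assumes the dlib library, whose HOG frontal-face detector and 68-point shape predictor (a cascade of gradient-boosted regression trees) match the description; the paper does not name its implementation, and the model-file path, the two geometric features, and the Mann-Whitney U test are illustrative assumptions rather than the authors' exact choices. The pre-processing steps (colour normalisation, scaling, rotation, cropping) are omitted for brevity.

```python
# Minimal sketch of the described pipeline, assuming dlib (not named in the paper).
# The landmark model path, the two features, and the Mann-Whitney U test are
# illustrative assumptions, not the authors' exact choices.
import math

import dlib
import numpy as np
from scipy import stats

detector = dlib.get_frontal_face_detector()            # HOG + linear SVM face detector
predictor = dlib.shape_predictor(
    "shape_predictor_68_face_landmarks.dat")           # gradient-boosted regression cascade

def landmarks(image_path):
    """Return the 68 (x, y) landmarks of the largest detected face, or None."""
    img = dlib.load_rgb_image(image_path)
    faces = detector(img, 1)                            # upsample once to catch small faces
    if not faces:
        return None
    shape = predictor(img, max(faces, key=lambda r: r.area()))
    return np.array([(p.x, p.y) for p in shape.parts()], dtype=float)

def geometric_features(pts):
    """Two illustrative features: relative mouth width and angle at the nose tip."""
    face_width = np.linalg.norm(pts[16] - pts[0])       # jaw corner to jaw corner
    mouth_width = np.linalg.norm(pts[54] - pts[48])     # outer mouth corners
    rel_mouth_width = mouth_width / face_width
    v1, v2 = pts[31] - pts[33], pts[35] - pts[33]       # nostril corners relative to nose tip
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    nose_angle = math.degrees(math.acos(np.clip(cos_a, -1.0, 1.0)))
    return rel_mouth_width, nose_angle

def feature_values(image_paths, index=0):
    """Collect one feature (default: relative mouth width) over a set of photographs."""
    values = []
    for path in image_paths:
        pts = landmarks(path)
        if pts is not None:
            values.append(geometric_features(pts)[index])
    return values

def compare_groups(pths_paths, control_paths, index=0):
    """Non-parametric comparison of one feature between PTHS and control groups."""
    return stats.mannwhitneyu(feature_values(pths_paths, index),
                              feature_values(control_paths, index),
                              alternative="two-sided")
```

A non-parametric test is shown here because geometric facial ratios need not be normally distributed; the abstract only states that statistical differences between the two populations were investigated, without naming the test.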
Related papers
- Exploring a Multimodal Fusion-based Deep Learning Network for Detecting Facial Palsy [3.2381492754749632]
We present a multimodal fusion-based deep learning model that utilizes unstructured data and structured data to detect facial palsy.
Our model slightly improved the precision score to 77.05 at the expense of a decrease in the recall score.
arXiv Detail & Related papers (2024-05-26T09:16:34Z) - COMICS: End-to-end Bi-grained Contrastive Learning for Multi-face Forgery Detection [56.7599217711363]
Most face forgery recognition methods can only process one face at a time.
We propose COMICS, an end-to-end framework for multi-face forgery detection.
arXiv Detail & Related papers (2023-08-03T03:37:13Z) - Unsupervised Anomaly Appraisal of Cleft Faces Using a StyleGAN2-based
Model Adaptation Technique [5.224306534441244]
This paper presents a novel machine learning framework to consistently detect, localize and rate congenital cleft lip anomalies in human faces.
The proposed method employs the StyleGAN2 generative adversarial network with model adaptation to produce normalized transformations of cleft-affected faces.
The anomaly scores yielded by the proposed computer model correlate closely with human ratings of facial differences, with a Pearson's r of 0.942.
arXiv Detail & Related papers (2022-11-12T13:30:20Z) - Few-Shot Meta Learning for Recognizing Facial Phenotypes of Genetic
Disorders [55.41644538483948]
Automated classification and similarity retrieval aid physicians in decision-making to diagnose possible genetic conditions as early as possible.
Previous work has framed the task as a classification problem and applied deep learning methods.
In this study, we used a facial recognition model trained on a large corpus of healthy individuals as a pre-task and transferred it to facial phenotype recognition.
arXiv Detail & Related papers (2022-10-23T11:52:57Z) - Benchmarking Joint Face Spoofing and Forgery Detection with Visual and
Physiological Cues [81.15465149555864]
We establish the first joint face spoofing and forgery detection benchmark using both visual appearance and physiological rPPG cues.
To enhance the rPPG periodicity discrimination, we design a two-branch physiological network using both the facial spatio-temporal rPPG signal map and its continuous wavelet transformed counterpart as inputs.
arXiv Detail & Related papers (2022-08-10T15:41:48Z) - Real-Time Facial Expression Recognition using Facial Landmarks and
Neural Networks [0.0]
This paper presents an algorithm for feature extraction, classification of seven different emotions, and facial expression recognition in real time.
A Multi-Layer Perceptron neural network is trained based on the foregoing algorithm.
A 3-layer neural network is trained using these feature vectors, leading to 96% accuracy on the test set (a hedged sketch of such a landmark-feature classifier appears after this list).
arXiv Detail & Related papers (2022-01-31T21:38:30Z) - Pro-UIGAN: Progressive Face Hallucination from Occluded Thumbnails [53.080403912727604]
We propose a multi-stage Progressive Upsampling and Inpainting Generative Adversarial Network, dubbed Pro-UIGAN.
It exploits facial geometry priors to replenish and upsample (8×) the occluded and tiny faces.
Pro-UIGAN produces visually pleasing HR faces and achieves superior performance in downstream tasks.
arXiv Detail & Related papers (2021-08-02T02:29:24Z) - Robust Face-Swap Detection Based on 3D Facial Shape Information [59.32489266682952]
Face-swap images and videos are increasingly used by malicious attackers to discredit key figures.
Previous pixel-level artifact-based detection techniques focus on unclear low-level patterns while ignoring available semantic clues.
We propose a biometric information based method to fully exploit the appearance and shape feature for face-swap detection of key figures.
arXiv Detail & Related papers (2021-04-28T09:35:48Z) - Automatic Quantification of Facial Asymmetry using Facial Landmarks [0.0]
One-sided facial paralysis causes uneven movements of facial muscles on the sides of the face.
This paper proposes a novel method to provide an objective and quantitative asymmetry score for frontal faces.
arXiv Detail & Related papers (2021-03-20T00:08:37Z) - Facial Manipulation Detection Based on the Color Distribution Analysis
in Edge Region [0.5735035463793008]
We present a generalized and robust facial manipulation detection method based on color distribution analysis of the vertical edge regions in a manipulated image.
Our extensive experiments show that our method outperforms existing face manipulation detection methods at detecting synthesized face images across various datasets, regardless of whether those datasets were used in training.
arXiv Detail & Related papers (2021-02-02T08:19:35Z) - Unsupervised Learning Facial Parameter Regressor for Action Unit
Intensity Estimation via Differentiable Renderer [51.926868759681014]
We present a framework to predict the facial parameters based on a bone-driven face model (BDFM) under different views.
The proposed framework consists of a feature extractor, a generator, and a facial parameter regressor.
arXiv Detail & Related papers (2020-08-20T09:49:13Z)
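The real-time facial expression recognition entry above pairs landmark-derived feature vectors with a small multi-layer perceptron. A hedged sketch of that general recipe is shown below using scikit-learn; the feature normalisation, layer sizes, and train/test split are assumptions for illustration, not the cited paper's configuration.

```python
# Illustrative landmark-feature MLP for seven-emotion classification (scikit-learn).
# Feature construction, layer sizes, and split are assumptions, not the paper's setup.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

EMOTIONS = ["anger", "disgust", "fear", "happiness", "neutral", "sadness", "surprise"]

def landmark_feature_vector(pts):
    """Turn 68 (x, y) landmarks into a translation/scale-invariant 136-d vector."""
    centred = pts - pts.mean(axis=0)
    return (centred / np.linalg.norm(centred)).ravel()

def train_emotion_mlp(landmark_sets, labels):
    """landmark_sets: iterable of (68, 2) arrays; labels: integer emotion indices 0..6."""
    X = np.stack([landmark_feature_vector(p) for p in landmark_sets])
    y = np.asarray(labels)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)
    clf.fit(X_tr, y_tr)
    print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
    return clf
```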
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.