Centre Symmetric Quadruple Pattern: A Novel Descriptor for Facial Image
Recognition and Retrieval
- URL: http://arxiv.org/abs/2201.00511v1
- Date: Mon, 3 Jan 2022 07:56:24 GMT
- Title: Centre Symmetric Quadruple Pattern: A Novel Descriptor for Facial Image
Recognition and Retrieval
- Authors: Soumendu Chakraborty, Satish Kumar Singh, and Pavan Chakraborty
- Abstract summary: Hand-crafted descriptors identify the relationships of the pixels in the local neighbourhood defined by the kernel.
In this paper we propose a hand-crafted descriptor, the Centre Symmetric Quadruple Pattern (CSQP), which encodes facial asymmetry in quadruple space.
Result analysis shows that the proposed descriptor performs well under controlled as well as uncontrolled variations in pose, illumination, background and expressions.
- Score: 20.77994516381
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Facial features are defined as the local relationships that exist amongst the
pixels of a facial image. Hand-crafted descriptors identify the relationships
among the pixels in the local neighbourhood defined by a kernel, a
two-dimensional matrix that is moved across the facial image. The distinctive
information captured by the kernel from a limited number of pixels achieves
satisfactory recognition and retrieval accuracies on facial images taken in a
constrained environment (controlled variations in light, pose, expression, and
background). To achieve similar accuracies in an unconstrained environment,
the local neighbourhood has to be enlarged so that more pixels are encoded,
which in turn increases the feature length of the descriptor. In this paper we
propose a hand-crafted descriptor, the Centre Symmetric Quadruple Pattern
(CSQP), which is structurally symmetric and encodes facial asymmetry in
quadruple space. The proposed descriptor efficiently encodes a larger
neighbourhood with an optimal number of binary bits. Using the average entropy
computed over feature images encoded with the proposed descriptor, we show
that CSQP captures more meaningful information than state-of-the-art
descriptors. The retrieval and recognition accuracies of the proposed
descriptor have been compared with those of state-of-the-art hand-crafted
descriptors (CSLBP, CSLTP, LDP, LBP, SLBP and LDGP) on the benchmark databases
LFW, Colour-FERET, and CASIA-face-v5. The results show that the proposed
descriptor performs well under both controlled and uncontrolled variations in
pose, illumination, background and expression.
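To make the kernel-based encoding and the entropy measure concrete, the following is a minimal Python sketch in the spirit of the centre-symmetric family of baselines (CS-LBP-style comparisons over a 3x3 kernel), together with the per-image code entropy. The neighbourhood size, pair ordering, and threshold are illustrative assumptions; the actual CSQP encodes a larger neighbourhood in quadruple space as defined in the paper.

```python
import numpy as np

def cs_lbp(image, threshold=0.0):
    """Centre-symmetric binary code for every interior pixel of a grayscale image.

    Each pixel gets a 4-bit code by comparing the four centre-symmetric pixel
    pairs of its 3x3 neighbourhood (N/S, NE/SW, E/W, SE/NW). This is the
    classic CS-LBP scheme, shown only to illustrate the kernel-based binary
    descriptors discussed above; it is not the CSQP quadruple-space encoding.
    """
    img = np.asarray(image, dtype=np.float32)
    h, w = img.shape
    # Offsets of the four centre-symmetric pairs within the 3x3 kernel.
    pairs = [((-1, 0), (1, 0)),    # north vs south
             ((-1, 1), (1, -1)),   # NE    vs SW
             ((0, 1), (0, -1)),    # east  vs west
             ((1, 1), (-1, -1))]   # SE    vs NW
    codes = np.zeros((h - 2, w - 2), dtype=np.int32)
    for bit, ((r1, c1), (r2, c2)) in enumerate(pairs):
        a = img[1 + r1:h - 1 + r1, 1 + c1:w - 1 + c1]   # first pixel of the pair
        b = img[1 + r2:h - 1 + r2, 1 + c2:w - 1 + c2]   # centre-symmetric partner
        codes += ((a - b) > threshold).astype(np.int32) << bit
    return codes                                         # values in [0, 15]

def average_entropy(code_image, n_codes=16):
    """Shannon entropy (bits) of the code histogram of one feature image."""
    hist = np.bincount(code_image.ravel(), minlength=n_codes).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

For example, average_entropy(cs_lbp(face)) gives the information content, in bits, of one encoded feature image; the comparison reported in the paper is based on such entropies averaged over feature images produced by each descriptor.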
Related papers
- Beyond Learned Metadata-based Raw Image Reconstruction [86.1667769209103]
Raw images have distinct advantages over sRGB images, e.g., linearity and fine-grained quantization levels.
They are not widely adopted by general users due to their substantial storage requirements.
We propose a novel framework that learns a compact representation in the latent space, serving as metadata.
arXiv Detail & Related papers (2023-06-21T06:59:07Z) - Learning-Based Dimensionality Reduction for Computing Compact and
Effective Local Feature Descriptors [101.62384271200169]
A distinctive representation of image patches in form of features is a key component of many computer vision and robotics tasks.
We investigate multi-layer perceptrons (MLPs) to extract low-dimensional but high-quality descriptors.
We consider different applications, including visual localization, patch verification, image matching and retrieval.
arXiv Detail & Related papers (2022-09-27T17:59:04Z) - PI-Trans: Parallel-ConvMLP and Implicit-Transformation Based GAN for
Cross-View Image Translation [84.97160975101718]
We propose a novel generative adversarial network, PI-Trans, which consists of a novel Parallel-ConvMLP module and an Implicit Transformation module at multiple semantic levels.
PI-Trans achieves the best qualitative and quantitative performance by a large margin compared to the state-of-the-art methods on two challenging datasets.
arXiv Detail & Related papers (2022-07-09T10:35:44Z) - Semantic-shape Adaptive Feature Modulation for Semantic Image Synthesis [71.56830815617553]
A fine-grained part-level semantic layout benefits the generation of object details.
A Shape-aware Position Descriptor (SPD) is proposed to describe each pixel's positional feature.
A Semantic-shape Adaptive Feature Modulation (SAFM) block is proposed to combine the given semantic map and our positional features.
arXiv Detail & Related papers (2022-03-31T09:06:04Z) - Semi-parametric Makeup Transfer via Semantic-aware Correspondence [99.02329132102098]
The large discrepancy between the source non-makeup image and the reference makeup image is one of the key challenges in makeup transfer.
Non-parametric techniques have a high potential for addressing the pose, expression, and occlusion discrepancies.
We propose a Semi-parametric Makeup Transfer (SpMT) method, which combines the reciprocal strengths of non-parametric and parametric mechanisms.
arXiv Detail & Related papers (2022-03-04T12:54:19Z) - Cascaded Asymmetric Local Pattern: A Novel Descriptor for Unconstrained
Facial Image Recognition and Retrieval [20.77994516381]
In this paper, a novel hand-crafted cascaded asymmetric local pattern (CALP) is proposed for facial image retrieval and recognition.
The proposed encoding scheme has an optimal feature length and shows significant improvement in accuracy under environmental and physiological changes in a facial image.
arXiv Detail & Related papers (2022-01-03T08:23:38Z) - Local Quadruple Pattern: A Novel Descriptor for Facial Image Recognition
and Retrieval [20.77994516381]
A novel hand-crafted local quadruple pattern (LQPAT) is proposed for facial image recognition and retrieval.
The proposed descriptor encodes relations amongst the neighbours in quadruple space.
The retrieval and recognition accuracies of the proposed descriptor have been compared with those of state-of-the-art hand-crafted descriptors on benchmark databases.
arXiv Detail & Related papers (2022-01-03T08:04:38Z) - Local Gradient Hexa Pattern: A Descriptor for Face Recognition and
Retrieval [20.77994516381]
A local gradient hexa pattern (LGHP) is proposed that identifies the relationships between the reference pixel and its neighboring pixels.
Discriminative information exists in the local neighborhood as well as in different derivative directions.
The proposed descriptor has better recognition as well as retrieval rates compared to state-of-the-art descriptors.
arXiv Detail & Related papers (2022-01-03T07:45:36Z) - R-Theta Local Neighborhood Pattern for Unconstrained Facial Image
Recognition and Retrieval [20.77994516381]
R-Theta Local Neighborhood Pattern (RTLNP) is proposed for facial image retrieval.
The proposed encoding scheme divides the local neighborhood into sectors of equal angular width, and each sector is further divided into subsectors.
Average grayscale values of the subsectors are encoded to generate the micropatterns (a hedged sketch of this kind of sector-wise averaging appears after this list).
arXiv Detail & Related papers (2022-01-03T07:39:23Z) - Enhancing Multi-Scale Implicit Learning in Image Super-Resolution with
Integrated Positional Encoding [4.781615891172263]
We consider each pixel as the aggregation of signals from a local area in an image super-resolution context.
We propose integrated positional encoding (IPE), extending traditional positional encoding by aggregating frequency information over the pixel area.
We show the effectiveness of IPE-LIIF by quantitative and qualitative evaluations, and further demonstrate the generalization ability of IPE to larger image scales.
arXiv Detail & Related papers (2021-12-10T06:09:55Z) - Image Inpainting with Edge-guided Learnable Bidirectional Attention Maps [85.67745220834718]
We present an edge-guided learnable bidirectional attention map (Edge-LBAM) for improving image inpainting of irregular holes.
Our Edge-LBAM method contains dual procedures, including structure-aware mask updating guided by predicted edges.
Extensive experiments show that our Edge-LBAM is effective in generating coherent image structures and preventing color discrepancy and blurriness.
arXiv Detail & Related papers (2021-04-25T07:25:16Z)
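The sector-wise averaging mentioned in the R-Theta Local Neighborhood Pattern entry above can be sketched as follows; the patch size, number of sectors, inner/outer split radius, and comparison rule are illustrative assumptions rather than the exact RTLNP definition.

```python
import numpy as np

def sector_average_code(patch, n_sectors=8, split_radius=1.5):
    """Toy sector-wise encoding of a square patch around its centre pixel.

    The neighbourhood is divided into `n_sectors` sectors of equal angular
    width; each sector is split at `split_radius` into an inner and an outer
    subsector. One bit per sector is set when the outer average grayscale
    exceeds the inner one. Illustrative only -- not the exact RTLNP scheme.
    """
    patch = np.asarray(patch, dtype=np.float32)
    h, w = patch.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    r = np.hypot(ys - cy, xs - cx)
    theta = np.mod(np.arctan2(ys - cy, xs - cx), 2 * np.pi)
    sector = (theta / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    code = 0
    for s in range(n_sectors):
        inner = patch[(sector == s) & (r > 0) & (r <= split_radius)]
        outer = patch[(sector == s) & (r > split_radius)]
        if inner.size and outer.size and outer.mean() > inner.mean():
            code |= 1 << s
    return code  # integer in [0, 2**n_sectors - 1]
```

Applying such a function to the patch around every pixel yields a code image whose histogram can serve as the image signature, in the same kernel-sliding spirit as the descriptors discussed in the main abstract.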
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.