Kinship Verification Based on Cross-Generation Feature Interaction Learning
- URL: http://arxiv.org/abs/2109.02809v1
- Date: Tue, 7 Sep 2021 01:50:50 GMT
- Title: Kinship Verification Based on Cross-Generation Feature Interaction Learning
- Authors: Guan-Nan Dong, Chi-Man Pun, Zheng Zhang
- Abstract summary: Kinship verification from facial images has been recognized as an emerging yet challenging technique in computer vision applications.
We propose a novel cross-generation feature interaction learning (CFIL) framework for robust kinship verification.
- Score: 53.62256887837659
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Kinship verification from facial images has been recognized as an emerging yet challenging technique in many potential computer vision applications. In this paper, we propose a novel cross-generation feature interaction learning (CFIL) framework for robust kinship verification. In particular, an effective collaborative weighting strategy is constructed to explore the characteristics of cross-generation relations by jointly extracting features from parent and child image pairs. Specifically, we treat parents and children as a whole to extract expressive local and non-local features. Unlike traditional works that measure similarity by distance, we interpolate the similarity calculations as interior auxiliary weights into the deep CNN architecture to learn holistic and natural features. These similarity weights not only involve corresponding single points but also excavate multiple relationships across points, where local and non-local features are calculated using these two kinds of distance measurements. Importantly, instead of conducting similarity computation and feature extraction separately, we integrate similarity learning and feature extraction into one unified learning process. The integrated representations deduced from the local and non-local features comprehensively express the informative semantics embedded in the images and preserve abundant correlation knowledge from the image pairs. Extensive experiments demonstrate the efficiency and superiority of the proposed model compared with state-of-the-art kinship verification methods.
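The key mechanism described in the abstract, similarity values folded back into the network as interior auxiliary weights computed from both local (point-to-point) and non-local (cross-point) measurements, can be illustrated with a small sketch. The following PyTorch-style module is a minimal approximation written under my own assumptions about tensor shapes and fusion; the class name CrossGenerationInteraction and every architectural detail are illustrative, not the authors' released implementation.

```python
# Minimal sketch of the cross-generation feature interaction idea
# (assumed PyTorch-style implementation; names and details are illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossGenerationInteraction(nn.Module):
    """Fuses parent/child feature maps using similarity values as
    interior auxiliary weights rather than as an external distance score."""

    def __init__(self, channels: int):
        super().__init__()
        # Shared projection applied to both generations.
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)
        self.classifier = nn.Linear(2 * channels, 1)

    def forward(self, parent_feat: torch.Tensor, child_feat: torch.Tensor):
        # parent_feat, child_feat: (B, C, H, W) maps from a shared backbone.
        p = self.proj(parent_feat)
        c = self.proj(child_feat)

        b, ch, h, w = p.shape
        p_flat = p.flatten(2)                      # (B, C, HW)
        c_flat = c.flatten(2)                      # (B, C, HW)

        # "Local" similarity: point-to-point cosine similarity per location.
        local_sim = F.cosine_similarity(p_flat, c_flat, dim=1)   # (B, HW)

        # "Non-local" similarity: every parent location against every child
        # location, capturing multiple cross-point relationships.
        p_norm = F.normalize(p_flat, dim=1)
        c_norm = F.normalize(c_flat, dim=1)
        cross = torch.bmm(p_norm.transpose(1, 2), c_norm)        # (B, HW, HW)
        nonlocal_sim = cross.mean(dim=2)                         # (B, HW)

        # Interpolate the similarities back into the network as weights,
        # so similarity learning and feature extraction share one pass.
        weights = torch.sigmoid(local_sim + nonlocal_sim).view(b, 1, h, w)
        fused_parent = (p * weights).mean(dim=(2, 3))            # (B, C)
        fused_child = (c * weights).mean(dim=(2, 3))             # (B, C)

        joint = torch.cat([fused_parent, fused_child], dim=1)    # (B, 2C)
        return torch.sigmoid(self.classifier(joint))             # kin probability


# Example usage with random feature maps standing in for backbone outputs.
if __name__ == "__main__":
    block = CrossGenerationInteraction(channels=256)
    parent = torch.randn(4, 256, 7, 7)
    child = torch.randn(4, 256, 7, 7)
    print(block(parent, child).shape)  # torch.Size([4, 1])
```

In this sketch, the sigmoid-normalized sum of the two similarity signals reweights both feature maps before pooling, so similarity learning and feature extraction happen in the same forward pass, which is the unified-learning idea the abstract emphasizes.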
Related papers
- GSSF: Generalized Structural Sparse Function for Deep Cross-modal Metric Learning [51.677086019209554]
We propose a Generalized Structural Sparse function to capture powerful relationships across modalities for pair-wise similarity learning.
The distance metric delicately encapsulates two formats of diagonal and block-diagonal terms.
Experiments on cross-modal and two extra uni-modal retrieval tasks have validated its superiority and flexibility.
arXiv Detail & Related papers (2024-10-20T03:45:50Z)
- Visual-Geometric Collaborative Guidance for Affordance Learning [63.038406948791454]
We propose a visual-geometric collaborative guided affordance learning network that incorporates visual and geometric cues.
Our method outperforms representative models in both objective metrics and visual quality.
arXiv Detail & Related papers (2024-10-15T07:35:51Z)
- Multi-scale Target-Aware Framework for Constrained Image Splicing Detection and Localization [11.803255600587308]
We propose a multi-scale target-aware framework to couple feature extraction and correlation matching in a unified pipeline.
Our approach can effectively promote the collaborative learning of related patches, and perform mutual promotion of feature learning and correlation matching.
Our experiments demonstrate that our model, which uses a unified pipeline, outperforms state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2023-08-18T07:38:30Z)
- Part Aware Contrastive Learning for Self-Supervised Action Recognition [18.423841093299135]
This paper proposes an attention-based contrastive learning framework for skeleton representation learning, called SkeAttnCLR.
Our proposed SkeAttnCLR outperforms state-of-the-art methods on NTURGB+D, NTU120-RGB+D, and PKU-MMD datasets.
arXiv Detail & Related papers (2023-05-01T05:31:48Z)
- PGGANet: Pose Guided Graph Attention Network for Person Re-identification [0.0]
Person re-identification (ReID) aims at retrieving a person from images captured by different cameras.
It has been shown that combining local features with the global feature of a person image yields robust feature representations for person retrieval.
We propose a pose guided graph attention network, a multi-branch architecture consisting of one branch for global features, one branch for mid-granular body features, and one branch for fine-granular key-point features.
arXiv Detail & Related papers (2021-11-29T09:47:39Z)
- Deep Collaborative Multi-Modal Learning for Unsupervised Kinship Estimation [53.62256887837659]
Kinship verification is a long-standing research challenge in computer vision.
We propose a novel deep collaborative multi-modal learning (DCML) to integrate the underlying information presented in facial properties.
Our DCML method consistently outperforms state-of-the-art kinship verification methods.
arXiv Detail & Related papers (2021-09-07T01:34:51Z)
- Deep Relational Metric Learning [84.95793654872399]
This paper presents a deep relational metric learning framework for image clustering and retrieval.
We learn an ensemble of features that characterizes an image from different aspects to model both interclass and intraclass distributions.
Experiments on the widely-used CUB-200-2011, Cars196, and Stanford Online Products datasets demonstrate that our framework improves existing deep metric learning methods and achieves very competitive results.
arXiv Detail & Related papers (2021-08-23T09:31:18Z)
- Geometrically Mappable Image Features [85.81073893916414]
Vision-based localization of an agent in a map is an important problem in robotics and computer vision.
We propose a method that learns image features targeted for image-retrieval-based localization.
arXiv Detail & Related papers (2020-03-21T15:36:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.