Meta-Mining Discriminative Samples for Kinship Verification
- URL: http://arxiv.org/abs/2103.15108v1
- Date: Sun, 28 Mar 2021 11:47:07 GMT
- Title: Meta-Mining Discriminative Samples for Kinship Verification
- Authors: Wanhua Li, Shiwei Wang, Jiwen Lu, Jianjiang Feng, Jie Zhou
- Abstract summary: Kinship verification databases are inherently unbalanced.
We propose a Discriminative Sample Meta-Mining (DSMM) approach in this paper.
Experimental results on the widely used KinFaceW-I, KinFaceW-II, TSKinFace, and Cornell Kinship datasets demonstrate the effectiveness of the proposed approach.
- Score: 95.26341773545528
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Kinship verification aims to find out whether there is a kin relation for a
given pair of facial images. Kinship verification databases are inherently
unbalanced: a database with N positive kinship pairs naturally yields N(N-1)
negative pairs, since each of the N parent images can be matched against the
children of any of the other N-1 pairs. How to fully utilize the limited positive pairs
and mine discriminative information from sufficient negative samples for
kinship verification remains an open issue. To address this problem, we propose
a Discriminative Sample Meta-Mining (DSMM) approach in this paper. Unlike
existing methods that usually construct a balanced dataset with fixed negative
pairs, we propose to utilize all possible pairs and automatically learn
discriminative information from data. Specifically, we sample an unbalanced
train batch and a balanced meta-train batch for each iteration. Then we learn a
meta-miner with the meta-gradient on the balanced meta-train batch. In the end,
the samples in the unbalanced train batch are re-weighted by the learned
meta-miner to optimize the kinship models. Experimental results on the widely
used KinFaceW-I, KinFaceW-II, TSKinFace, and Cornell Kinship datasets
demonstrate the effectiveness of the proposed approach.
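The abstract outlines a four-step meta-learning loop: compute miner-weighted losses on an unbalanced train batch, take a virtual model update, use the meta-gradient of the resulting loss on a balanced meta-train batch to update the meta-miner, then re-weight the train batch with the updated miner for the real model update. The PyTorch sketch below illustrates that loop under stated assumptions; the linear pair scorer, the loss-to-weight MLP miner, the inner learning rate, and the synthetic batch ratios are illustrative stand-ins, not the authors' architecture or training details.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
feat_dim = 128  # assumed feature size, not from the paper

# Kinship scorer: one linear layer over concatenated pair features, kept as
# raw tensors so the differentiable "virtual" update is easy to write out.
w = torch.zeros(2 * feat_dim, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

def score(xa, xb, w, b):
    # One kinship logit per pair of face feature vectors.
    return torch.cat([xa, xb], dim=1) @ w + b

# Meta-miner: maps each sample's loss to a weight in (0, 1) (assumed form).
miner = torch.nn.Sequential(
    torch.nn.Linear(1, 16), torch.nn.ReLU(),
    torch.nn.Linear(16, 1), torch.nn.Sigmoid())
opt_miner = torch.optim.Adam(miner.parameters(), lr=1e-3)
opt_model = torch.optim.SGD([w, b], lr=1e-2)
inner_lr = 1e-2  # step size of the virtual update (assumed)

def dsmm_step(train_batch, meta_batch):
    xa, xb, y = train_batch    # unbalanced: far more negatives than positives
    mxa, mxb, my = meta_batch  # balanced: equal positives and negatives

    # 1) Miner-weighted per-sample losses on the unbalanced train batch.
    losses = F.binary_cross_entropy_with_logits(
        score(xa, xb, w, b), y, reduction="none")
    weights = miner(losses.detach().unsqueeze(1)).squeeze(1)

    # 2) Virtual model update, kept in the graph so the meta-gradient can
    #    flow back through the weights into the miner.
    gw, gb = torch.autograd.grad((weights * losses).mean(), (w, b),
                                 create_graph=True)
    w_virt, b_virt = w - inner_lr * gw, b - inner_lr * gb

    # 3) Meta-loss of the virtually updated model on the balanced batch;
    #    backward() here is the meta-gradient step for the miner.
    meta_loss = F.binary_cross_entropy_with_logits(
        score(mxa, mxb, w_virt, b_virt), my)
    opt_miner.zero_grad()
    meta_loss.backward()
    opt_miner.step()

    # 4) Real model update, re-weighted by the freshly updated miner.
    losses = F.binary_cross_entropy_with_logits(
        score(xa, xb, w, b), y, reduction="none")
    with torch.no_grad():
        weights = miner(losses.unsqueeze(1)).squeeze(1)
    opt_model.zero_grad()
    (weights * losses).mean().backward()
    opt_model.step()

# Toy usage on synthetic features: 8 positives + 56 negatives per train
# batch, 32 + 32 per meta-train batch (ratios are illustrative only).
def rand_batch(n_pos, n_neg):
    y = torch.cat([torch.ones(n_pos), torch.zeros(n_neg)])
    n = n_pos + n_neg
    return torch.randn(n, feat_dim), torch.randn(n, feat_dim), y

for _ in range(10):
    dsmm_step(rand_batch(8, 56), rand_batch(32, 32))
```

In a real setting the linear scorer would be the kinship model under training and the batches would be sampled from actual pair databases; only the overall weighted-loss / virtual-update / meta-gradient / re-weighted-update structure follows the abstract.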
Related papers
- DiffImpute: Tabular Data Imputation With Denoising Diffusion Probabilistic Model [9.908561639396273]
We propose DiffImpute, a novel Denoising Diffusion Probabilistic Model (DDPM), for tabular data imputation.
It produces credible imputations for missing entries without undermining the authenticity of the existing data.
It can be applied to various settings of Missing Completely At Random (MCAR) and Missing At Random (MAR).
arXiv Detail & Related papers (2024-03-20T08:45:31Z)
- Marginal Debiased Network for Fair Visual Recognition [59.05212866862219]
We propose a novel marginal debiased network (MDN) to learn debiased representations.
Our MDN can achieve remarkable performance on under-represented samples.
arXiv Detail & Related papers (2024-01-04T08:57:09Z)
- Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious to construct in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z)
- Consistent Diffusion Models: Mitigating Sampling Drift by Learning to be Consistent [97.64313409741614]
We propose to enforce a consistency property, which states that predictions of the model on its own generated data are consistent across time.
We show that our novel training objective yields state-of-the-art results for conditional and unconditional generation on CIFAR-10 and baseline improvements on AFHQ and FFHQ.
arXiv Detail & Related papers (2023-02-17T18:45:04Z)
- Learning to Re-weight Examples with Optimal Transport for Imbalanced Classification [74.62203971625173]
Imbalanced data pose challenges for deep learning based classification models.
One of the most widely-used approaches for tackling imbalanced data is re-weighting.
We propose a novel re-weighting method based on optimal transport (OT) from a distributional point of view.
arXiv Detail & Related papers (2022-08-05T01:23:54Z)
- A Penalty Approach for Normalizing Feature Distributions to Build Confounder-Free Models [11.818509522227565]
Metadata Normalization (MDN) estimates the linear relationship between the metadata and each feature based on a non-trainable closed-form solution.
We extend the MDN method by applying a penalty approach (referred to as PMDN).
We show improvement in model accuracy and greater independence from confounders using PMDN over MDN in a synthetic experiment and on a multi-label, multi-site dataset of magnetic resonance images (MRIs).
arXiv Detail & Related papers (2022-07-11T04:02:12Z)
- NP-Match: When Neural Processes meet Semi-Supervised Learning [133.009621275051]
Semi-supervised learning (SSL) has been widely explored in recent years, and it is an effective way of leveraging unlabeled data to reduce the reliance on labeled data.
In this work, we adjust neural processes (NPs) to the semi-supervised image classification task, resulting in a new method named NP-Match.
arXiv Detail & Related papers (2022-07-03T15:24:31Z)
- Identifying Untrustworthy Samples: Data Filtering for Open-domain Dialogues with Bayesian Optimization [28.22184410167622]
We present a data filtering method for open-domain dialogues.
We score training samples with a quality measure, sort them in descending order, and filter out those at the bottom.
Experimental results on two datasets show that our method can effectively identify untrustworthy samples.
arXiv Detail & Related papers (2021-09-14T06:42:54Z)
- Deep Active Learning for Biased Datasets via Fisher Kernel Self-Supervision [5.352699766206807]
Active learning (AL) aims to minimize labeling efforts for data-demanding deep neural networks (DNNs).
We propose a low-complexity method for feature density matching using a self-supervised Fisher kernel (FK).
Our method outperforms state-of-the-art methods on MNIST, SVHN, and ImageNet classification while requiring only 1/10th of the processing.
arXiv Detail & Related papers (2020-03-01T03:56:32Z)