Mitigating Knowledge Discrepancies among Multiple Datasets for Task-agnostic Unified Face Alignment
- URL: http://arxiv.org/abs/2503.22359v1
- Date: Fri, 28 Mar 2025 11:59:27 GMT
- Title: Mitigating Knowledge Discrepancies among Multiple Datasets for Task-agnostic Unified Face Alignment
- Authors: Jiahao Xia, Min Xu, Wenjian Huang, Jianguo Zhang, Haimin Zhang, Chunxia Xiao
- Abstract summary: Despite the similar structures of human faces, existing face alignment methods cannot learn unified knowledge from multiple datasets. This paper presents a strategy to unify knowledge from multiple datasets. The successful mitigation of discrepancies also enhances the efficiency of knowledge transfer to a novel dataset.
- Score: 30.501432077729245
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the similar structures of human faces, existing face alignment methods cannot learn unified knowledge from multiple datasets with different landmark annotations. The limited training samples in a single dataset commonly result in fragile robustness in this field. To mitigate knowledge discrepancies among different datasets and train a task-agnostic unified face alignment (TUFA) framework, this paper presents a strategy to unify knowledge from multiple datasets. Specifically, we calculate a mean face shape for each dataset. To explicitly align these mean shapes on an interpretable plane based on their semantics, each shape is then incorporated with a group of semantic alignment embeddings. The 2D coordinates of these aligned shapes can be viewed as the anchors of the plane. By encoding them into structure prompts and further regressing the corresponding facial landmarks using image features, a mapping from the plane to the target faces is finally established, which unifies the learning targets of different datasets. Consequently, multiple datasets can be utilized to boost the generalization ability of the model. The successful mitigation of discrepancies also enhances the efficiency of knowledge transfer to a novel dataset, significantly boosting the performance of few-shot face alignment. Additionally, the interpretable plane endows TUFA with a task-agnostic characteristic, enabling it to locate landmarks unseen during training in a zero-shot manner. Extensive experiments are carried out on seven benchmarks, and the results demonstrate an impressive improvement in face alignment brought by the mitigation of knowledge discrepancies.
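The first two steps of the abstract's pipeline (a per-dataset mean shape, and anchors encoded into structure prompts) can be illustrated with a minimal sketch. This is not the authors' implementation: the random-projection prompt encoder and all array shapes below are illustrative assumptions standing in for the learned semantic alignment embeddings.

```python
import numpy as np

def mean_shape(landmark_sets):
    """Average the landmark coordinates of all faces in one dataset.

    landmark_sets: array of shape (num_faces, num_landmarks, 2).
    """
    return np.asarray(landmark_sets).mean(axis=0)

def structure_prompts(shape, embed_dim=8, seed=0):
    """Encode each 2D anchor of the aligned mean shape into a simple
    positional embedding (a stand-in for the learned structure prompts)."""
    rng = np.random.default_rng(seed)
    proj = rng.standard_normal((2, embed_dim))
    return shape @ proj  # (num_landmarks, embed_dim)

# Toy example: one "dataset" of two faces, each with 3 annotated landmarks.
ds_a = np.array([[[0.1, 0.2], [0.5, 0.2], [0.3, 0.6]],
                 [[0.2, 0.3], [0.6, 0.3], [0.4, 0.7]]])
mean_a = mean_shape(ds_a)          # (3, 2): one anchor per landmark
prompts = structure_prompts(mean_a)  # (3, 8): one prompt per anchor
print(mean_a.shape, prompts.shape)
```

In the full framework, a network conditioned on these prompts would regress the landmark positions from image features, so datasets with different annotation schemes share one learning target.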
Related papers
- LayerFlow: Layer-wise Exploration of LLM Embeddings using Uncertainty-aware Interlinked Projections [11.252261879736102]
LayerFlow is a visual analytics workspace that displays embeddings in an interlinked projection design.
It communicates the transformation, representation, and interpretation uncertainty.
We show the usability of the presented workspace through replication and expert case studies.
arXiv Detail & Related papers (2025-04-09T12:24:58Z) - An evaluation of Deep Learning based stereo dense matching dataset shift from aerial images and a large scale stereo dataset [2.048226951354646]
We present a method for generating ground-truth disparity maps directly from Light Detection and Ranging (LiDAR) and images.
We evaluate 11 dense matching methods across datasets with diverse scene types, image resolutions, and geometric configurations.
arXiv Detail & Related papers (2024-02-19T20:33:46Z) - Neural Semantic Surface Maps [52.61017226479506]
We present an automated technique for computing a map between two genus-zero shapes, which matches semantically corresponding regions to one another.
Our approach can generate semantic surface-to-surface maps, eliminating manual annotations or any 3D training data requirement.
arXiv Detail & Related papers (2023-09-09T16:21:56Z) - FaceFusion: Exploiting Full Spectrum of Multiple Datasets [4.438240667468304]
We present a novel training method, named FaceFusion.
It creates a fused view of different datasets that is untainted by identity conflicts, while concurrently training an embedding network using the view.
Using the unified view of combined datasets enables the embedding network to be trained against the entire spectrum of the datasets, leading to a noticeable performance boost.
arXiv Detail & Related papers (2023-05-24T00:51:04Z) - CSP: Self-Supervised Contrastive Spatial Pre-Training for Geospatial-Visual Representations [90.50864830038202]
We present Contrastive Spatial Pre-Training (CSP), a self-supervised learning framework for geo-tagged images.
We use a dual-encoder to separately encode the images and their corresponding geo-locations, and use contrastive objectives to learn effective location representations from images.
CSP significantly boosts model performance, yielding 10-34% relative improvement across various labeled-training-data sampling ratios.
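The contrastive objective in a dual-encoder setup like the one described above can be sketched with a standard symmetric InfoNCE-style loss. This is a generic baseline under assumed feature shapes, not CSP's exact formulation:

```python
import numpy as np

def info_nce(img_feats, loc_feats, temperature=0.1):
    """InfoNCE-style loss: matched image/location pairs (same row index)
    should score higher than all mismatched pairs in the batch."""
    img = img_feats / np.linalg.norm(img_feats, axis=1, keepdims=True)
    loc = loc_feats / np.linalg.norm(loc_feats, axis=1, keepdims=True)
    logits = img @ loc.T / temperature            # (N, N) similarity matrix
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(logp))                # cross-entropy on matches

rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 16))
noise = feats + 0.01 * rng.standard_normal((4, 16))
aligned = info_nce(feats, noise)        # near-identical pairs: low loss
shuffled = info_nce(feats, noise[::-1])  # mismatched pairs: higher loss
print(aligned < shuffled)
```

Minimizing such a loss pulls each image embedding toward its own geo-location embedding and pushes it away from the other locations in the batch.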
arXiv Detail & Related papers (2023-05-01T23:11:18Z) - Scalable Self-Supervised Representation Learning from Spatiotemporal Motion Trajectories for Multimodal Computer Vision [0.0]
We propose a self-supervised, unlabeled method for learning representations of geographic locations from GPS trajectories.
We show that reachability embeddings are semantically meaningful representations and yield a 4-23% gain in performance as measured by the area under the precision-recall curve (AUPRC).
arXiv Detail & Related papers (2022-10-07T02:41:02Z) - Detection Hub: Unifying Object Detection Datasets via Query Adaptation on Language Embedding [137.3719377780593]
Detection Hub is a new dataset-aware and category-aligned design.
It mitigates dataset inconsistency and provides coherent guidance for the detector to learn across multiple datasets.
Categories across datasets are semantically aligned into a unified space by replacing one-hot category representations with word embeddings.
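The idea of aligning categories through word embeddings rather than one-hot labels can be shown with a toy example. The embedding vectors below are fabricated stand-ins; a real system would use pretrained language embeddings (e.g. GloVe or CLIP text features):

```python
import numpy as np

# Hypothetical 3-d "word embeddings" for category names.
emb = {
    "person":     np.array([0.90, 0.10, 0.00]),
    "pedestrian": np.array([0.85, 0.15, 0.05]),
    "car":        np.array([0.00, 0.90, 0.20]),
    "automobile": np.array([0.05, 0.88, 0.22]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def align(category, unified_categories):
    """Map a dataset-specific category name to its semantically closest
    category in the unified space."""
    return max(unified_categories, key=lambda c: cosine(emb[category], emb[c]))

unified = ["person", "car"]
print(align("pedestrian", unified))  # person
print(align("automobile", unified))  # car
```

Because semantically equivalent names land near each other in embedding space, datasets that label the same concept differently no longer conflict during joint training.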
arXiv Detail & Related papers (2022-06-07T17:59:44Z) - Self-Supervised Visual Representation Learning with Semantic Grouping [50.14703605659837]
We tackle the problem of learning visual representations from unlabeled scene-centric data.
We propose contrastive learning from data-driven semantic slots, namely SlotCon, for joint semantic grouping and representation learning.
arXiv Detail & Related papers (2022-05-30T17:50:59Z) - Bending Graphs: Hierarchical Shape Matching using Gated Optimal Transport [80.64516377977183]
Shape matching has been a long-studied problem for the computer graphics and vision community.
We investigate a hierarchical learning design, to which we incorporate local patch-level information and global shape-level structures.
We propose a novel optimal transport solver by recurrently updating features on non-confident nodes to learn globally consistent correspondences between the shapes.
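The entropic-regularization (Sinkhorn) solver is the standard baseline that optimal-transport correspondence methods like this build on; a minimal sketch follows. This is not the paper's gated solver, and the cost matrix, marginals, and iteration count are illustrative assumptions:

```python
import numpy as np

def sinkhorn(cost, n_iters=100, epsilon=0.1):
    """Entropy-regularized optimal transport via Sinkhorn iterations,
    producing a soft correspondence (transport) matrix."""
    K = np.exp(-cost / epsilon)                       # Gibbs kernel
    r = np.full(cost.shape[0], 1.0 / cost.shape[0])   # uniform row marginal
    c = np.full(cost.shape[1], 1.0 / cost.shape[1])   # uniform col marginal
    u = np.ones(cost.shape[0])
    v = np.ones(cost.shape[1])
    for _ in range(n_iters):                          # alternate scalings
        u = r / (K @ v)
        v = c / (K.T @ u)
    return np.diag(u) @ K @ np.diag(v)

# Two shapes with two nodes each; matching 0<->0 and 1<->1 is cheap.
cost = np.array([[0.0, 1.0],
                 [1.0, 0.0]])
plan = sinkhorn(cost)
print(plan.round(2))  # mass concentrates on the cheap diagonal pairs
```

In a matching pipeline, the cost matrix would come from learned node features, and the resulting plan gives globally consistent soft correspondences between the two shapes.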
arXiv Detail & Related papers (2022-02-03T11:41:46Z) - Reachability Embeddings: Scalable Self-Supervised Representation Learning from Markovian Trajectories for Geospatial Computer Vision [0.0]
We propose a self-supervised method for learning representations of geographic locations from unlabeled GPS trajectories.
A scalable and distributed algorithm is presented to compute image-like representations, called reachability summaries.
We show that reachability embeddings are semantically meaningful representations and yield a 4-23% gain in performance.
arXiv Detail & Related papers (2021-10-24T20:10:22Z) - DAIL: Dataset-Aware and Invariant Learning for Face Recognition [67.4903809903022]
To achieve good performance in face recognition, a large scale training dataset is usually required.
Naively combining different datasets is problematic due to two major issues.
First, treating the same person as different classes in different datasets during training corrupts back-propagation.
Second, manually cleaning labels requires formidable human effort, especially when there are millions of images and thousands of identities.
arXiv Detail & Related papers (2021-01-14T01:59:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.