Unsupervised Landmark Learning from Unpaired Data
- URL: http://arxiv.org/abs/2007.01053v1
- Date: Mon, 29 Jun 2020 13:57:20 GMT
- Title: Unsupervised Landmark Learning from Unpaired Data
- Authors: Yinghao Xu, Ceyuan Yang, Ziwei Liu, Bo Dai, Bolei Zhou
- Abstract summary: Recent attempts at unsupervised landmark learning leverage synthesized image pairs that are similar in appearance but different in poses.
We propose a cross-image cycle consistency framework which applies the swapping-reconstruction strategy twice to obtain the final supervision.
Our proposed framework is shown to outperform strong baselines by a large margin.
- Score: 117.81440795184587
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent attempts at unsupervised landmark learning leverage synthesized
image pairs that are similar in appearance but different in poses. These methods
learn landmarks by encouraging consistency between the original images and the
images reconstructed from swapped appearances and poses. Since synthesized image
pairs are created by applying pre-defined transformations, they cannot fully
reflect the real variations in both appearance and pose. In this paper, we aim
to open the possibility of learning landmarks on unpaired data (i.e. unaligned
image pairs) sampled from a natural image collection, so that the images can
differ in both appearance and pose. To this end, we propose a cross-image cycle
consistency framework ($C^3$) which applies the swapping-reconstruction strategy
twice to obtain the final supervision. Moreover, a cross-image flow module is
further introduced to impose equivariance between the landmarks estimated across
images. Through comprehensive experiments, our proposed framework is shown to
outperform strong baselines by a large margin. Beyond the quantitative results,
we also provide visualizations and interpretations of our learned models, which
not only verify the effectiveness of the learned landmarks but also yield
insights that are beneficial for future research.
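To make the double swapping-reconstruction idea concrete, here is a minimal PyTorch sketch of the cycle described in the abstract. Everything below (the SimpleEncoder/SimpleDecoder modules, feature dimensions, and the L1 reconstruction loss) is an illustrative assumption, not the paper's actual architecture; in particular, the cross-image flow module that enforces landmark equivariance is omitted.

```python
import torch
import torch.nn as nn

class SimpleEncoder(nn.Module):
    """Illustrative conv encoder producing a feature code (appearance or pose)."""
    def __init__(self, out_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_dim, 4, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class SimpleDecoder(nn.Module):
    """Illustrative decoder mapping concatenated appearance + pose codes to an image."""
    def __init__(self, in_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(in_dim, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, app, pose):
        return self.net(torch.cat([app, pose], dim=1))

app_enc, pose_enc, dec = SimpleEncoder(), SimpleEncoder(), SimpleDecoder()
recon_loss = nn.L1Loss()

# Two unpaired images: they may differ in both appearance and pose.
x = torch.randn(2, 3, 64, 64)
y = torch.randn(2, 3, 64, 64)

# First swap: synthesize an intermediate image carrying y's appearance and x's pose.
xy = dec(app_enc(y), pose_enc(x))

# Second swap: put x's appearance back onto the pose extracted from the
# intermediate image. If the pose code is consistent across images, this
# round trip should reconstruct x, which yields the supervision signal.
x_back = dec(app_enc(x), pose_enc(xy))

loss = recon_loss(x_back, x)
loss.backward()
```

In the paper itself, the pose representation corresponds to the estimated landmarks, so the reconstruction signal from the double swap trains the landmark detector; roughly speaking, the cross-image flow module additionally asks that landmarks estimated in one image, warped by the estimated cross-image flow, agree with the landmarks estimated in the other.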
Related papers
- Mix-up Self-Supervised Learning for Contrast-agnostic Applications [33.807005669824136]
We present the first mix-up self-supervised learning framework for contrast-agnostic applications.
We address the low variance across images via cross-domain mix-up and build the pretext task on image reconstruction and transparency prediction.
arXiv Detail & Related papers (2022-04-02T16:58:36Z)
- Probabilistic Warp Consistency for Weakly-Supervised Semantic Correspondences [118.6018141306409]
We propose Probabilistic Warp Consistency, a weakly-supervised learning objective for semantic matching.
We first construct an image triplet by applying a known warp to one of the images in a pair depicting different instances of the same object class.
Our objective also brings substantial improvements in the strongly-supervised regime, when combined with keypoint annotations.
arXiv Detail & Related papers (2022-03-08T18:55:11Z)
- Learning Contrastive Representation for Semantic Correspondence [150.29135856909477]
We propose a multi-level contrastive learning approach for semantic matching.
We show that image-level contrastive learning is a key component to encourage the convolutional features to find correspondence between similar objects.
arXiv Detail & Related papers (2021-09-22T18:34:14Z)
- Focus on the Positives: Self-Supervised Learning for Biodiversity Monitoring [9.086207853136054]
We address the problem of learning self-supervised representations from unlabeled image collections.
We exploit readily available context data that encodes information such as the spatial and temporal relationships between the input images.
For the critical task of global biodiversity monitoring, this results in image features that can be adapted to challenging visual species classification tasks with limited human supervision.
arXiv Detail & Related papers (2021-08-14T01:12:41Z)
- Object-aware Contrastive Learning for Debiased Scene Representation [74.30741492814327]
We develop a novel object-aware contrastive learning framework that localizes objects in a self-supervised manner.
We also introduce two data augmentations based on ContraCAM, object-aware random crop and background mixup, which reduce contextual and background biases during contrastive self-supervised learning.
arXiv Detail & Related papers (2021-07-30T19:24:07Z)
- Unsupervised Deep Metric Learning with Transformed Attention Consistency and Contrastive Clustering Loss [28.17607283348278]
Existing approaches for unsupervised metric learning focus on exploring self-supervision information within the input image itself.
We observe that, when analyzing images, human eyes often compare images against each other instead of examining them individually.
We develop a new approach to unsupervised deep metric learning where the network is learned based on self-supervision information across images.
arXiv Detail & Related papers (2020-08-10T19:33:47Z)
- Unsupervised Learning of Landmarks based on Inter-Intra Subject Consistencies [72.67344725725961]
We present a novel unsupervised learning approach to image landmark discovery by incorporating the inter-subject landmark consistencies on facial images.
This is achieved via an inter-subject mapping module that transforms original subject landmarks based on an auxiliary subject-related structure.
To map the transformed images back to the original subject, the landmark detector is forced to learn spatial locations that carry consistent semantic meaning both within paired intra-subject images and across paired inter-subject images.
arXiv Detail & Related papers (2020-04-16T20:38:16Z)
- Self-Supervised Linear Motion Deblurring [112.75317069916579]
Deep convolutional neural networks are state-of-the-art for image deblurring.
We present a differentiable reblur model for self-supervised motion deblurring.
Our experiments demonstrate that self-supervised single-image deblurring is indeed feasible.
arXiv Detail & Related papers (2020-02-10T20:15:21Z)