Exploring Negatives in Contrastive Learning for Unpaired Image-to-Image Translation
- URL: http://arxiv.org/abs/2204.11018v1
- Date: Sat, 23 Apr 2022 08:31:18 GMT
- Title: Exploring Negatives in Contrastive Learning for Unpaired Image-to-Image Translation
- Authors: Yupei Lin, Sen Zhang, Tianshui Chen, Yongyi Lu, Guangping Li and Yukai Shi
- Abstract summary: We introduce a new negative Pruning technology for Unpaired image-to-image Translation (PUT) by sparsifying and ranking the patches.
The proposed algorithm is efficient and flexible, and enables the model to stably learn the essential information shared between corresponding patches.
- Score: 12.754320302262533
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unpaired image-to-image translation aims to find a mapping between the source
domain and the target domain. To alleviate the problem of the lack of
supervised labels for the source images, cycle-consistency based methods have
been proposed for image structure preservation by assuming a reversible
relationship between unpaired images. However, this assumption only uses
limited correspondence between image pairs. Recently, contrastive learning (CL)
has been used to further investigate the image correspondence in unpaired image
translation by using patch-based positive/negative learning. Patch-based
contrastive routines obtain the positives by self-similarity computation and
treat the remaining patches as negatives. This flexible learning paradigm
obtains auxiliary contextualized information at a low cost. Since the
negatives are available in very large numbers, we investigate a natural
question: are all negatives necessary for feature contrastive learning?
Unlike previous CL approaches that use negatives as much as possible, in this
paper, we study the negatives from an information-theoretic perspective and
introduce a new negative Pruning technology for Unpaired image-to-image
Translation (PUT) by sparsifying and ranking the patches. The proposed
algorithm is efficient and flexible, and enables the model to stably learn
the essential information shared between corresponding patches. By putting
quality over quantity, only a few negative patches are required to achieve
better results.
Lastly, we validate the superiority, stability, and versatility of our model
through comparative experiments.
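To make the pruning idea concrete, below is a minimal PyTorch sketch of a PatchNCE-style contrastive loss with negative pruning. The specific criterion shown (ranking negatives by their similarity to the query and keeping only the top-k hardest) is an assumption for illustration, not necessarily the paper's exact sparsification and ranking scheme, and all names are hypothetical.

```python
import torch
import torch.nn.functional as F

def pruned_patch_nce(query, positive, negatives, k=16, tau=0.07):
    """Sketch of a PatchNCE-style loss that prunes negatives.

    query:     (B, D)    anchor patch features from the translated image
    positive:  (B, D)    corresponding patch features from the source image
    negatives: (B, N, D) remaining source patches treated as negatives
    k:         number of negatives kept after ranking (k <= N); the point
               of pruning is that a few good negatives suffice
    """
    query = F.normalize(query, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    l_pos = (query * positive).sum(dim=-1, keepdim=True)           # (B, 1)
    l_neg = torch.bmm(negatives, query.unsqueeze(-1)).squeeze(-1)  # (B, N)

    # Prune: rank negatives by similarity to the anchor and keep the
    # top-k hardest (an assumed criterion; PUT's ranking may differ).
    l_neg, _ = l_neg.topk(k, dim=-1)

    logits = torch.cat([l_pos, l_neg], dim=-1) / tau               # (B, 1+k)
    labels = torch.zeros(query.size(0), dtype=torch.long, device=query.device)
    return F.cross_entropy(logits, labels)
```

Compared with using all N negatives, the softmax denominator now sums over only k + 1 terms, which is where the "quality over quantity" trade-off shows up.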
Related papers
- Your Negative May not Be True Negative: Boosting Image-Text Matching with False Negative Elimination [62.18768931714238]
We propose a novel False Negative Elimination (FNE) strategy to select negatives via sampling.
The results demonstrate the superiority of the proposed strategy.
arXiv Detail & Related papers (2023-08-08T16:31:43Z)
- Cross-Modal Contrastive Learning for Robust Reasoning in VQA [76.1596796687494]
Multi-modal reasoning in visual question answering (VQA) has witnessed rapid progress recently.
Most reasoning models heavily rely on shortcuts learned from training data.
We propose a simple but effective cross-modal contrastive learning strategy to get rid of shortcut reasoning.
arXiv Detail & Related papers (2022-11-21T05:32:24Z)
- Modulated Contrast for Versatile Image Synthesis [60.304183493234376]
MoNCE is a versatile metric that introduces image contrast to learn a calibrated measure of multifaceted inter-image distances.
We introduce optimal transport in MoNCE to modulate the pushing force of negative samples collaboratively across multiple contrastive objectives.
arXiv Detail & Related papers (2022-03-17T14:03:46Z)
- Robust Contrastive Learning against Noisy Views [79.71880076439297]
We propose a new contrastive loss function that is robust against noisy views.
We show that our approach provides consistent improvements over the state of the art on image, video, and graph contrastive learning benchmarks.
arXiv Detail & Related papers (2022-01-12T05:24:29Z)
- Robust Contrastive Learning Using Negative Samples with Diminished Semantics [23.38896719740166]
We show that by generating carefully designed negative samples, contrastive learning can learn more robust representations.
We develop two methods, texture-based and patch-based augmentations, to generate negative samples; a sketch of the patch-based variant follows this list.
We also analyze our method and the generated texture-based samples, showing that texture features are indispensable in classifying particular ImageNet classes.
arXiv Detail & Related papers (2021-10-27T05:38:00Z)
- Contrastive Unpaired Translation using Focal Loss for Patch Classification [0.0]
Contrastive Unpaired Translation (CUT) is a recent method for image-to-image translation.
We show that using focal loss in place of the cross-entropy loss within the PatchNCE loss can improve the model's performance; a focal-loss sketch follows this list.
arXiv Detail & Related papers (2021-09-25T20:22:33Z)
- Boosting Contrastive Self-Supervised Learning with False Negative Cancellation [40.71224235172881]
A fundamental problem in contrastive learning is mitigating the effects of false negatives.
We propose novel approaches to identify false negatives, as well as two strategies to mitigate their effect.
Our method exhibits consistent improvements over existing contrastive learning-based methods.
arXiv Detail & Related papers (2020-11-23T22:17:21Z)
- Delving into Inter-Image Invariance for Unsupervised Visual Representations [108.33534231219464]
We present a study to better understand the role of inter-image invariance learning.
Online labels converge faster than offline labels.
Semi-hard negative samples are more reliable and unbiased than hard negative samples.
arXiv Detail & Related papers (2020-08-26T17:44:23Z)
- Contrastive Learning for Unpaired Image-to-Image Translation [64.47477071705866]
In image-to-image translation, each patch in the output should reflect the content of the corresponding patch in the input, independent of domain.
We propose a framework based on contrastive learning to maximize mutual information between the two.
We demonstrate that our framework enables one-sided translation in the unpaired image-to-image translation setting, while improving quality and reducing training time.
arXiv Detail & Related papers (2020-07-30T17:59:58Z)
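For the diminished-semantics negatives mentioned above, the patch-based augmentation can be sketched as a patch shuffle: permuting non-overlapping patches preserves local texture statistics while destroying global object structure, so the result shares low-level features with the anchor but little semantics. This is one plausible reading of that paper's patch-based augmentation; the function below is illustrative.

```python
import torch

def patch_shuffled_negative(img, patch=16):
    """Build a semantics-diminished negative by shuffling patches.

    img: (C, H, W) tensor with H and W divisible by `patch`.
    """
    c, h, w = img.shape
    gh, gw = h // patch, w // patch
    # Split into a (gh x gw) grid of patches: (gh*gw, C, patch, patch).
    patches = img.reshape(c, gh, patch, gw, patch).permute(1, 3, 0, 2, 4)
    patches = patches.reshape(gh * gw, c, patch, patch)
    # Randomly permute the grid cells.
    patches = patches[torch.randperm(gh * gw)]
    # Reassemble the shuffled grid back into an image.
    out = patches.reshape(gh, gw, c, patch, patch).permute(2, 0, 3, 1, 4)
    return out.reshape(c, h, w)
```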
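As referenced in the focal-loss entry above, swapping the cross-entropy term of PatchNCE for a focal term is simple to sketch: the (1 - p)^gamma factor down-weights patches the model already classifies confidently. Shapes and names follow the pruning sketch in the abstract section and are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn.functional as F

def focal_patch_nce(query, positive, negatives, tau=0.07, gamma=2.0):
    """Sketch: PatchNCE logits scored with a focal loss instead of
    plain cross-entropy. The positive always sits at class index 0.
    """
    query = F.normalize(query, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    l_pos = (query * positive).sum(dim=-1, keepdim=True)           # (B, 1)
    l_neg = torch.bmm(negatives, query.unsqueeze(-1)).squeeze(-1)  # (B, N)
    logits = torch.cat([l_pos, l_neg], dim=-1) / tau               # (B, 1+N)

    log_pt = F.log_softmax(logits, dim=-1)[:, 0]  # log-prob of the positive
    pt = log_pt.exp()
    # Focal modulation: easy patches (pt near 1) contribute almost nothing.
    return (-(1.0 - pt).pow(gamma) * log_pt).mean()
```

With gamma = 0 this reduces exactly to the standard PatchNCE cross-entropy, which makes the focal variant a drop-in replacement.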