Hard negative examples are hard, but useful
- URL: http://arxiv.org/abs/2007.12749v2
- Date: Thu, 25 Feb 2021 22:40:51 GMT
- Title: Hard negative examples are hard, but useful
- Authors: Hong Xuan, Abby Stylianou, Xiaotong Liu, Robert Pless
- Abstract summary: We characterize the space of triplets and derive why hard negatives make triplet loss training fail.
We offer a simple fix to the loss function and show that, with this fix, optimizing with hard negative examples becomes feasible.
This leads to more generalizable features, and image retrieval results that outperform state of the art for datasets with high intra-class variance.
- Score: 12.120041613482558
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Triplet loss is an extremely common approach to distance metric learning.
Representations of images from the same class are optimized to be mapped closer
together in an embedding space than representations of images from different
classes. Much work on triplet losses focuses on selecting the most useful
triplets of images to consider, with strategies that select dissimilar examples
from the same class or similar examples from different classes. The consensus
of previous research is that optimizing with the *hardest* negative
examples leads to bad training behavior. That's a problem: these hardest
negatives are literally the cases where the distance metric fails to capture
semantic similarity. In this paper, we characterize the space of triplets and
derive why hard negatives make triplet loss training fail. We offer a simple
fix to the loss function and show that, with this fix, optimizing with hard
negative examples becomes feasible. This leads to more generalizable features,
and image retrieval results that outperform state of the art for datasets with
high intra-class variance.
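The abstract describes training with triplet loss and hard negative mining. As an illustration of the baseline setup (not the paper's proposed fix, which is not detailed here), a minimal NumPy sketch of the standard triplet loss with batch-hard negative selection:

```python
# Sketch of the standard triplet-loss setup with "batch-hard" negative
# selection. This illustrates the baseline the paper builds on; it is not
# the paper's proposed modification to the loss.
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss: the positive should be closer to the anchor
    than the negative is, by at least `margin`."""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(d_ap - d_an + margin, 0.0)

def hardest_negative(anchor, candidates):
    """Hard negative mining: pick the negative closest to the anchor,
    i.e. the case where the metric most confuses the classes."""
    dists = np.linalg.norm(candidates - anchor, axis=1)
    return candidates[np.argmin(dists)]

# Toy usage: the hardest negative sits closer to the anchor than the
# positive does, so the loss is large and its gradient is steep.
anchor = np.array([0.0, 0.0])
positive = np.array([0.1, 0.0])
negatives = np.array([[0.05, 0.0], [2.0, 2.0]])
neg = hardest_negative(anchor, negatives)       # picks [0.05, 0.0]
loss = triplet_loss(anchor, positive, neg)      # 0.1 - 0.05 + 0.2 = 0.25
```

Triplets built from such hardest negatives violate the margin most strongly, which is exactly the regime the abstract says causes bad training behavior under the unmodified loss.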
Related papers
- Active Mining Sample Pair Semantics for Image-text Matching [6.370886833310617]
This paper proposes a novel image-text matching model, called the Active Mining Sample Pair Semantics image-text matching model (AMSPS).
Unlike the single semantic learning mode of the commonsense learning model with a triplet loss function, AMSPS takes an active learning approach.
arXiv Detail & Related papers (2023-11-09T15:03:57Z)
- Graph Self-Contrast Representation Learning [14.519482062111507]
We propose a novel graph self-contrast framework GraphSC.
It only uses one positive and one negative sample, and chooses triplet loss as the objective.
We conduct extensive experiments to evaluate the performance of GraphSC against 19 other state-of-the-art methods.
arXiv Detail & Related papers (2023-09-05T15:13:48Z)
- Your Negative May not Be True Negative: Boosting Image-Text Matching with False Negative Elimination [62.18768931714238]
We propose a novel False Negative Elimination (FNE) strategy to select negatives via sampling.
The results demonstrate the superiority of our proposed false negative elimination strategy.
arXiv Detail & Related papers (2023-08-08T16:31:43Z)
- Do Lessons from Metric Learning Generalize to Image-Caption Retrieval? [67.45267657995748]
The triplet loss with semi-hard negatives has become the de facto choice for image-caption retrieval (ICR) methods that are optimized from scratch.
Recent progress in metric learning has given rise to new loss functions that outperform the triplet loss on tasks such as image retrieval and representation learning.
We ask whether these findings generalize to the setting of ICR by comparing three loss functions on two ICR methods.
arXiv Detail & Related papers (2022-02-14T15:18:00Z)
- Instance-wise Hard Negative Example Generation for Contrastive Learning in Unpaired Image-to-Image Translation [102.99799162482283]
We present instance-wise hard Negative Example Generation for Contrastive learning in Unpaired image-to-image Translation (NEGCUT).
Specifically, we train a generator to produce negative examples online. The generator is novel from two perspectives: 1) it is instance-wise, meaning the generated examples are based on the input image, and 2) it can generate hard negative examples, since it is trained with an adversarial loss.
arXiv Detail & Related papers (2021-08-10T09:44:59Z)
- ISD: Self-Supervised Learning by Iterative Similarity Distillation [39.60300771234578]
We introduce a self-supervised learning algorithm where we use a soft similarity for the negative images rather than a binary distinction between positive and negative pairs.
Our method achieves better results compared to state-of-the-art models like BYOL and MoCo on transfer learning settings.
arXiv Detail & Related papers (2020-12-16T20:50:17Z)
- Contrastive Learning with Hard Negative Samples [80.12117639845678]
We develop a new family of unsupervised sampling methods for selecting hard negative samples.
A limiting case of this sampling results in a representation that tightly clusters each class, and pushes different classes as far apart as possible.
The proposed method improves downstream performance across multiple modalities, requires only a few additional lines of code to implement, and introduces no computational overhead.
arXiv Detail & Related papers (2020-10-09T14:18:53Z)
- Delving into Inter-Image Invariance for Unsupervised Visual Representations [108.33534231219464]
We present a study to better understand the role of inter-image invariance learning.
Online labels converge faster than offline labels.
Semi-hard negative samples are more reliable and unbiased than hard negative samples.
arXiv Detail & Related papers (2020-08-26T17:44:23Z)
- Adaptive Offline Quintuplet Loss for Image-Text Matching [102.50814151323965]
Existing image-text matching approaches typically leverage triplet loss with online hard negatives to train the model.
We propose solutions by sampling negatives offline from the whole training set.
We evaluate the proposed training approach on three state-of-the-art image-text models on the MS-COCO and Flickr30K datasets.
arXiv Detail & Related papers (2020-03-07T22:09:11Z)
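Several of the entries above distinguish semi-hard from hard negatives (e.g. the image-caption retrieval and inter-image invariance papers). A minimal sketch of the standard FaceNet-style semi-hard selection rule, assuming distances are precomputed (function and variable names here are illustrative):

```python
# Semi-hard negative selection: a semi-hard negative is farther from the
# anchor than the positive, but still inside the margin:
#     d(a, p) < d(a, n) < d(a, p) + margin
# Such negatives give a nonzero loss without the instability often
# attributed to the very hardest negatives.
import numpy as np

def semi_hard_negative(d_ap, d_an, margin=0.2):
    """Return the index of a semi-hard negative, or None if none exists.

    d_ap -- distance from the anchor to its positive (scalar)
    d_an -- distances from the anchor to each candidate negative (1-D array)
    """
    mask = (d_an > d_ap) & (d_an < d_ap + margin)
    if not mask.any():
        return None
    # Among semi-hard candidates, take the closest (hardest of the semi-hard).
    candidates = np.where(mask)[0]
    return candidates[np.argmin(d_an[candidates])]

# Toy usage: with d(a, p) = 0.5 and margin 0.2, only negatives with
# distance in (0.5, 0.7) qualify; index 1 (distance 0.6) is selected.
idx = semi_hard_negative(0.5, np.array([0.3, 0.6, 0.65, 1.5]))
```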
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.