Informative Sample-Aware Proxy for Deep Metric Learning
- URL: http://arxiv.org/abs/2211.10382v1
- Date: Fri, 18 Nov 2022 17:25:25 GMT
- Title: Informative Sample-Aware Proxy for Deep Metric Learning
- Authors: Aoyu Li, Ikuro Sato, Kohta Ishikawa, Rei Kawakami, Rio Yokota
- Abstract summary: In existing methods, a relatively small number of samples can produce large gradient magnitudes.
We propose a novel proxy-based method called Informative Sample-Aware Proxy (Proxy-ISA).
It modifies a gradient weighting factor for each sample using a scheduled threshold function, so that the model is more sensitive to the informative samples.
- Score: 7.624717642858549
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Among various supervised deep metric learning methods, proxy-based approaches
have achieved high retrieval accuracies. Proxies, which are
class-representative points in an embedding space, receive updates based on
proxy-sample similarities in a similar manner to sample representations. In
existing methods, a relatively small number of samples can produce large
gradient magnitudes (i.e., hard samples), and a relatively large number of
samples can produce small gradient magnitudes (i.e., easy samples); these can
play a major part in updates. Assuming that acquiring too much sensitivity to
such extreme sets of samples would deteriorate the generalizability of a
method, we propose a novel proxy-based method called Informative Sample-Aware
Proxy (Proxy-ISA), which directly modifies a gradient weighting factor for each
sample using a scheduled threshold function, so that the model is more
sensitive to the informative samples. Extensive experiments on the
CUB-200-2011, Cars-196, Stanford Online Products and In-shop Clothes Retrieval
datasets demonstrate the superiority of Proxy-ISA compared with the
state-of-the-art methods.
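The weighting mechanism described in the abstract can be illustrated with a minimal sketch. Note that the abstract does not specify the exact form of the weighting factor or the threshold schedule, so `scheduled_weight`, the linear schedule, and the hard 0/1 admissible band below are all hypothetical choices for illustration only.

```python
import math

def proxy_similarity(embedding, proxy):
    # Cosine similarity between a sample embedding and a class proxy,
    # the quantity proxy-based losses are typically built on.
    dot = sum(e * p for e, p in zip(embedding, proxy))
    norm_e = math.sqrt(sum(e * e for e in embedding))
    norm_p = math.sqrt(sum(p * p for p in proxy))
    return dot / (norm_e * norm_p)

def scheduled_weight(hardness, epoch, total_epochs, lo0=0.1, hi0=0.9):
    # Hypothetical scheduled threshold function: the admissible hardness
    # band widens over training, so the extreme samples (very easy and
    # very hard) are down-weighted (here, zeroed) early and admitted
    # only gradually.
    t = epoch / total_epochs
    lo = lo0 * (1.0 - t)          # lower threshold shrinks toward 0
    hi = hi0 + (1.0 - hi0) * t    # upper threshold grows toward 1
    return 1.0 if lo <= hardness <= hi else 0.0
```

In a training loop, each sample's gradient contribution would be scaled by its weight, so updates are driven by the moderately hard, informative samples rather than by the two extremes.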
Related papers
- When Can Proxies Improve the Sample Complexity of Preference Learning? [63.660855773627524]
We address the problem of reward hacking, where maximising a proxy reward does not necessarily increase the true reward.
We outline a set of sufficient conditions on proxy feedback that, if satisfied, indicate that proxy data can provably improve the sample complexity of learning the ground truth policy.
arXiv Detail & Related papers (2024-12-21T04:07:17Z)
- Data Pruning via Moving-one-Sample-out [61.45441981346064]
We propose a novel data-pruning approach called moving-one-sample-out (MoSo)
MoSo aims to identify and remove the least informative samples from the training set.
Experimental results demonstrate that MoSo effectively mitigates severe performance degradation at high pruning ratios.
arXiv Detail & Related papers (2023-10-23T08:00:03Z)
- Intra-class Adaptive Augmentation with Neighbor Correction for Deep Metric Learning [99.14132861655223]
We propose a novel intra-class adaptive augmentation (IAA) framework for deep metric learning.
We estimate intra-class variations for every class and generate adaptive synthetic samples to support hard sample mining.
Our method significantly outperforms state-of-the-art methods, improving retrieval performance by 3%-6%.
arXiv Detail & Related papers (2022-11-29T14:52:38Z)
- POODLE: Improving Few-shot Learning via Penalizing Out-of-Distribution Samples [19.311470287767385]
We propose to use out-of-distribution samples, i.e., unlabeled samples coming from outside the target classes, to improve few-shot learning.
Our approach is simple to implement, agnostic to feature extractors, lightweight without any additional cost for pre-training, and applicable to both inductive and transductive settings.
arXiv Detail & Related papers (2022-06-08T18:59:21Z)
- Oops I Took A Gradient: Scalable Sampling for Discrete Distributions [53.3142984019796]
We show that this approach outperforms generic samplers in a number of difficult settings.
We also demonstrate the use of our improved sampler for training deep energy-based models on high dimensional discrete data.
arXiv Detail & Related papers (2021-02-08T20:08:50Z)
- Fewer is More: A Deep Graph Metric Learning Perspective Using Fewer Proxies [65.92826041406802]
We propose a Proxy-based deep Graph Metric Learning approach from the perspective of graph classification.
Multiple global proxies are leveraged to collectively approximate the original data points for each class.
We design a novel reverse label propagation algorithm, by which the neighbor relationships are adjusted according to ground-truth labels.
arXiv Detail & Related papers (2020-10-26T14:52:42Z)
- Multi-Scale Positive Sample Refinement for Few-Shot Object Detection [61.60255654558682]
Few-shot object detection (FSOD) helps detectors adapt to unseen classes with few training instances.
We propose a Multi-scale Positive Sample Refinement (MPSR) approach to enrich object scales in FSOD.
MPSR generates multi-scale positive samples as object pyramids and refines the prediction at various scales.
arXiv Detail & Related papers (2020-07-18T09:48:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.