Smooth-AP: Smoothing the Path Towards Large-Scale Image Retrieval
- URL: http://arxiv.org/abs/2007.12163v2
- Date: Tue, 8 Sep 2020 18:02:12 GMT
- Title: Smooth-AP: Smoothing the Path Towards Large-Scale Image Retrieval
- Authors: Andrew Brown, Weidi Xie, Vicky Kalogeiton, Andrew Zisserman
- Abstract summary: Smooth-AP is a plug-and-play objective function that allows for end-to-end training of deep networks.
We apply Smooth-AP to standard retrieval benchmarks: Stanford Online Products and VehicleID.
We also evaluate on larger-scale datasets: INaturalist for fine-grained category retrieval, VGGFace2 and IJB-C for face retrieval.
- Score: 94.73459295405507
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Optimising a ranking-based metric, such as Average Precision (AP), is
notoriously challenging due to the fact that it is non-differentiable, and
hence cannot be optimised directly using gradient-descent methods. To this end,
we introduce an objective that optimises instead a smoothed approximation of
AP, coined Smooth-AP. Smooth-AP is a plug-and-play objective function that
allows for end-to-end training of deep networks with a simple and elegant
implementation. We also present an analysis for why directly optimising the
ranking based metric of AP offers benefits over other deep metric learning
losses. We apply Smooth-AP to standard retrieval benchmarks: Stanford Online
Products and VehicleID, and also evaluate on larger-scale datasets: INaturalist
for fine-grained category retrieval, and VGGFace2 and IJB-C for face retrieval.
In all cases, we improve the performance over the state-of-the-art, especially
for larger-scale datasets, thus demonstrating the effectiveness and scalability
of Smooth-AP to real-world scenarios.
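The abstract's core idea is that the Heaviside step function used to compute ranks (and hence AP) is non-differentiable, so Smooth-AP replaces it with a temperature-controlled sigmoid. The following minimal NumPy sketch illustrates that idea for a single query; it is reconstructed from the abstract only, not the authors' released implementation, and the function names, default temperature, and single-query formulation are assumptions.

```python
import numpy as np

def sigmoid(x, tau=0.01):
    # Smooth relaxation of the Heaviside step; smaller tau -> sharper step.
    return 1.0 / (1.0 + np.exp(-x / tau))

def smooth_ap(scores, labels, tau=0.01):
    """Smoothed Average Precision for one query (illustrative sketch).

    scores: similarity of each gallery item to the query
    labels: 1 for positives (same class as the query), 0 for negatives
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=float)
    pos = labels == 1
    # Pairwise score differences d[i, j] = s_j - s_i: positive where j outranks i.
    d = scores[None, :] - scores[:, None]
    sg = sigmoid(d, tau)
    np.fill_diagonal(sg, 0.0)  # an item does not count toward its own rank
    # Smoothed rank of each item among all items, and among positives only.
    rank_all = 1.0 + sg.sum(axis=1)
    rank_pos = 1.0 + (sg * labels[None, :]).sum(axis=1)
    # AP = mean over positives of (positive rank / overall rank).
    return (rank_pos[pos] / rank_all[pos]).mean()
```

With a small temperature the smoothed value tracks true AP closely: for scores [0.9, 0.2, 0.8, 0.1] with the first two items positive, the positives sit at ranks 1 and 3, so AP = (1/1 + 2/3)/2 ≈ 0.833. In a training loop one would maximize this quantity (or minimize 1 − AP) with an autodiff framework, since every operation above is differentiable.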
Related papers
- Reward-Augmented Data Enhances Direct Preference Alignment of LLMs [63.32585910975191]
We introduce reward-conditioned Large Language Models (LLMs) that learn from the entire spectrum of response quality within the dataset.
We propose an effective yet simple data relabeling method that conditions the preference pairs on quality scores to construct a reward-augmented dataset.
arXiv Detail & Related papers (2024-10-10T16:01:51Z)
- Adaptive Preference Scaling for Reinforcement Learning with Human Feedback [103.36048042664768]
Reinforcement learning from human feedback (RLHF) is a prevalent approach to align AI systems with human values.
We propose a novel adaptive preference loss, underpinned by distributionally robust optimization (DRO).
Our method is versatile and can be readily adapted to various preference optimization frameworks.
arXiv Detail & Related papers (2024-06-04T20:33:22Z)
- Adaptive Neural Ranking Framework: Toward Maximized Business Goal for Cascade Ranking Systems [33.46891569350896]
Cascade ranking is widely used for large-scale top-k selection problems in online advertising and recommendation systems.
Previous works on learning-to-rank usually focus on letting the model learn the complete order or top-k order.
We name this method the Adaptive Neural Ranking Framework (ARF).
arXiv Detail & Related papers (2023-10-16T14:43:02Z)
- Learning Regions of Interest for Bayesian Optimization with Adaptive Level-Set Estimation [84.0621253654014]
We propose a framework, called BALLET, which adaptively filters for a high-confidence region of interest.
We show theoretically that BALLET can efficiently shrink the search space, and can exhibit a tighter regret bound than standard BO.
arXiv Detail & Related papers (2023-07-25T09:45:47Z)
- Revisiting AP Loss for Dense Object Detection: Adaptive Ranking Pair Selection [19.940491797959407]
In this work, we revisit the average precision (AP) loss and reveal that the crucial element is the selection of ranking pairs between positive and negative samples.
We propose two strategies to improve the AP loss. The first is a novel Adaptive Pairwise Error (APE) loss that focuses on ranking pairs in both positive and negative samples.
Experiments conducted on the MSCOCO dataset support our analysis and demonstrate the superiority of our proposed method compared with current classification and ranking losses.
arXiv Detail & Related papers (2022-07-25T10:33:06Z)
- Large-scale Optimization of Partial AUC in a Range of False Positive Rates [51.12047280149546]
The area under the ROC curve (AUC) is one of the most widely used performance measures for classification models in machine learning.
We develop an efficient approximated gradient descent method based on recent practical envelope smoothing technique.
Our proposed algorithm can also be used to minimize the sum of some ranked range loss, which also lacks efficient solvers.
arXiv Detail & Related papers (2022-03-03T03:46:18Z)
- Robust and Decomposable Average Precision for Image Retrieval [0.0]
In image retrieval, standard evaluation metrics rely on score ranking, e.g. average precision (AP).
In this paper, we introduce a method for robust and decomposable average precision (ROADMAP)
We address two major challenges for end-to-end training of deep neural networks with AP: non-differentiability and non-decomposability.
arXiv Detail & Related papers (2021-10-01T12:00:43Z)
- Stochastic Optimization of Areas Under Precision-Recall Curves with Provable Convergence [66.83161885378192]
Area under ROC (AUROC) and precision-recall curves (AUPRC) are common metrics for evaluating classification performance for imbalanced problems.
We propose a technical method to optimize AUPRC for deep learning.
arXiv Detail & Related papers (2021-04-18T06:22:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.