Confidence-Aware Active Feedback for Efficient Instance Search
- URL: http://arxiv.org/abs/2110.12255v1
- Date: Sat, 23 Oct 2021 16:14:03 GMT
- Title: Confidence-Aware Active Feedback for Efficient Instance Search
- Authors: Yue Zhang, Chao Liang, Longxiang Jiang
- Abstract summary: Relevance feedback is widely used in instance search (INS) tasks to further refine imperfect ranking results.
We propose a confidence-aware active feedback (CAAF) method that can efficiently select the most valuable feedback candidates.
In particular, CAAF outperforms the first-place record in the public large-scale video INS evaluation of TRECVID 2021.
- Score: 21.8172170825049
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Relevance feedback is widely used in instance search (INS) tasks to further
refine imperfect ranking results, but it often comes with low interaction
efficiency. Active learning (AL) techniques have achieved great success in
improving annotation efficiency in classification tasks. However, given the
diversity of irrelevant samples and the class imbalance in INS tasks, existing AL
methods cannot always select the most suitable feedback candidates for INS
problems. In addition, they are often too computationally complex to be applied
in interactive INS scenarios. To address the above problems, we propose a
confidence-aware active feedback (CAAF) method that can efficiently select the
most valuable feedback candidates to improve the re-ranking performance.
Specifically, inspired by the explicit sample difficulty modeling in self-paced
learning, we utilize a pairwise manifold ranking loss to evaluate the ranking
confidence of each unlabeled sample, and formulate the INS process as a
confidence-weighted manifold ranking problem. Furthermore, we introduce an
approximate optimization scheme to simplify the solution from constrained QP
problems to closed-form expressions, and select only the top-K samples in
the initial ranking list for INS, so that CAAF is able to handle large-scale
INS tasks in a short period of time. Extensive experiments on both image and
video INS tasks demonstrate the effectiveness of the proposed CAAF method. In
particular, CAAF outperforms the first-place record in the public large-scale
video INS evaluation of TRECVID 2021.
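
For intuition, the following is a minimal, hypothetical Python sketch of the overall pipeline described in the abstract: re-rank only the top-K items of the initial list with a manifold-ranking update, then pick the least confident items as feedback candidates. It uses a standard iterative manifold-ranking update and a simple confidence proxy in place of the paper's pairwise ranking loss and closed-form confidence-weighted solver, so all function names and parameters (caaf_rerank_sketch, alpha, n_iters, etc.) are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def caaf_rerank_sketch(sim, init_scores, top_k=100, n_feedback=5,
                       alpha=0.99, n_iters=20):
    """Illustrative sketch only: standard manifold ranking on the top-K
    gallery items plus a simple confidence proxy for feedback selection.
    This is NOT the paper's closed-form, confidence-weighted solver."""
    # Keep only the top-K items of the initial ranking, as CAAF does, so the
    # re-ranking cost stays low enough for interactive use.
    top = np.argsort(-init_scores)[:top_k]
    W = sim[np.ix_(top, top)]                        # pairwise affinities among top-K
    d = W.sum(axis=1) + 1e-12
    S = W / np.sqrt(np.outer(d, d))                  # symmetrically normalized affinity
    y = init_scores[top].copy()                      # query-to-item seed scores
    f = y.copy()
    for _ in range(n_iters):                         # iterative manifold-ranking update
        f = alpha * (S @ f) + (1 - alpha) * y
    # Confidence proxy: scores near the middle of the range are ambiguous,
    # so those items are the most informative ones to ask the user about.
    confidence = np.abs(f - np.median(f))
    ask = top[np.argsort(confidence)[:n_feedback]]   # least-confident feedback candidates
    reranked = top[np.argsort(-f)]                   # refined ranking of the top-K items
    return reranked, ask
```

In an interactive loop, the user's labels for `ask` would be written back into `y` (e.g. 1 for relevant, 0 for irrelevant) before re-running the update; this feedback round is what CAAF aims to make as informative as possible per annotation.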
Related papers
- Reward-Augmented Data Enhances Direct Preference Alignment of LLMs [63.32585910975191]
We introduce reward-conditioned Large Language Models (LLMs) that learn from the entire spectrum of response quality within the dataset.
We propose an effective yet simple data relabeling method that conditions the preference pairs on quality scores to construct a reward-augmented dataset.
arXiv Detail & Related papers (2024-10-10T16:01:51Z)
- Multi-Agent Reinforcement Learning from Human Feedback: Data Coverage and Algorithmic Techniques [65.55451717632317]
We study Multi-Agent Reinforcement Learning from Human Feedback (MARLHF), exploring both theoretical foundations and empirical validations.
We define the task as identifying Nash equilibrium from a preference-only offline dataset in general-sum games.
Our findings underscore the multifaceted approach required for MARLHF, paving the way for effective preference-based multi-agent systems.
arXiv Detail & Related papers (2024-09-01T13:14:41Z)
- On Speeding Up Language Model Evaluation [48.51924035873411]
Development of prompt-based methods with Large Language Models (LLMs) requires making numerous decisions.
We propose a novel method to address this challenge.
We show that it can identify the top-performing method using only 5-15% of the typically needed resources.
arXiv Detail & Related papers (2024-07-08T17:48:42Z)
- Uncertainty Aware Learning for Language Model Alignment [97.36361196793929]
We propose uncertainty-aware learning (UAL) to improve the model alignment of different task scenarios.
We implement UAL in a simple fashion -- adaptively setting the label smoothing value of training according to the uncertainty of individual samples.
Experiments on widely used benchmarks demonstrate that our UAL significantly and consistently outperforms standard supervised fine-tuning.
arXiv Detail & Related papers (2024-06-07T11:37:45Z)
- CoFInAl: Enhancing Action Quality Assessment with Coarse-to-Fine Instruction Alignment [38.12600984070689]
Action Quality Assessment (AQA) is pivotal for quantifying actions across domains like sports and medical care.
Existing methods often rely on pre-trained backbones from large-scale action recognition datasets to boost performance on smaller AQA datasets.
We propose Coarse-to-Fine Instruction Alignment (CoFInAl) to align AQA with broader pre-trained tasks by reformulating it as a coarse-to-fine classification task.
arXiv Detail & Related papers (2024-04-22T09:03:21Z)
- Debiasing Multimodal Large Language Models [61.6896704217147]
Large Vision-Language Models (LVLMs) have become indispensable tools in computer vision and natural language processing.
Our investigation reveals a noteworthy bias in the generated content, where the output is primarily influenced by the prior of the underlying Large Language Model (LLM) rather than by the input image.
To rectify these biases and redirect the model's focus toward vision information, we introduce two simple, training-free strategies.
arXiv Detail & Related papers (2024-03-08T12:35:07Z)
- PREFER: Prompt Ensemble Learning via Feedback-Reflect-Refine [24.888093229577965]
We propose a simple, universal, and automatic method named PREFER to address the stated limitations.
Our PREFER achieves state-of-the-art performance in multiple types of tasks by a significant margin.
arXiv Detail & Related papers (2023-08-23T09:46:37Z)
- Planning for Sample Efficient Imitation Learning [52.44953015011569]
Current imitation algorithms struggle to achieve high performance and high in-environment sample efficiency simultaneously.
We propose EfficientImitate, a planning-based imitation learning method that can achieve high in-environment sample efficiency and performance simultaneously.
Experimental results show that EI achieves state-of-the-art results in performance and sample efficiency.
arXiv Detail & Related papers (2022-10-18T05:19:26Z)
- Feature Diversity Learning with Sample Dropout for Unsupervised Domain Adaptive Person Re-identification [0.0]
This paper proposes a new approach to learning feature representations with better generalization ability by limiting noisy pseudo labels.
We put forward a brand-new method referred to as Feature Diversity Learning (FDL) under the classic mutual-teaching architecture.
Experimental results show that our proposed FDL-SD achieves the state-of-the-art performance on multiple benchmark datasets.
arXiv Detail & Related papers (2022-01-25T10:10:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.