Faster Person Re-Identification
- URL: http://arxiv.org/abs/2008.06826v1
- Date: Sun, 16 Aug 2020 03:02:49 GMT
- Title: Faster Person Re-Identification
- Authors: Guan'an Wang, Shaogang Gong, Jian Cheng and Zengguang Hou
- Abstract summary: We introduce a new solution for fast ReID by formulating a novel Coarse-to-Fine hashing code search strategy.
It uses shorter codes to coarsely rank broad matching similarities and longer codes to refine only a few top candidates for more accurate instance ReID.
Experimental results on 2 datasets show that our proposed method (CtF) is not only 8% more accurate but also 5x faster than contemporary hashing ReID methods.
- Score: 68.22203008760269
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fast person re-identification (ReID) aims to search person images quickly and
accurately. The main idea of recent fast ReID methods is the hashing algorithm,
which learns compact binary codes and performs fast Hamming distance and
counting sort. However, a very long code is needed for high accuracy (e.g.
2048), which compromises search speed. In this work, we introduce a new
solution for fast ReID by formulating a novel Coarse-to-Fine (CtF) hashing code
search strategy, which complementarily uses short and long codes, achieving
both faster speed and better accuracy. It uses shorter codes to coarsely rank
broad matching similarities and longer codes to refine only a few top
candidates for more accurate instance ReID. Specifically, we design an
All-in-One (AiO) framework together with a Distance Threshold Optimization
(DTO) algorithm. In AiO, we simultaneously learn and enhance multiple codes of
different lengths in a single model. It learns multiple codes in a pyramid
structure, and encourages shorter codes to mimic longer codes by
self-distillation. DTO solves a complex threshold search problem by a simple
optimization process, and the balance between accuracy and speed is easily
controlled by a single parameter. It formulates the optimization target as a
$F_{\beta}$ score that can be optimized by Gaussian cumulative distribution
functions. Experimental results on 2 datasets show that our proposed method
(CtF) is not only 8% more accurate but also 5x faster than contemporary hashing
ReID methods. Compared with non-hashing ReID methods, CtF is $50\times$ faster
with comparable accuracy. Code is available at
https://github.com/wangguanan/light-reid.
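For intuition, here is a minimal NumPy sketch of the coarse-to-fine search pattern the abstract describes. It is not the authors' implementation (that lives in the light-reid repository above), and the `threshold` argument merely stands in for the value the paper's DTO algorithm would select:

```python
import numpy as np

def hamming(query, gallery):
    # Codes are 0/1 arrays; per-row mismatch count = Hamming distance.
    return np.count_nonzero(gallery != query, axis=1)

def coarse_to_fine(q_short, q_long, g_short, g_long, threshold):
    """Rank with short codes, refine only near candidates with long codes."""
    # Coarse stage: short-code distances. Distances are small integers
    # bounded by the code length, so counting sort gives linear-time
    # ranking; argsort stands in for it here.
    d = hamming(q_short, g_short)
    order = np.argsort(d, kind="stable")

    # Fine stage: only candidates within the (DTO-style) distance
    # threshold pay for the expensive long-code comparison.
    k = int(np.searchsorted(d[order], threshold, side="right"))
    top, rest = order[:k], order[k:]
    refined = top[np.argsort(hamming(q_long, g_long[top]), kind="stable")]
    return np.concatenate([refined, rest])

# Toy usage: 32-bit coarse codes, 2048-bit fine codes, 1000 gallery images.
rng = np.random.default_rng(0)
g_short = rng.integers(0, 2, size=(1000, 32))
g_long = rng.integers(0, 2, size=(1000, 2048))
ranking = coarse_to_fine(g_short[0], g_long[0], g_short, g_long, threshold=8)
```

The speedup comes from the fine stage touching only the few coarse survivors; DTO picks the threshold by maximising a standard $F_{\beta} = (1+\beta^2)PR/(\beta^2 P + R)$ objective, which, per the abstract, can be optimised via Gaussian cumulative distribution functions.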
Related papers
- Rethinking Code Refinement: Learning to Judge Code Efficiency [60.04718679054704]
Large Language Models (LLMs) have demonstrated impressive capabilities in understanding and generating code.
We propose a novel method based on a code language model that is trained to judge which of two different codes is more efficient.
We validate our method on multiple programming languages with multiple refinement steps, demonstrating that the proposed method can effectively distinguish between more and less efficient versions of code.
arXiv Detail & Related papers (2024-10-29T06:17:37Z)
- Efficient Nearest Neighbor Search for Cross-Encoder Models using Matrix Factorization [60.91600465922932]
We present an approach that avoids the use of a dual-encoder for retrieval, relying solely on the cross-encoder.
Our approach provides test-time recall-vs-computational cost trade-offs superior to the current widely-used methods.
arXiv Detail & Related papers (2022-10-23T00:32:04Z)
- Revisiting Code Search in a Two-Stage Paradigm [67.02322603435628]
TOSS is a two-stage fusion code search framework.
It first uses IR-based and bi-encoder models to efficiently recall a small number of top-k code candidates.
It then re-ranks those candidates with fine-grained cross-encoders (sketched below).
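A generic sketch of that recall-then-rerank pattern, with hypothetical `bi_encode` and `cross_score` functions standing in for TOSS's actual models (in a real system the corpus embeddings would be precomputed offline):

```python
import numpy as np

def two_stage_search(query, corpus, bi_encode, cross_score, k=100):
    # Stage 1: cheap recall -- rank all candidates by embedding inner product.
    q = bi_encode(query)
    scores = np.array([q @ bi_encode(c) for c in corpus])
    top_k = np.argsort(-scores)[:k]
    # Stage 2: the expensive cross-encoder scores only the k survivors.
    return sorted(top_k, key=lambda i: -cross_score(query, corpus[i]))
```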
arXiv Detail & Related papers (2022-08-24T02:34:27Z)
- Rapid Person Re-Identification via Sub-space Consistency Regularization [51.76876061721556]
Person Re-Identification (ReID) matches pedestrians across disjoint cameras.
Existing ReID methods adopting real-value feature descriptors have achieved high accuracy, but they are low in efficiency due to the slow Euclidean distance computation.
We propose a novel Sub-space Consistency Regularization (SCR) algorithm that can speed up the ReID procedure by $0.25\times$.
arXiv Detail & Related papers (2022-07-13T02:44:05Z)
- Lazy and Fast Greedy MAP Inference for Determinantal Point Process [17.50810164319995]
This paper presents how to combine the ideas of "lazy" and "fast", which have been considered incompatible in the literature.
Our lazy and fast greedy algorithm achieves almost the same time complexity as the current best one and runs faster in practice.
arXiv Detail & Related papers (2022-06-13T07:33:32Z)
- Accelerating Code Search with Deep Hashing and Code Classification [64.3543949306799]
Code search retrieves reusable code snippets from a source code corpus based on natural language queries.
We propose a novel method CoSHC to accelerate code search with deep hashing and code classification.
arXiv Detail & Related papers (2022-03-29T07:05:30Z)
- DeSkew-LSH based Code-to-Code Recommendation Engine [3.7011129410662558]
We present Senatus, a new code-to-code recommendation engine for machine learning on source code.
At the core of Senatus is De-Skew LSH, a new locality-sensitive hashing algorithm that indexes the data for fast (sub-linear time) retrieval.
We show Senatus improves performance by 6.7% F1 and achieves 16x faster query time compared to Facebook Aroma on the task of code-to-code recommendation.
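The De-Skew details are not given here, but the locality-sensitive-hashing lookup such an engine relies on is easy to sketch; the bit-sampling scheme below is a generic illustration, not Senatus's actual algorithm:

```python
import random
from collections import defaultdict

def build_lsh_index(signatures, n_tables=8, bits_per_key=16, seed=0):
    """Hash binary signatures into tables keyed by random bit subsets."""
    rng = random.Random(seed)
    dim = len(next(iter(signatures.values())))
    tables = []
    for _ in range(n_tables):
        positions = rng.sample(range(dim), bits_per_key)
        buckets = defaultdict(list)
        for item_id, sig in signatures.items():
            buckets[tuple(sig[p] for p in positions)].append(item_id)
        tables.append((positions, buckets))
    return tables

def lsh_candidates(tables, query_sig):
    # Similar signatures agree on most bits, so they collide in some table
    # with high probability; the bucket union is the candidate set.
    found = set()
    for positions, buckets in tables:
        found.update(buckets.get(tuple(query_sig[p] for p in positions), ()))
    return found
```

Only the colliding items are ever inspected, which is where the sub-linear query time comes from.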
arXiv Detail & Related papers (2021-11-05T16:56:28Z)
- Unsupervised Multi-Index Semantic Hashing [23.169142004594434]
We propose an unsupervised hashing model that learns hash codes that are both effective and highly efficient by being optimized for multi-index hashing.
We experimentally compare MISH to state-of-the-art semantic hashing baselines in the task of document similarity search.
We find that even though multi-index hashing also improves the efficiency of the baselines compared to a linear scan, they are still upwards of 33% slower than MISH.
arXiv Detail & Related papers (2021-03-26T13:33:48Z)
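Multi-index hashing, the scheme MISH is optimised for, splits each binary code into disjoint substrings and indexes each substring in its own hash table; by the pigeonhole principle, probing exact substring matches finds every code within Hamming radius n_chunks - 1. A minimal sketch, assuming the code length divides evenly and codes are bit strings:

```python
from collections import defaultdict

def chunks(code, n):
    step = len(code) // n  # assumes len(code) is divisible by n
    return [code[i * step:(i + 1) * step] for i in range(n)]

def build_mih(codes, n_chunks=4):
    """One hash table per substring position, mapping substring -> item ids."""
    tables = [defaultdict(list) for _ in range(n_chunks)]
    for item_id, code in codes.items():
        for table, chunk in zip(tables, chunks(code, n_chunks)):
            table[chunk].append(item_id)
    return tables

def query_mih(tables, codes, query, radius):
    # If Hamming(query, x) <= radius < n_chunks, at least one substring of x
    # equals the corresponding query substring (pigeonhole), so x is found.
    cand = set()
    for table, chunk in zip(tables, chunks(query, len(tables))):
        cand.update(table.get(chunk, ()))
    return [i for i in cand
            if sum(a != b for a, b in zip(query, codes[i])) <= radius]
```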