Keypoint-Aligned Embeddings for Image Retrieval and Re-identification
- URL: http://arxiv.org/abs/2008.11368v1
- Date: Wed, 26 Aug 2020 03:56:37 GMT
- Title: Keypoint-Aligned Embeddings for Image Retrieval and Re-identification
- Authors: Olga Moskvyak, Frederic Maire, Feras Dayoub and Mahsa Baktashmotlagh
- Abstract summary: We propose to align the image embedding with a predefined order of the keypoints.
The proposed keypoint-aligned embeddings model (KAE-Net) learns part-level features via multi-task learning.
It achieves state-of-the-art performance on the benchmark datasets CUB-200-2011, Cars196 and VeRi-776.
- Score: 15.356786390476591
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning embeddings that are invariant to the pose of the object is crucial
in visual image retrieval and re-identification. Existing approaches to person,
vehicle, or animal re-identification suffer from high intra-class variance caused by
deformable shapes and differing camera viewpoints. To overcome this limitation, we
propose to align the image embedding with a predefined order of the keypoints. The
proposed keypoint-aligned embeddings model (KAE-Net) learns part-level features via
multi-task learning guided by keypoint locations. More specifically, KAE-Net extracts
the channels of a feature map that respond to a specific keypoint by learning the
auxiliary task of reconstructing that keypoint's heatmap. KAE-Net is compact, generic
and conceptually simple. It achieves state-of-the-art performance on the benchmark
datasets CUB-200-2011, Cars196 and VeRi-776 for retrieval and re-identification tasks.
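The abstract does not spell out the architecture, but the mechanism it describes (dedicating a group of embedding channels to each keypoint and training that group to reconstruct the keypoint's heatmap as an auxiliary task) can be sketched roughly as below. This is a minimal PyTorch-style illustration, not the authors' implementation; the backbone, channel grouping, decoder and loss weighting are all assumptions.

```python
# Hypothetical sketch of keypoint-aligned embedding learning (not the authors' code).
# Assumption: a feature map with C = K * G channels, where the G channels of group k
# are tied to keypoint k by an auxiliary heatmap-reconstruction loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class KeypointAlignedNet(nn.Module):
    def __init__(self, num_keypoints=15, group_channels=16, heatmap_size=64):
        super().__init__()
        self.K, self.G = num_keypoints, group_channels
        c = num_keypoints * group_channels
        # Small stand-in backbone; the paper would use a stronger encoder.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, c, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Shared decoder: maps one G-channel group to a single-keypoint heatmap.
        self.decoder = nn.Sequential(
            nn.Conv2d(group_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),
            nn.Upsample(size=(heatmap_size, heatmap_size), mode="bilinear",
                        align_corners=False),
        )

    def forward(self, images):
        fmap = self.backbone(images)                            # (B, K*G, h, w)
        embedding = F.normalize(fmap.mean(dim=(2, 3)), dim=1)   # pooled, L2-normalised
        groups = fmap.view(fmap.size(0), self.K, self.G, *fmap.shape[2:])
        # Reconstruct keypoint k's heatmap from its own channel group only.
        heatmaps = torch.stack(
            [self.decoder(groups[:, k]).squeeze(1) for k in range(self.K)], dim=1
        )                                                        # (B, K, H, W)
        return embedding, heatmaps

def multitask_loss(heatmaps, target_heatmaps, metric_loss):
    # metric_loss: any retrieval loss on the embedding (e.g. triplet); the auxiliary
    # term forces channel group k to carry information about keypoint k.
    return metric_loss + F.mse_loss(heatmaps, target_heatmaps)
```

At retrieval time, presumably only the pooled embedding would be used; the heatmap decoder serves purely as a training-time signal that keeps channel group k aligned with keypoint k.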
Related papers
- Learning-based Relational Object Matching Across Views [63.63338392484501]
We propose a learning-based approach that combines local keypoints with novel object-level features for matching object detections between RGB images.
We train our object-level matching features on appearance as well as inter-frame and cross-frame spatial relations between objects in an associative graph neural network.
arXiv Detail & Related papers (2023-05-03T19:36:51Z)
- LEAD: Self-Supervised Landmark Estimation by Aligning Distributions of Feature Similarity [49.84167231111667]
Existing works in self-supervised landmark detection are based on learning dense (pixel-level) feature representations from an image.
We introduce an approach to enhance the learning of dense equivariant representations in a self-supervised fashion.
We show that having such a prior in the feature extractor helps in landmark detection, even with a drastically limited number of annotations.
arXiv Detail & Related papers (2022-04-06T17:48:18Z)
- Contrastive learning of Class-agnostic Activation Map for Weakly Supervised Object Localization and Semantic Segmentation [32.76127086403596]
We propose Contrastive learning for Class-agnostic Activation Map (C2AM) generation using unlabeled image data.
We form positive and negative pairs from cross-image foreground-background relations and force the network to disentangle foreground and background.
Because the network is guided to discriminate foreground from background across images, the class-agnostic activation maps learned by our approach cover more complete object regions.
arXiv Detail & Related papers (2022-03-25T08:46:24Z)
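As a rough illustration of the cross-image foreground-background contrast summarized above, one could pool foreground and background features with the class-agnostic activation map and contrast them across images. This is a hedged sketch of one plausible pairing scheme, not the paper's actual loss; the pooling, pair construction and weighting are assumptions.

```python
# Hypothetical cross-image foreground/background contrast (assumed details).
import torch.nn.functional as F

def fg_bg_contrast(feat, act, eps=1e-6):
    """feat: (B, C, H, W) features; act: (B, 1, H, W) class-agnostic activation in [0, 1]."""
    # Activation-weighted pooling of foreground and background features per image.
    fg = (feat * act).sum(dim=(2, 3)) / (act.sum(dim=(2, 3)) + eps)              # (B, C)
    bg = (feat * (1 - act)).sum(dim=(2, 3)) / ((1 - act).sum(dim=(2, 3)) + eps)  # (B, C)
    fg, bg = F.normalize(fg, dim=1), F.normalize(bg, dim=1)
    # Positives: foreground vs. foreground and background vs. background across images
    # (self-pairs are included here for brevity).
    pos = (fg @ fg.t()).mean() + (bg @ bg.t()).mean()
    # Negatives: any foreground against any background.
    neg = (fg @ bg.t()).mean()
    return neg - pos  # minimise: pull positives together, push foreground away from background
```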
- Attend and Guide (AG-Net): A Keypoints-driven Attention-based Deep Network for Image Recognition [13.230646408771868]
We propose an end-to-end CNN model that learns meaningful features linking fine-grained changes through a novel attention mechanism.
It captures spatial structure by identifying semantic regions (SRs) and their spatial distributions, which proves key to modelling subtle changes in images.
The framework is evaluated on six diverse benchmark datasets.
arXiv Detail & Related papers (2021-10-23T09:43:36Z)
- Weakly Supervised Keypoint Discovery [27.750244813890262]
We propose a method for keypoint discovery from a 2D image using image-level supervision.
Motivated by the weakly-supervised learning approach, our method exploits image-level supervision to identify discriminative parts.
Our approach achieves state-of-the-art performance for keypoint estimation in limited-supervision scenarios.
arXiv Detail & Related papers (2021-09-28T01:26:53Z)
- Rectifying the Shortcut Learning of Background: Shared Object Concentration for Few-Shot Image Recognition [101.59989523028264]
Few-shot image classification aims to utilize knowledge pretrained on a large-scale dataset to tackle a series of downstream classification tasks.
We propose COSOC, a novel few-shot learning framework, to automatically figure out foreground objects at both the pretraining and evaluation stages.
arXiv Detail & Related papers (2021-07-16T07:46:41Z)
- End-to-End Learning of Keypoint Representations for Continuous Control from Images [84.8536730437934]
We show that it is possible to learn efficient keypoint representations end-to-end, without the need for unsupervised pre-training, decoders, or additional losses.
Our proposed architecture consists of a differentiable keypoint extractor that feeds the coordinates directly to a soft actor-critic agent.
arXiv Detail & Related papers (2021-06-15T09:17:06Z)
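A differentiable keypoint extractor of the kind summarized above is commonly implemented as a spatial soft-argmax over per-keypoint score maps. The sketch below shows that generic construction (an assumption about the extractor, not the paper's code); the resulting coordinates could then be fed to a downstream policy such as a soft actor-critic agent.

```python
# Generic spatial soft-argmax: a common way to make keypoint coordinates differentiable.
import torch
import torch.nn.functional as F

def soft_argmax_keypoints(fmaps, temperature=1.0):
    """fmaps: (B, K, H, W) per-keypoint score maps -> (B, K, 2) coordinates in [-1, 1]."""
    b, k, h, w = fmaps.shape
    probs = F.softmax(fmaps.view(b, k, -1) / temperature, dim=-1).view(b, k, h, w)
    ys = torch.linspace(-1.0, 1.0, h, device=fmaps.device)
    xs = torch.linspace(-1.0, 1.0, w, device=fmaps.device)
    # Expected coordinate under the softmax distribution = differentiable argmax.
    y = (probs.sum(dim=3) * ys).sum(dim=2)   # (B, K)
    x = (probs.sum(dim=2) * xs).sum(dim=2)   # (B, K)
    return torch.stack([x, y], dim=-1)       # coordinates to pass to the agent
```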
- Semi-supervised Keypoint Localization [12.37129078618206]
We propose to simultaneously learn keypoint heatmaps and pose-invariant keypoint representations in a semi-supervised manner.
Our approach significantly outperforms previous methods on several benchmarks for human and animal body landmark localization.
arXiv Detail & Related papers (2021-01-20T06:23:08Z)
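One plausible shape of the semi-supervised objective summarized above is a supervised heatmap loss on the labelled images plus a consistency term on the unlabelled ones. The particular consistency term below (equivariance to a known spatial warp) and all function names are assumptions for illustration, not the paper's method.

```python
# Hypothetical semi-supervised objective: supervised heatmaps on labelled images plus
# a transformation-consistency term on unlabelled images (details are assumptions).
import torch
import torch.nn.functional as F

def semi_supervised_loss(model, labelled, heatmap_targets, unlabelled,
                         warp, unwarp, weight=1.0):
    # Supervised term: regress ground-truth heatmaps on the labelled subset.
    sup = F.mse_loss(model(labelled), heatmap_targets)
    # Unsupervised term: predictions should be equivariant to a known spatial warp,
    # i.e. predicting on a warped image and undoing the warp should match the
    # prediction on the original image (target is treated as a fixed reference).
    with torch.no_grad():
        target = model(unlabelled)
    cons = F.mse_loss(unwarp(model(warp(unlabelled))), target)
    return sup + weight * cons
```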
- Learning to Focus: Cascaded Feature Matching Network for Few-shot Image Recognition [38.49419948988415]
Deep networks can learn to accurately recognize objects of a category by training on a large number of images.
A meta-learning challenge, known as low-shot image recognition, arises when only a few annotated images are available for learning a recognition model for a category.
Our method, called Cascaded Feature Matching Network (CFMN), is proposed to solve this problem.
Experiments on few-shot learning with two standard datasets, miniImageNet and Omniglot, confirm the effectiveness of our method.
arXiv Detail & Related papers (2021-01-13T11:37:28Z)
- Tasks Integrated Networks: Joint Detection and Retrieval for Image Search [99.49021025124405]
In many real-world searching scenarios (e.g., video surveillance), the objects are seldom accurately detected or annotated.
We first introduce an end-to-end Integrated Net (I-Net), which has three merits.
We further propose an improved I-Net, called DC-I-Net, which makes two new contributions.
arXiv Detail & Related papers (2020-09-03T03:57:50Z)
- A Few-Shot Sequential Approach for Object Counting [63.82757025821265]
We introduce a class attention mechanism that sequentially attends to objects in the image and extracts their relevant features.
The proposed technique is trained on point-level annotations and uses a novel loss function that disentangles class-dependent and class-agnostic aspects of the model.
We present our results on a variety of object-counting/detection datasets, including FSOD and MS COCO.
arXiv Detail & Related papers (2020-07-03T18:23:39Z)