Cross-Correlated Attention Networks for Person Re-Identification
- URL: http://arxiv.org/abs/2006.09597v1
- Date: Wed, 17 Jun 2020 01:47:23 GMT
- Title: Cross-Correlated Attention Networks for Person Re-Identification
- Authors: Jieming Zhou, Soumava Kumar Roy, Pengfei Fang, Mehrtash Harandi, Lars
Petersson
- Abstract summary: We propose a new attention module called Cross-Correlated Attention (CCA).
CCA aims to overcome such limitations by maximizing the information gain between different attended regions.
We also propose a novel deep network that makes use of different attention mechanisms to learn robust and discriminative representations of person images.
- Score: 34.84287025161801
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks need to make robust inference in the presence of
occlusion, background clutter, pose and viewpoint variations -- to name a few
-- when the task of person re-identification is considered. Attention
mechanisms have recently proven to be successful in handling the aforementioned
challenges to some degree. However, previous designs fail to capture the
inherent inter-dependencies between the attended features, leading to
restricted interactions between the attention blocks. In this paper, we
propose a new attention module called Cross-Correlated Attention (CCA), which
aims to overcome such limitations by maximizing the information gain between
different attended regions. Moreover, we propose a novel deep network that makes use
of different attention mechanisms to learn robust and discriminative
representations of person images. The resulting model is called the
Cross-Correlated Attention Network (CCAN). Extensive experiments demonstrate
that the CCAN comfortably outperforms current state-of-the-art algorithms by a
tangible margin.
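The abstract states the objective — maximizing the information gain between different attended regions — without giving a formulation. The sketch below is a rough, hypothetical illustration of that idea, not the authors' actual CCA module: two attention branches score the same spatial features, and an overlap term between their attention maps serves as a simple stand-in penalty (minimizing it pushes the branches toward complementary, more informative regions). The function names and the use of a plain dot-product scoring vector per branch are assumptions made for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def two_branch_attention(feats, w1, w2):
    """Two spatial attention branches over N positions of D-dim features.

    feats : (N, D) feature grid flattened over spatial positions.
    w1, w2: (D,) scoring vectors, one per branch (hypothetical design).

    Returns the two attended descriptors and an overlap score: high when
    both branches attend to the same positions, so minimizing it during
    training would encourage complementary attended regions.
    """
    a1 = softmax(feats @ w1)          # (N,) attention map, branch 1
    a2 = softmax(feats @ w2)          # (N,) attention map, branch 2
    v1 = a1 @ feats                   # attended descriptor, branch 1
    v2 = a2 @ feats                   # attended descriptor, branch 2
    overlap = float(np.sum(a1 * a2))  # in [0, 1]; 0 means disjoint maps
    return v1, v2, overlap

# Toy usage on random features.
rng = np.random.default_rng(0)
feats = rng.standard_normal((16, 8))
v1, v2, overlap = two_branch_attention(
    feats, rng.standard_normal(8), rng.standard_normal(8))
```

In a real network the overlap (or a mutual-information surrogate) would be added to the training loss; here it is only computed to show the quantity being regularized.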
Related papers
- Attention Overlap Is Responsible for The Entity Missing Problem in Text-to-image Diffusion Models! [3.355491272942994]
This study examines three potential causes of the entity-missing problem, focusing on cross-attention dynamics.
We found that reducing overlap in attention maps between entities can effectively minimize the rate of entity missing.
arXiv Detail & Related papers (2024-10-28T12:43:48Z)
- Learning to ignore: rethinking attention in CNNs [87.01305532842878]
We propose to reformulate the attention mechanism in CNNs to learn to ignore instead of learning to attend.
Specifically, we propose to explicitly learn irrelevant information in the scene and suppress it in the produced representation.
arXiv Detail & Related papers (2021-11-10T13:47:37Z)
- Alignment Attention by Matching Key and Query Distributions [48.93793773929006]
This paper introduces alignment attention that explicitly encourages self-attention to match the distributions of the key and query within each head.
It is simple to convert any models with self-attention, including pre-trained ones, to the proposed alignment attention.
On a variety of language understanding tasks, we show the effectiveness of our method in accuracy, uncertainty estimation, generalization across domains, and robustness to adversarial attacks.
arXiv Detail & Related papers (2021-10-25T00:54:57Z)
- Bayesian Attention Belief Networks [59.183311769616466]
Attention-based neural networks have achieved state-of-the-art results on a wide range of tasks.
This paper introduces Bayesian attention belief networks, which construct a decoder network by modeling unnormalized attention weights.
We show that our method outperforms deterministic attention and state-of-the-art attention in accuracy, uncertainty estimation, generalization across domains, and robustness to adversarial attacks.
arXiv Detail & Related papers (2021-06-09T17:46:22Z)
- More Than Just Attention: Learning Cross-Modal Attentions with Contrastive Constraints [63.08768589044052]
We propose Contrastive Content Re-sourcing (CCR) and Contrastive Content Swapping (CCS) constraints to address this limitation.
CCR and CCS constraints supervise the training of attention models in a contrastive learning manner without requiring explicit attention annotations.
Experiments on both Flickr30k and MS-COCO datasets demonstrate that integrating these attention constraints into two state-of-the-art attention-based models improves the model performance.
arXiv Detail & Related papers (2021-05-20T08:48:10Z)
- Learning Interpretable Models for Coupled Networks Under Domain Constraints [8.308385006727702]
We investigate the idea of coupled networks by focusing on interactions between structural edges and functional edges of brain networks.
We propose a novel formulation to place hard network constraints on the noise term while estimating interactions.
We validate our method on multishell diffusion and task-evoked fMRI datasets from the Human Connectome Project.
arXiv Detail & Related papers (2021-04-19T06:23:31Z)
- Robust Facial Landmark Detection by Cross-order Cross-semantic Deep Network [58.843211405385205]
We propose a cross-order cross-semantic deep network (CCDN) to boost the semantic features learning for robust facial landmark detection.
Specifically, a cross-order two-squeeze multi-excitation (CTM) module is proposed to introduce the cross-order channel correlations for more discriminative representations learning.
A novel cross-order cross-semantic (COCS) regularizer is designed to drive the network to learn cross-order cross-semantic features from different activations for facial landmark detection.
arXiv Detail & Related papers (2020-11-16T08:19:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.