High-Order Information Matters: Learning Relation and Topology for
Occluded Person Re-Identification
- URL: http://arxiv.org/abs/2003.08177v4
- Date: Thu, 2 Apr 2020 03:40:39 GMT
- Title: High-Order Information Matters: Learning Relation and Topology for
Occluded Person Re-Identification
- Authors: Guan'an Wang, Shuo Yang, Huanyu Liu, Zhicheng Wang, Yang Yang,
Shuliang Wang, Gang Yu, Erjin Zhou and Jian Sun
- Abstract summary: We propose a novel framework that learns high-order relation and topology information for discriminative features and robust alignment.
Our framework significantly outperforms the state-of-the-art by 6.5% mAP on the Occluded-Duke dataset.
- Score: 84.43394420267794
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Occluded person re-identification (ReID) aims to match occluded person images
to holistic ones across disjoint cameras. In this paper, we propose a novel
framework that learns high-order relation and topology information for
discriminative features and robust alignment. First, we use a CNN backbone
and a key-point estimation model to extract semantic local features. Even so,
the local features of occluded images still suffer from occlusion and outliers. We therefore view the
local features of an image as nodes of a graph and propose an adaptive
direction graph convolutional (ADGC) layer to pass relation information between
nodes. The proposed ADGC layer automatically suppresses the message passing
of meaningless features by dynamically learning the direction and degree of
linkage. To align two groups of local features from two images, we treat alignment
as a graph matching problem and propose a cross-graph embedded-alignment (CGEA)
layer that jointly learns and embeds topology information into the local features and
directly predicts a similarity score. The proposed CGEA layer not only makes full
use of the alignment learned by graph matching but also replaces sensitive
one-to-one matching with robust soft matching. Finally, extensive experiments on
occluded, partial, and holistic ReID tasks show the effectiveness of our
proposed method. In particular, our framework significantly outperforms the
state-of-the-art by 6.5% mAP on the Occluded-Duke dataset.
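The two ideas in the abstract can be caricatured in a few lines of NumPy. This is a minimal sketch of the concepts only: the dot-product affinities, the `conf` confidence vector, and both function names are assumptions for illustration, not the authors' implementation (which uses learned graph convolutions and a trained graph-matching layer).

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def adgc_layer(X, conf, W):
    """Sketch of an adaptive-direction graph convolution (ADGC-like).

    X    : (k, d) local features, one node per body key-point
    conf : (k,)   confidence per node (low/zero for occluded parts)
    W    : (d, d) learnable projection

    Directed edge weights are scaled by the *source* node's confidence,
    so messages sent from meaningless (occluded) features are suppressed.
    """
    scores = X @ X.T                                  # (k, k) directed affinities
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    e = e * conf[None, :]                             # suppress occluded sources
    A = e / (e.sum(axis=-1, keepdims=True) + 1e-12)   # row-normalised adjacency
    return A @ X @ W                                  # aggregate, then project

def soft_alignment_score(Xa, Xb, conf_a):
    """Soft cross-graph alignment (CGEA-flavoured): each node of graph A is
    softly assigned to all nodes of graph B, replacing brittle one-to-one
    matching; per-node affinities are weighted by visibility confidence."""
    aff = Xa @ Xb.T                      # (ka, kb) cross-graph affinities
    P = softmax(aff, axis=-1)            # soft assignment of A-nodes to B-nodes
    per_node = (P * aff).sum(axis=-1)    # expected affinity per A-node
    w = conf_a / (conf_a.sum() + 1e-12)  # weight visible parts more
    return float((w * per_node).sum())
```

Because the occluded node's outgoing edge weights are zeroed before row normalisation, its feature vector contributes nothing to the other nodes' updated representations, which mirrors the suppression behaviour the abstract describes.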
Related papers
- Redundancy-Free Self-Supervised Relational Learning for Graph Clustering [13.176413653235311]
We propose a novel self-supervised deep graph clustering method named Redundancy-Free Graph Clustering (R$^2$FGC).
It extracts the attribute- and structure-level relational information from both global and local views based on an autoencoder and a graph autoencoder.
Our experiments are performed on widely used benchmark datasets to validate the superiority of our R$^2$FGC over state-of-the-art baselines.
arXiv Detail & Related papers (2023-09-09T06:18:50Z) - Occ$^2$Net: Robust Image Matching Based on 3D Occupancy Estimation for
Occluded Regions [14.217367037250296]
Occ$^2$Net is an image matching method that models occlusion relations using 3D occupancy and infers matching points in occluded regions.
We evaluate our method on both real-world and simulated datasets and demonstrate its superior performance over state-of-the-art methods on several metrics.
arXiv Detail & Related papers (2023-08-14T13:09:41Z) - Learning Hierarchical Graph Representation for Image Manipulation
Detection [50.04902159383709]
The objective of image manipulation detection is to identify and locate the manipulated regions in the images.
Recent approaches mostly adopt sophisticated Convolutional Neural Networks (CNNs) to capture the tampering artifacts left in the images.
We propose a hierarchical Graph Convolutional Network (HGCN-Net), which consists of two parallel branches.
arXiv Detail & Related papers (2022-01-15T01:54:25Z) - DenseGAP: Graph-Structured Dense Correspondence Learning with Anchor
Points [15.953570826460869]
Establishing dense correspondence between two images is a fundamental computer vision problem.
We introduce DenseGAP, a new solution for efficient Dense correspondence learning with a Graph-structured neural network conditioned on Anchor Points.
Our method advances the state-of-the-art of correspondence learning on most benchmarks.
arXiv Detail & Related papers (2021-12-13T18:59:30Z) - Spatial-spectral Hyperspectral Image Classification via Multiple Random
Anchor Graphs Ensemble Learning [88.60285937702304]
This paper proposes a novel spatial-spectral HSI classification method via multiple random anchor graphs ensemble learning (RAGE).
Firstly, the local binary pattern is adopted to extract the more descriptive features on each selected band, which preserves local structures and subtle changes of a region.
Secondly, adaptive neighbor assignment is introduced in the construction of the anchor graph to reduce the computational complexity.
arXiv Detail & Related papers (2021-03-25T09:31:41Z) - Group-Wise Semantic Mining for Weakly Supervised Semantic Segmentation [49.90178055521207]
This work addresses weakly supervised semantic segmentation (WSSS), with the goal of bridging the gap between image-level annotations and pixel-level segmentation.
We formulate WSSS as a novel group-wise learning task that explicitly models semantic dependencies in a group of images to estimate more reliable pseudo ground-truths.
In particular, we devise a graph neural network (GNN) for group-wise semantic mining, wherein input images are represented as graph nodes.
arXiv Detail & Related papers (2020-12-09T12:40:13Z) - Multi-Level Graph Convolutional Network with Automatic Graph Learning
for Hyperspectral Image Classification [63.56018768401328]
We propose a Multi-level Graph Convolutional Network (GCN) with Automatic Graph Learning method (MGCN-AGL) for HSI classification.
By employing an attention mechanism to characterize the importance of spatially neighboring regions, the most relevant information can be adaptively incorporated to make decisions.
Our MGCN-AGL encodes the long range dependencies among image regions based on the expressive representations that have been produced at local level.
arXiv Detail & Related papers (2020-09-19T09:26:20Z) - Gradient-Induced Co-Saliency Detection [81.54194063218216]
Co-saliency detection (Co-SOD) aims to segment the common salient foreground in a group of relevant images.
In this paper, inspired by human behavior, we propose a gradient-induced co-saliency detection method.
arXiv Detail & Related papers (2020-04-28T08:40:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.