ConMAE: Contour Guided MAE for Unsupervised Vehicle Re-Identification
- URL: http://arxiv.org/abs/2302.05673v1
- Date: Sat, 11 Feb 2023 12:10:25 GMT
- Title: ConMAE: Contour Guided MAE for Unsupervised Vehicle Re-Identification
- Authors: Jing Yang, Jianwu Fang, and Hongke Xu
- Abstract summary: Considering that the Masked Autoencoder (MAE) has shown excellent performance in self-supervised learning, this work designs a Contour Guided Masked Autoencoder for Unsupervised Vehicle Re-Identification (ConMAE).
- Score: 8.950873153831735
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vehicle re-identification is a cross-view search task that matches the same
target vehicle across different perspectives. It plays an important role in
road-vehicle collaboration and intelligent road control. In large-scale,
dynamic road environments, the supervised vehicle re-identification paradigm
shows limited scalability because of its heavy reliance on large-scale
annotated datasets. Unsupervised vehicle re-identification, with its stronger
cross-scene generalization ability, has therefore attracted increasing
attention. Considering that the Masked Autoencoder (MAE) has shown excellent
performance in self-supervised learning, this work designs a Contour Guided
Masked Autoencoder for Unsupervised Vehicle Re-Identification (ConMAE),
inspired by the idea of extracting informative contour cues to highlight the
key regions for cross-view correlation. ConMAE is implemented by preserving the
image blocks that contain contour pixels and randomly masking the blocks with
smooth textures. In addition, to improve the quality of the pseudo labels used
for unsupervised re-identification, we design a label softening strategy that
adaptively updates the labels as training proceeds. Experiments on the VeRi-776
and VehicleID datasets show a significant performance improvement over
state-of-the-art unsupervised vehicle re-identification methods. The code is
available at https://github.com/2020132075/ConMAE.
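The two ingredients described in the abstract, contour-guided masking and softened pseudo labels, can be pictured with a short sketch. The code below is a minimal, hypothetical interpretation of the abstract rather than the authors' implementation from the linked repository; the function names (`contour_guided_mask`, `soften_labels`), the use of a Canny edge detector, the 75% mask ratio, and the linear softening schedule are all assumptions.

```python
import numpy as np
import cv2  # edge extraction; any contour/edge detector would serve the same purpose


def contour_guided_mask(image, patch_size=16, mask_ratio=0.75, seed=None):
    """Decide which MAE patches to mask for one image.

    Patches containing contour (edge) pixels are preserved; the masked
    patches are drawn at random from the smooth-texture patches. If there
    are not enough smooth patches, the weakest contour patches are masked
    as well. Returns a boolean array (True = masked) over the patch grid.
    """
    rng = np.random.default_rng(seed)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) if image.ndim == 3 else image
    edges = cv2.Canny(gray, 100, 200)  # binary contour map (0 or 255)

    gh, gw = gray.shape[0] // patch_size, gray.shape[1] // patch_size
    num_patches = gh * gw
    num_masked = int(round(mask_ratio * num_patches))

    # Count edge pixels inside each patch, row-major over the patch grid.
    density = (
        edges[: gh * patch_size, : gw * patch_size]
        .reshape(gh, patch_size, gw, patch_size)
        .sum(axis=(1, 3))
        .reshape(-1)
    )

    smooth = np.flatnonzero(density == 0)   # no contour pixels: candidates for masking
    contour = np.flatnonzero(density > 0)   # contour pixels: keep if possible
    rng.shuffle(smooth)
    if num_masked <= smooth.size:
        masked = smooth[:num_masked]
    else:
        # Fall back to masking the contour patches with the fewest edge pixels.
        weakest = contour[np.argsort(density[contour])][: num_masked - smooth.size]
        masked = np.concatenate([smooth, weakest])

    mask = np.zeros(num_patches, dtype=bool)
    mask[masked] = True
    return mask


def soften_labels(prev_soft, hard_onehot, step, total_steps, floor=0.5):
    """Blend the current one-hot pseudo label with the previous soft label.

    The weight on the fresh pseudo label grows linearly with the training
    step, so labels start soft and sharpen as clustering stabilises. The
    exact schedule is not given in the abstract; this linear ramp is only
    an illustrative choice.
    """
    alpha = floor + (1.0 - floor) * (step / max(total_steps, 1))
    return alpha * hard_onehot + (1.0 - alpha) * prev_soft
```

In a standard MAE pipeline the boolean mask would be flattened in the same patch order the encoder uses, so that masked tokens are dropped before encoding and reconstructed by the decoder; the contour-guided variant simply biases which tokens survive.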
Related papers
- Discriminative-Region Attention and Orthogonal-View Generation Model for Vehicle Re-Identification [7.5366501970852955]
Multiple challenges hamper the applications of vision-based vehicle Re-ID methods.
The proposed DRA model can automatically extract the discriminative region features, which can distinguish similar vehicles.
The OVG model can generate multi-view features based on the input view features to reduce the impact of viewpoint mismatches.
arXiv Detail & Related papers (2022-04-28T07:46:03Z)
- Pluggable Weakly-Supervised Cross-View Learning for Accurate Vehicle Re-Identification [53.6218051770131]
Cross-view consistent feature representation is key for accurate vehicle ReID.
Existing approaches resort to supervised cross-view learning using extensive extra viewpoint annotations.
We present a pluggable Weakly-supervised Cross-View Learning (WCVL) module for vehicle ReID.
arXiv Detail & Related papers (2021-03-09T11:51:09Z)
- Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z)
- Detecting 32 Pedestrian Attributes for Autonomous Vehicles [103.87351701138554]
In this paper, we address the problem of jointly detecting pedestrians and recognizing 32 pedestrian attributes.
We introduce a Multi-Task Learning (MTL) model relying on a composite field framework, which achieves both goals in an efficient way.
We show competitive detection and attribute recognition results, as well as more stable MTL training.
arXiv Detail & Related papers (2020-12-04T15:10:12Z)
- Viewpoint-aware Progressive Clustering for Unsupervised Vehicle Re-identification [36.60241974421236]
We propose a novel viewpoint-aware clustering algorithm for unsupervised vehicle Re-ID.
In particular, we first divide the entire feature space into different subspaces according to the predicted viewpoints and then perform progressive clustering to mine the accurate relationships among samples.
arXiv Detail & Related papers (2020-11-18T05:40:14Z)
- Discovering Discriminative Geometric Features with Self-Supervised Attention for Vehicle Re-Identification and Beyond [23.233398760777494]
We are the first to successfully learn discriminative geometric features for vehicle ReID based on self-supervised attention.
We implement an end-to-end trainable deep network architecture consisting of three branches.
We conduct comprehensive experiments on three benchmark datasets for vehicle ReID, i.e., VeRi-776, CityFlow-ReID, and VehicleID, and demonstrate state-of-the-art performance.
arXiv Detail & Related papers (2020-10-19T04:43:56Z)
- Orientation-aware Vehicle Re-identification with Semantics-guided Part Attention Network [33.712450134663236]
We propose a dedicated Semantics-guided Part Attention Network (SPAN) to robustly predict part attention masks for different views of vehicles.
With the help of part attention masks, we can extract discriminative features from each part separately.
We then introduce a Co-occurrence Part-attentive Distance Metric (CPDM), which places greater emphasis on co-occurring vehicle parts.
arXiv Detail & Related papers (2020-08-26T07:33:09Z)
- VehicleNet: Learning Robust Visual Representation for Vehicle Re-identification [116.1587709521173]
We propose to build a large-scale vehicle dataset (called VehicleNet) by harnessing four public vehicle datasets.
We design a simple yet effective two-stage progressive approach to learning a more robust visual representation from VehicleNet.
We achieve state-of-the-art accuracy of 86.07% mAP on the private test set of the AICity Challenge.
arXiv Detail & Related papers (2020-04-14T05:06:38Z)
- The Devil is in the Details: Self-Supervised Attention for Vehicle Re-Identification [75.3310894042132]
Self-supervised Attention for Vehicle Re-identification (SAVER) is a novel approach to effectively learn vehicle-specific discriminative features.
We show that SAVER improves upon the state of the art on the challenging VeRi, VehicleID, Vehicle-1M and VERI-Wild datasets.
arXiv Detail & Related papers (2020-04-14T02:24:47Z)
- Parsing-based View-aware Embedding Network for Vehicle Re-Identification [138.11983486734576]
We propose a parsing-based view-aware embedding network (PVEN) to achieve view-aware feature alignment and enhancement for vehicle ReID.
Experiments conducted on three datasets show that our model outperforms state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-04-10T13:06:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.