Discriminative Feature and Dictionary Learning with Part-aware Model for
Vehicle Re-identification
- URL: http://arxiv.org/abs/2003.07139v1
- Date: Mon, 16 Mar 2020 12:15:31 GMT
- Title: Discriminative Feature and Dictionary Learning with Part-aware Model for
Vehicle Re-identification
- Authors: Huibing Wang, Jinjia Peng, Guangqi Jiang, Fengqiang Xu, Xianping Fu
- Abstract summary: Vehicle re-identification (re-ID) technology is a challenging task since vehicles of the same design or manufacturer show similar appearance.
We propose the Triplet Center Loss based Part-aware Model (TCPM), which leverages discriminative part-level details of vehicles to improve vehicle re-identification accuracy.
Our proposed TCPM significantly outperforms existing state-of-the-art methods on the benchmark datasets VehicleID and VeRi-776.
- Score: 13.556590446194283
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the development of smart cities, urban surveillance video
analysis will play an increasingly significant role in intelligent
transportation systems. Identifying the same target vehicle across large
datasets captured by non-overlapping cameras has grown into a hot topic in
promoting intelligent transportation systems. However, vehicle
re-identification (re-ID)
technology is a challenging task since vehicles of the same design or
manufacturer show similar appearance. To address this challenge, we propose the
Triplet Center Loss based Part-aware Model (TCPM), which leverages
discriminative part-level details of vehicles to improve re-identification
accuracy. TCPM builds on part discovery: it partitions the vehicle feature maps
along the horizontal and vertical directions to strengthen vehicle details and
reinforce the internal consistency of the parts.
In addition, to eliminate intra-class differences in local regions of the
vehicle, we propose external memory modules that emphasize the consistency of
each part to learn discriminative features, forming a global dictionary over
all categories in the dataset. In TCPM, a triplet-center loss is introduced to
ensure that each extracted part feature has intra-class consistency and
inter-class separability. Experimental results show that our proposed TCPM
significantly outperforms existing state-of-the-art methods on the benchmark
datasets VehicleID and VeRi-776.
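The two core ideas of the abstract, stripe-based part discovery and the triplet-center loss, can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: the stripe counts, the margin value, the use of Euclidean distance, and the function names are all assumptions, and the real model operates on CNN feature maps with learned class centers.

```python
import math

def stripe_parts(feature_map, n_h=2, n_v=2):
    # Partition a 2-D feature map into n_h horizontal and n_v vertical
    # stripes, mimicking TCPM's part discovery (stripe counts are assumed).
    h, w = len(feature_map), len(feature_map[0])
    horiz = [feature_map[i * h // n_h:(i + 1) * h // n_h] for i in range(n_h)]
    vert = [[row[j * w // n_v:(j + 1) * w // n_v] for row in feature_map]
            for j in range(n_v)]
    return horiz + vert

def euclidean(a, b):
    # Euclidean distance between two equal-length vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_center_loss(features, labels, centers, margin=1.0):
    # Triplet-center loss over a batch: pull each feature toward its own
    # class center (intra-class consistency) and push it at least `margin`
    # beyond the nearest other-class center (inter-class separability).
    total = 0.0
    for f, y in zip(features, labels):
        d_pos = euclidean(f, centers[y])                        # own center
        d_neg = min(euclidean(f, c)                             # nearest negative
                    for k, c in enumerate(centers) if k != y)
        total += max(0.0, d_pos - d_neg + margin)
    return total / len(features)
```

In practice, each stripe's feature would be fed through this loss separately, and the class centers would be learned jointly with the network parameters via gradient descent rather than fixed.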
Related papers
- VehicleGAN: Pair-flexible Pose Guided Image Synthesis for Vehicle Re-identification [27.075761782915496]
This paper proposes to synthesize a large number of vehicle images in the target pose.
Considering that paired data of the same vehicles in different traffic surveillance cameras might not be available in the real world, we propose VehicleGAN.
Because of the feature distribution difference between real and synthetic data, we propose a new Joint Metric Learning (JML) via effective feature-level fusion.
arXiv Detail & Related papers (2023-11-27T19:34:04Z)
- Multi-query Vehicle Re-identification: Viewpoint-conditioned Network, Unified Dataset and New Metric [30.344288906037345]
We propose a more realistic and easily accessible task, called multi-query vehicle Re-ID.
We design a novel viewpoint-conditioned network (VCNet), which adaptively combines the complementary information from different vehicle viewpoints.
Second, we create a unified benchmark dataset, taken by 6142 cameras from a real-life transportation surveillance system.
Third, we design a new evaluation metric, called mean cross-scene precision (mCSP), which measures the ability of cross-scene recognition.
arXiv Detail & Related papers (2023-05-25T06:22:03Z)
- Federated Deep Learning Meets Autonomous Vehicle Perception: Design and Verification [168.67190934250868]
A federated learning empowered connected autonomous vehicle (FLCAV) framework has been proposed.
FLCAV preserves privacy while reducing communication and annotation costs.
It is challenging to determine the network resources and road sensor poses for multi-stage training.
arXiv Detail & Related papers (2022-06-03T23:55:45Z)
- Discriminative-Region Attention and Orthogonal-View Generation Model for Vehicle Re-Identification [7.5366501970852955]
Multiple challenges hamper the applications of vision-based vehicle Re-ID methods.
The proposed DRA model can automatically extract the discriminative region features, which can distinguish similar vehicles.
And the OVG model can generate multi-view features based on the input view features to reduce the impact of viewpoint mismatches.
arXiv Detail & Related papers (2022-04-28T07:46:03Z)
- Pluggable Weakly-Supervised Cross-View Learning for Accurate Vehicle Re-Identification [53.6218051770131]
Cross-view consistent feature representation is key for accurate vehicle ReID.
Existing approaches resort to supervised cross-view learning using extensive extra viewpoints annotations.
We present a pluggable Weakly-supervised Cross-View Learning (WCVL) module for vehicle ReID.
arXiv Detail & Related papers (2021-03-09T11:51:09Z)
- Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z)
- Discriminative Feature Representation with Spatio-temporal Cues for Vehicle Re-identification [0.0]
Vehicle re-identification (re-ID) aims to discover and match the target vehicles from a gallery image set taken by different cameras on a wide range of road networks.
We propose a feature representation with novel clues (DFR-ST) for vehicle re-ID.
It is capable of building robust features in the embedding space by involving appearance and spatio-temporal information.
arXiv Detail & Related papers (2020-11-13T10:50:21Z)
- Discovering Discriminative Geometric Features with Self-Supervised Attention for Vehicle Re-Identification and Beyond [23.233398760777494]
This work is the first to successfully learn discriminative geometric features for vehicle ReID based on self-supervised attention.
We implement an end-to-end trainable deep network architecture consisting of three branches.
We conduct comprehensive experiments on three benchmark datasets for vehicle ReID, i.e., VeRi-776, CityFlow-ReID, and VehicleID, and demonstrate our state-of-the-art performance.
arXiv Detail & Related papers (2020-10-19T04:43:56Z)
- VehicleNet: Learning Robust Visual Representation for Vehicle Re-identification [116.1587709521173]
We propose to build a large-scale vehicle dataset (called VehicleNet) by harnessing four public vehicle datasets.
We design a simple yet effective two-stage progressive approach to learning more robust visual representation from VehicleNet.
We achieve the state-of-the-art accuracy of 86.07% mAP on the private test set of the AICity Challenge.
arXiv Detail & Related papers (2020-04-14T05:06:38Z)
- The Devil is in the Details: Self-Supervised Attention for Vehicle Re-Identification [75.3310894042132]
Self-supervised Attention for Vehicle Re-identification (SAVER) is a novel approach to effectively learn vehicle-specific discriminative features.
We show that SAVER improves upon the state-of-the-art on challenging VeRi, VehicleID, Vehicle-1M and VERI-Wild datasets.
arXiv Detail & Related papers (2020-04-14T02:24:47Z) - Parsing-based View-aware Embedding Network for Vehicle Re-Identification [138.11983486734576]
We propose a parsing-based view-aware embedding network (PVEN) to achieve the view-aware feature alignment and enhancement for vehicle ReID.
The experiments conducted on three datasets show that our model outperforms state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-04-10T13:06:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.