Toward Enhancing Vehicle Color Recognition in Adverse Conditions: A Dataset and Benchmark
- URL: http://arxiv.org/abs/2408.11589v2
- Date: Sun, 20 Oct 2024 15:09:14 GMT
- Title: Toward Enhancing Vehicle Color Recognition in Adverse Conditions: A Dataset and Benchmark
- Authors: Gabriel E. Lima, Rayson Laroca, Eduardo Santos, Eduil Nascimento Jr., David Menotti
- Abstract summary: Vehicle Color Recognition (VCR) has garnered significant research interest because color is a visually distinguishable attribute of vehicles.
Despite the success of existing methods for this task, the relatively low complexity of the datasets used in the literature has been largely overlooked.
This research addresses this gap by compiling a new dataset representing a more challenging VCR scenario.
- Score: 2.326743352134195
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vehicle information recognition is crucial in various practical domains, particularly in criminal investigations. Vehicle Color Recognition (VCR) has garnered significant research interest because color is a visually distinguishable attribute of vehicles and is less affected by partial occlusion and changes in viewpoint. Despite the success of existing methods for this task, the relatively low complexity of the datasets used in the literature has been largely overlooked. This research addresses this gap by compiling a new dataset representing a more challenging VCR scenario. The images - sourced from six license plate recognition datasets - are categorized into eleven colors, and their annotations were validated using official vehicle registration information. We evaluate the performance of four deep learning models on a widely adopted dataset and our proposed dataset to establish a benchmark. The results demonstrate that our dataset poses greater difficulty for the tested models and highlights scenarios that require further exploration in VCR. Remarkably, nighttime scenes account for a significant portion of the errors made by the best-performing model. This research provides a foundation for future studies on VCR, while also offering valuable insights for the field of fine-grained vehicle classification.
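To make the benchmark setup concrete, below is a minimal sketch (not the authors' code) of how an ImageNet-pretrained CNN could be fine-tuned and evaluated on an eleven-class vehicle color dataset. It assumes PyTorch and torchvision are available; the directory paths, ResNet-18 backbone, and hyperparameters are illustrative assumptions rather than details taken from the paper, which benchmarks four deep learning models not reproduced here.

```python
# Minimal sketch: fine-tuning a standard CNN for 11-class vehicle color recognition.
# Assumes images are organized in class-named folders (hypothetical paths below).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms
from torch.utils.data import DataLoader

NUM_CLASSES = 11  # eleven vehicle colors, matching the dataset's folder names

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical layout: data/vcr/train/<color>/*.jpg and data/vcr/val/<color>/*.jpg
train_set = datasets.ImageFolder("data/vcr/train", transform=transform)
val_set = datasets.ImageFolder("data/vcr/val", transform=transform)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True, num_workers=4)
val_loader = DataLoader(val_set, batch_size=64, shuffle=False, num_workers=4)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# One of many possible backbones; replace the final layer for 11 color classes.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(10):
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

    # Simple accuracy check on the validation split.
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in val_loader:
            images, labels = images.to(device), labels.to(device)
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
    print(f"epoch {epoch}: val accuracy {correct / total:.3f}")
```

Error analysis of the kind reported in the paper (e.g., isolating nighttime scenes) would additionally require per-image metadata, which is not modeled in this sketch.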
Related papers
- Label-Efficient 3D Object Detection For Road-Side Units [10.663986706501188]
Collaborative perception can enhance the perception of autonomous vehicles via deep information fusion with intelligent roadside units (RSU).
The data-hungry nature of these methods creates a major hurdle for their real-world deployment, particularly due to the need for annotated RSU data.
We devise a label-efficient object detection method for RSU based on unsupervised object discovery.
arXiv Detail & Related papers (2024-04-09T12:29:16Z)
- Pre-Training LiDAR-Based 3D Object Detectors Through Colorization [65.03659880456048]
We introduce an innovative pre-training approach, Grounded Point Colorization (GPC), to bridge the gap between data and labels.
GPC teaches the model to colorize LiDAR point clouds, equipping it with valuable semantic cues.
Experimental results on the KITTI and Waymo datasets demonstrate GPC's remarkable effectiveness.
arXiv Detail & Related papers (2023-10-23T06:00:24Z)
- LVLane: Deep Learning for Lane Detection and Classification in Challenging Conditions [2.5641096293146712]
We present an end-to-end lane detection and classification system based on deep learning methodologies.
In our study, we introduce a unique dataset meticulously curated to encompass scenarios that pose significant challenges for state-of-the-art (SOTA) lane localization models.
We propose a CNN-based classification branch, seamlessly integrated with the detector, facilitating the identification of distinct lane types.
arXiv Detail & Related papers (2023-07-13T16:09:53Z)
- On the Cross-dataset Generalization in License Plate Recognition [1.8514314381314887]
We propose a traditional-split versus leave-one-dataset-out experimental setup to empirically assess the cross-dataset generalization of 12 OCR models.
Results shed light on the limitations of the traditional-split protocol for evaluating approaches in the ALPR context.
arXiv Detail & Related papers (2022-01-02T00:56:09Z)
- Pluggable Weakly-Supervised Cross-View Learning for Accurate Vehicle Re-Identification [53.6218051770131]
Cross-view consistent feature representation is key for accurate vehicle ReID.
Existing approaches resort to supervised cross-view learning using extensive extra viewpoints annotations.
We present a pluggable Weakly-supervised Cross-View Learning (WCVL) module for vehicle ReID.
arXiv Detail & Related papers (2021-03-09T11:51:09Z)
- RGB-D Salient Object Detection: A Survey [195.83586883670358]
We provide a comprehensive survey of RGB-D based SOD models from various perspectives.
We also review SOD models and popular benchmark datasets from this domain.
We discuss several challenges and open directions of RGB-D based SOD for future research.
arXiv Detail & Related papers (2020-08-01T10:01:32Z)
- Anomalous Motion Detection on Highway Using Deep Learning [14.617786106427834]
This paper presents a new anomaly detection dataset - the Highway Traffic Anomaly (HTA) dataset.
We evaluate state-of-the-art deep learning anomaly detection models and propose novel variations to these methods.
arXiv Detail & Related papers (2020-06-15T05:40:11Z)
- VehicleNet: Learning Robust Visual Representation for Vehicle Re-identification [116.1587709521173]
We propose to build a large-scale vehicle dataset (called VehicleNet) by harnessing four public vehicle datasets.
We design a simple yet effective two-stage progressive approach to learning more robust visual representation from VehicleNet.
We achieve state-of-the-art accuracy of 86.07% mAP on the private test set of the AICity Challenge.
arXiv Detail & Related papers (2020-04-14T05:06:38Z)
- The Devil is in the Details: Self-Supervised Attention for Vehicle Re-Identification [75.3310894042132]
Self-supervised Attention for Vehicle Re-identification (SAVER) is a novel approach to effectively learn vehicle-specific discriminative features.
We show that SAVER improves upon the state-of-the-art on challenging VeRi, VehicleID, Vehicle-1M and VERI-Wild datasets.
arXiv Detail & Related papers (2020-04-14T02:24:47Z)
- Stance Detection Benchmark: How Robust Is Your Stance Detection? [65.91772010586605]
Stance Detection (StD) aims to detect an author's stance towards a certain topic or claim.
We introduce a StD benchmark that learns from ten StD datasets of various domains in a multi-dataset learning setting.
Within this benchmark setup, we are able to present new state-of-the-art results on five of the datasets.
arXiv Detail & Related papers (2020-01-06T13:37:51Z)