CarDD: A New Dataset for Vision-based Car Damage Detection
- URL: http://arxiv.org/abs/2211.00945v2
- Date: Mon, 28 Aug 2023 11:36:06 GMT
- Title: CarDD: A New Dataset for Vision-based Car Damage Detection
- Authors: Xinkuang Wang, Wenjing Li, Zhongcheng Wu
- Abstract summary: We contribute with Car Damage Detection (CarDD), the first public large-scale dataset designed for vision-based car damage detection and segmentation.
Our CarDD contains 4,000 high-resolution car damage images with over 9,000 well-annotated instances of six damage categories.
We detail the image collection, selection, and annotation processes, and present a statistical dataset analysis.
- Score: 13.284578516117804
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic car damage detection has attracted significant attention in the car
insurance business. However, due to the lack of high-quality and publicly
available datasets, it is difficult to learn a feasible model for car damage
detection. To this end, we contribute with Car Damage Detection (CarDD), the
first public large-scale dataset designed for vision-based car damage detection
and segmentation. Our CarDD contains 4,000 high-resolution car damage images
with over 9,000 well-annotated instances of six damage categories. We detail
the image collection, selection, and annotation processes, and present a
statistical dataset analysis. Furthermore, we conduct extensive experiments on
CarDD with state-of-the-art deep methods for different tasks and provide
comprehensive analyses to highlight the specialty of car damage detection.
CarDD dataset and the source code are available at
https://cardd-ustc.github.io.
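The abstract describes over 9,000 annotated instances across six damage categories. As a minimal sketch, assuming the annotations are distributed in a COCO-style JSON file (an assumption; the actual CarDD file layout may differ), the per-category instance counts from such an analysis could be reproduced like this:

```python
import json
from collections import Counter

def count_instances_per_category(annotation_file):
    """Tally annotated instances per category from a COCO-style
    annotation JSON (hypothetical layout, not confirmed for CarDD)."""
    with open(annotation_file) as f:
        coco = json.load(f)
    # Map numeric category ids to human-readable names.
    id_to_name = {c["id"]: c["name"] for c in coco["categories"]}
    # Count one entry per annotated instance.
    counts = Counter(
        id_to_name[ann["category_id"]] for ann in coco["annotations"]
    )
    return dict(counts)
```

This is only an illustration of the statistical analysis the paper mentions; the function name and file schema are assumptions, not part of the released CarDD tooling.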
Related papers
- RDD4D: 4D Attention-Guided Road Damage Detection And Classification [15.300130944077704]
We present a novel dataset for road damage detection that captures the diverse road damage types in individual images.
We also provide our model, RDD4D, that exploits Attention4D blocks, enabling better feature refinement across multiple scales.
arXiv Detail & Related papers (2025-01-06T07:48:04Z)
- A Large-Scale Car Parts (LSCP) Dataset for Lightweight Fine-Grained Detection [0.23020018305241333]
This paper presents a large-scale and fine-grained automotive dataset consisting of 84,162 images for detecting 12 different types of car parts.
To alleviate the burden of manual annotation, we propose a novel semi-supervised auto-labeling method.
We also study the limitations of the Grounding DINO approach for zero-shot labeling.
arXiv Detail & Related papers (2023-11-20T13:30:42Z)
- Cross-Domain Car Detection Model with Integrated Convolutional Block Attention Mechanism [3.3843451892622576]
A cross-domain car detection model with an integrated convolutional block attention mechanism is proposed.
Experimental results show that the performance of the model improves by 40% over the model without our framework.
arXiv Detail & Related papers (2023-05-31T17:28:13Z)
- DeepAccident: A Motion and Accident Prediction Benchmark for V2X Autonomous Driving [76.29141888408265]
We propose a large-scale dataset containing diverse accident scenarios that frequently occur in real-world driving.
The proposed DeepAccident dataset includes 57K annotated frames and 285K annotated samples, approximately 7 times more than the large-scale nuScenes dataset.
arXiv Detail & Related papers (2023-04-03T17:37:00Z)
- TAD: A Large-Scale Benchmark for Traffic Accidents Detection from Video Surveillance [2.1076255329439304]
Existing traffic accident datasets are either small-scale, not collected from surveillance cameras, not open-sourced, or not built for freeway scenes.
After integration and annotation along various dimensions, a large-scale traffic accident dataset named TAD is proposed in this work.
arXiv Detail & Related papers (2022-09-26T03:00:50Z)
- Blind-Spot Collision Detection System for Commercial Vehicles Using Multi Deep CNN Architecture [0.17499351967216337]
Two convolutional neural networks (CNNs) based on high-level feature descriptors are proposed to detect blind-spot collisions for heavy vehicles.
A fusion approach is proposed to integrate two pre-trained networks for extracting high level features for blind-spot vehicle detection.
The fusion of features significantly improves the performance of Faster R-CNN and outperforms existing state-of-the-art methods.
arXiv Detail & Related papers (2022-08-17T11:10:37Z)
- CODA: A Real-World Road Corner Case Dataset for Object Detection in Autonomous Driving [117.87070488537334]
We introduce a challenging dataset named CODA that exposes this critical problem of vision-based detectors.
The performance of standard object detectors trained on large-scale autonomous driving datasets significantly drops to no more than 12.8% in mAR.
We experiment with the state-of-the-art open-world object detector and find that it also fails to reliably identify the novel objects in CODA.
arXiv Detail & Related papers (2022-03-15T08:32:56Z)
- Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z)
- VehicleNet: Learning Robust Visual Representation for Vehicle Re-identification [116.1587709521173]
We propose to build a large-scale vehicle dataset (called VehicleNet) by harnessing four public vehicle datasets.
We design a simple yet effective two-stage progressive approach to learning more robust visual representation from VehicleNet.
We achieve state-of-the-art accuracy of 86.07% mAP on the private test set of the AICity Challenge.
arXiv Detail & Related papers (2020-04-14T05:06:38Z)
- The Devil is in the Details: Self-Supervised Attention for Vehicle Re-Identification [75.3310894042132]
Self-supervised Attention for Vehicle Re-identification (SAVER) is a novel approach to effectively learn vehicle-specific discriminative features.
We show that SAVER improves upon the state-of-the-art on challenging VeRi, VehicleID, Vehicle-1M and VERI-Wild datasets.
arXiv Detail & Related papers (2020-04-14T02:24:47Z)
- Physically Realizable Adversarial Examples for LiDAR Object Detection [72.0017682322147]
We present a method to generate universal 3D adversarial objects to fool LiDAR detectors.
In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle hides the vehicle entirely from LiDAR detectors with a success rate of 80%.
This is one step closer towards safer self-driving under unseen conditions from limited training data.
arXiv Detail & Related papers (2020-04-01T16:11:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.