CrashCar101: Procedural Generation for Damage Assessment
- URL: http://arxiv.org/abs/2311.06536v1
- Date: Sat, 11 Nov 2023 11:12:28 GMT
- Title: CrashCar101: Procedural Generation for Damage Assessment
- Authors: Jens Parslov, Erik Riise, Dim P. Papadopoulos
- Abstract summary: We propose a procedural generation pipeline that damages 3D car models.
We obtain synthetic 2D images of damaged cars paired with pixel-accurate annotations for part and damage categories.
For part segmentation, we show that the segmentation models trained on a combination of real data and our synthetic data outperform all models trained only on real data.
- Score: 6.172653479848284
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we are interested in addressing the problem of damage
assessment for vehicles, such as cars. This task requires not only detecting
the location and the extent of the damage but also identifying the damaged
part. To train a computer vision system for the semantic part and damage
segmentation in images, we need to manually annotate images with costly pixel
annotations for both part categories and damage types. To overcome this need,
we propose to use synthetic data to train these models. Synthetic data can
provide samples with high variability, pixel-accurate annotations, and
arbitrarily large training sets without any human intervention. We propose a
procedural generation pipeline that damages 3D car models and we obtain
synthetic 2D images of damaged cars paired with pixel-accurate annotations for
part and damage categories. To validate our idea, we execute our pipeline and
render our CrashCar101 dataset. We run experiments on three real datasets for
the tasks of part and damage segmentation. For part segmentation, we show that
the segmentation models trained on a combination of real data and our synthetic
data outperform all models trained only on real data. For damage segmentation,
we show the sim2real transfer ability of CrashCar101.
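A minimal sketch of the real-plus-synthetic training recipe described above: the datasets below are random-tensor stand-ins (not the released CrashCar101 loader) and NUM_PARTS is a hypothetical label count, so only the mixing pattern reflects the paper's idea.

```python
import torch
import torch.nn as nn
from torch.utils.data import ConcatDataset, DataLoader, Dataset

NUM_PARTS = 10  # assumption: number of part categories

class StubSegDataset(Dataset):
    """Random-tensor stand-in for a real or synthetic (image, part-mask) dataset."""
    def __init__(self, n):
        self.n = n
    def __len__(self):
        return self.n
    def __getitem__(self, i):
        img = torch.rand(3, 128, 128)                   # RGB image
        mask = torch.randint(0, NUM_PARTS, (128, 128))  # per-pixel part id
        return img, mask

real_ds, synth_ds = StubSegDataset(100), StubSegDataset(400)
loader = DataLoader(ConcatDataset([real_ds, synth_ds]), batch_size=8, shuffle=True)

# Any dense-prediction model fits here; a single 1x1 conv keeps the sketch short.
model = nn.Conv2d(3, NUM_PARTS, kernel_size=1)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

for img, mask in loader:
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(img), mask)  # (B,C,H,W) logits vs (B,H,W) labels
    loss.backward()
    opt.step()
```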
Related papers
- DeepDamageNet: A two-step deep-learning model for multi-disaster building damage segmentation and classification using satellite imagery [12.869300064524122]
We present a solution that performs the two most important tasks in building damage assessment, segmentation and classification, through deep-learning models.
Our best model couples a building identification semantic segmentation convolutional neural network (CNN) to a building damage classification CNN, with a combined F1 score of 0.66.
We find that, although our model identifies buildings with relatively high accuracy, classifying building damage across various disaster types remains a difficult task.
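A minimal sketch of this two-step coupling, with untrained stand-in networks and a hypothetical four-level damage scale; only the wiring from segmentation output to classifier input follows the summary.

```python
import torch
import torch.nn as nn

seg_net = nn.Conv2d(3, 1, kernel_size=1)  # building vs. background logits
cls_net = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                        nn.Linear(3, 4))  # 4 damage levels (assumption)

image = torch.rand(1, 3, 256, 256)
building_mask = torch.sigmoid(seg_net(image)) > 0.5  # step 1: find building pixels

# Step 2: classify damage on building pixels only. A full pipeline would crop
# each connected component; masking the whole image is the simplest stand-in.
masked = image * building_mask
damage_level = cls_net(masked).argmax(dim=1)
```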
arXiv Detail & Related papers (2024-05-08T04:21:03Z)
- Pedestrian Environment Model for Automated Driving [54.16257759472116]
We propose an environment model that includes the position of the pedestrians as well as their pose information.
We extract the skeletal information with a neural network human pose estimator from the image.
To obtain the 3D information of the position, we aggregate the data from consecutive frames in conjunction with the vehicle position.
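A minimal sketch of the frame-aggregation step, assuming a planar (x, y, yaw) vehicle pose; all poses and detections are made-up numbers, and only the ego-to-world transform is the point.

```python
import numpy as np

def ego_to_world(pt_ego, veh_xy, veh_yaw):
    """Rotate a point from the vehicle frame by the yaw angle, then translate."""
    c, s = np.cos(veh_yaw), np.sin(veh_yaw)
    R = np.array([[c, -s], [s, c]])
    return R @ pt_ego + veh_xy

# Consecutive frames: (vehicle x/y, yaw, pedestrian position in ego coords).
frames = [
    (np.array([0.0, 0.0]), 0.00, np.array([10.0, 2.0])),
    (np.array([1.0, 0.0]), 0.05, np.array([9.1, 1.9])),
    (np.array([2.0, 0.0]), 0.10, np.array([8.2, 1.8])),
]
world_pts = [ego_to_world(p, xy, yaw) for xy, yaw, p in frames]
position = np.mean(world_pts, axis=0)  # simple aggregate over consecutive frames
print(position)
```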
arXiv Detail & Related papers (2023-08-17T16:10:58Z)
- CarPatch: A Synthetic Benchmark for Radiance Field Evaluation on Vehicle Components [77.33782775860028]
We introduce CarPatch, a novel synthetic benchmark of vehicles.
In addition to a set of images annotated with their intrinsic and extrinsic camera parameters, the corresponding depth maps and semantic segmentation masks have been generated for each view.
Global and part-based metrics have been defined and used to evaluate, compare, and better characterize some state-of-the-art techniques.
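A minimal sketch of part-based versus global evaluation, using PSNR restricted by a hypothetical part mask; the benchmark's actual metric definitions may differ, and all arrays here are random stand-ins for a rendered view and its ground truth.

```python
import numpy as np

def psnr(pred, gt, mask=None):
    """PSNR over all pixels, or only over pixels where mask is True."""
    if mask is not None:
        pred, gt = pred[mask], gt[mask]
    mse = np.mean((pred - gt) ** 2)
    return 10 * np.log10(1.0 / mse)  # images assumed to lie in [0, 1]

gt = np.random.rand(128, 128, 3)
pred = np.clip(gt + 0.05 * np.random.randn(128, 128, 3), 0, 1)
wheel_mask = np.zeros((128, 128), dtype=bool)
wheel_mask[80:120, 20:60] = True  # hypothetical "wheel" segmentation mask

print("global PSNR:", psnr(pred, gt))
print("wheel PSNR :", psnr(pred, gt, wheel_mask))
```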
arXiv Detail & Related papers (2023-07-24T11:59:07Z)
- Joint one-sided synthetic unpaired image translation and segmentation for colorectal cancer prevention [16.356954231068077]
We produce realistic synthetic images using a combination of 3D technologies and generative adversarial networks.
We propose CUT-seg, a joint training scheme in which a segmentation model and a generative model are trained together to produce realistic images.
As part of this study, we release Synth-Colon, an entirely synthetic dataset that includes 20,000 realistic colon images.
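A minimal sketch of the joint-training idea, with tiny stand-in networks and the adversarial terms omitted; the point is that one backward pass updates the generator and the segmenter together.

```python
import torch
import torch.nn as nn

gen = nn.Conv2d(3, 3, 3, padding=1)   # synthetic -> realistic translation
seg = nn.Conv2d(3, 2, 1)              # binary segmentation head (assumption)
opt = torch.optim.Adam(list(gen.parameters()) + list(seg.parameters()), lr=1e-4)

synthetic = torch.rand(4, 3, 64, 64)
masks = torch.randint(0, 2, (4, 64, 64))  # labels come for free with synthetic data

translated = gen(synthetic)
loss = nn.functional.cross_entropy(seg(translated), masks)
loss.backward()   # one joint step flows gradients into generator and segmenter
opt.step()
```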
arXiv Detail & Related papers (2023-07-20T22:09:04Z)
- Synthetic Data for Object Classification in Industrial Applications [53.180678723280145]
In object classification, capturing a large number of images per object and in different conditions is not always possible.
This work explores the creation of artificial images using a game engine to cope with limited data in the training dataset.
arXiv Detail & Related papers (2022-12-09T11:43:04Z)
- Towards 3D Scene Understanding by Referring Synthetic Models [65.74211112607315]
Existing methods typically rely on labour-extensive annotations of real scene scans.
We explore how labelled synthetic models can alleviate this annotation burden by aligning the features of synthetic models and real scenes in a unified feature space.
Experiments show that our method achieves an average mAP of 46.08% and 55.49% on the ScanNet and S3DIS datasets, respectively.
arXiv Detail & Related papers (2022-03-20T13:06:15Z)
- A Shared Representation for Photorealistic Driving Simulators [83.5985178314263]
We propose to improve the quality of generated images by rethinking the discriminator architecture.
The focus is on the class of problems where images are generated given semantic inputs, such as scene segmentation maps or human body poses.
We aim to learn a shared latent representation that encodes enough information to jointly perform semantic segmentation and content reconstruction, along with coarse-to-fine grained adversarial reasoning.
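A minimal sketch of such a multi-head discriminator with a shared encoder; the layer sizes and the number of semantic classes are arbitrary assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SharedDiscriminator(nn.Module):
    def __init__(self, n_classes=19):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
        self.seg_head = nn.Conv2d(32, n_classes, 1)  # semantic segmentation
        self.rec_head = nn.Conv2d(32, 3, 1)          # content reconstruction
        self.adv_head = nn.Conv2d(32, 1, 1)          # patch-wise real/fake score

    def forward(self, x):
        z = self.encoder(x)                          # shared latent representation
        return self.seg_head(z), self.rec_head(z), self.adv_head(z)

d = SharedDiscriminator()
seg, rec, adv = d(torch.rand(2, 3, 64, 64))
```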
arXiv Detail & Related papers (2021-12-09T18:59:21Z)
- Recovering and Simulating Pedestrians in the Wild [81.38135735146015]
We propose to recover the shape and motion of pedestrians from sensor readings captured in the wild by a self-driving car driving around.
We incorporate the reconstructed pedestrian assets bank in a realistic 3D simulation system.
We show that the simulated LiDAR data can be used to significantly reduce the amount of real-world data required for visual perception tasks.
arXiv Detail & Related papers (2020-11-16T17:16:32Z)
- PennSyn2Real: Training Object Recognition Models without Human Labeling [12.923677573437699]
We propose PennSyn2Real, a synthetic dataset consisting of more than 100,000 4K images of more than 20 types of micro aerial vehicles (MAVs).
The dataset can be used to generate arbitrary numbers of training images for high-level computer vision tasks such as MAV detection and classification.
We show that synthetic data generated using this framework can be directly used to train CNN models for common object recognition tasks such as detection and segmentation.
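A minimal sketch of this kind of label-free data generation, assuming a generic cut-and-paste compositing recipe (not necessarily the authors' exact pipeline); all arrays are random stand-ins.

```python
import numpy as np

def composite(background, fg_rgba, top, left):
    """Alpha-blend an RGBA foreground onto the background; return image and box."""
    img = background.copy()
    h, w = fg_rgba.shape[:2]
    alpha = fg_rgba[..., 3:4]
    region = img[top:top + h, left:left + w]
    img[top:top + h, left:left + w] = alpha * fg_rgba[..., :3] + (1 - alpha) * region
    return img, (left, top, left + w, top + h)  # (x1, y1, x2, y2) detection label

bg = np.random.rand(256, 256, 3)
mav = np.random.rand(40, 60, 4)  # stand-in for a masked MAV crop
image, box = composite(bg, mav, top=100, left=80)
```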
arXiv Detail & Related papers (2020-09-22T02:53:40Z)