GP22: A Car Styling Dataset for Automotive Designers
- URL: http://arxiv.org/abs/2207.01760v1
- Date: Tue, 5 Jul 2022 01:39:34 GMT
- Title: GP22: A Car Styling Dataset for Automotive Designers
- Authors: Gyunpyo Lee, Taesu Kim, Hyeon-Jeong Suk
- Abstract summary: GP22 is composed of car styling features defined by automotive designers.
The dataset contains 1480 car side profile images from 37 brands and ten car segments.
- Score: 7.6702700993064115
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automated archiving of design data could reduce the time designers
waste on manual curation, freeing them to work creatively and effectively.
Though many datasets exist for classifying, detecting, and instance-segmenting
car exteriors, these large datasets are poorly suited to design practice, as
their primary purpose lies in autonomous driving or vehicle verification.
Therefore, we release GP22, a dataset composed of car
styling features defined by automotive designers. The dataset contains 1480 car
side profile images from 37 brands and ten car segments. It also contains
annotations of design features that follow a taxonomy of car exterior design
features defined from the automotive designer's perspective. We trained a
baseline design-feature detection model on the dataset using YOLOv5. The
model achieved an mAP score of 0.995 and a recall of
0.984. Furthermore, the model's performance on sketches and rendered images
of car side profiles suggests the scalability of the dataset for design
purposes.
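Since the baseline detector is YOLOv5, the dataset's design-feature annotations plausibly follow the YOLO text-label convention: one `class x_center y_center width height` line per box, with all coordinates normalized to [0, 1]. The export format and the "headlamp" class below are assumptions for illustration, not details confirmed by the abstract. A minimal sketch of converting such a label line to pixel coordinates:

```python
def yolo_to_pixels(line, img_w, img_h):
    """Convert one YOLO-format label line to (class_id, x1, y1, x2, y2) in pixels.

    YOLO labels store the box center and size, normalized by image dimensions,
    so the corners are recovered by scaling and offsetting by half the size.
    """
    parts = line.split()
    cls = int(parts[0])
    xc, yc, w, h = (float(v) for v in parts[1:5])
    x1 = (xc - w / 2) * img_w
    y1 = (yc - h / 2) * img_h
    x2 = (xc + w / 2) * img_w
    y2 = (yc + h / 2) * img_h
    return cls, x1, y1, x2, y2

# Hypothetical label for a "headlamp" design feature on a 1280x720 side profile.
print(yolo_to_pixels("3 0.25 0.50 0.10 0.20", 1280, 720))
# → (3, 256.0, 288.0, 384.0, 432.0)
```

The same conversion, run in reverse, is how pixel-space bounding boxes drawn by annotators are normalized before training.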
Related papers
- AIDE: An Automatic Data Engine for Object Detection in Autonomous Driving [68.73885845181242]
We propose an Automatic Data Engine (AIDE) that automatically identifies issues, efficiently curates data, improves the model through auto-labeling, and verifies the model through generation of diverse scenarios.
We further establish a benchmark for open-world detection on AV datasets to comprehensively evaluate various learning paradigms, demonstrating our method's superior performance at a reduced cost.
arXiv Detail & Related papers (2024-03-26T04:27:56Z)
- Structural Information Guided Multimodal Pre-training for Vehicle-centric Perception [36.92036421490819]
We propose a novel vehicle-centric pre-training framework called VehicleMAE.
We explicitly extract the sketch lines of vehicles as a form of the spatial structure to guide vehicle reconstruction.
A large-scale dataset, termed Autobot1M, is built to pre-train our model; it contains about 1M vehicle images and 12,693 pieces of text information.
arXiv Detail & Related papers (2023-12-15T14:10:21Z)
- A Large-Scale Car Parts (LSCP) Dataset for Lightweight Fine-Grained Detection [0.23020018305241333]
This paper presents a large-scale and fine-grained automotive dataset consisting of 84,162 images for detecting 12 different types of car parts.
To alleviate the burden of manual annotation, we propose a novel semi-supervised auto-labeling method.
We also study the limitations of the Grounding DINO approach for zero-shot labeling.
arXiv Detail & Related papers (2023-11-20T13:30:42Z)
- A Car Model Identification System for Streamlining the Automobile Sales Process [0.0]
This project presents an automated solution for the efficient identification of car models and makes from images.
We achieved a notable accuracy of 81.97% employing the EfficientNet (V2 b2) architecture.
The trained model offers the potential for automating information extraction, promising enhanced user experiences across car-selling websites.
arXiv Detail & Related papers (2023-10-19T23:36:17Z)
- The Impact of Different Backbone Architecture on Autonomous Vehicle Dataset [120.08736654413637]
The quality of the features extracted by the backbone architecture can have a significant impact on the overall detection performance.
Our study evaluates three well-known autonomous vehicle datasets, namely KITTI, NuScenes, and BDD, to compare the performance of different backbone architectures on object detection tasks.
arXiv Detail & Related papers (2023-09-15T17:32:15Z)
- Multi-modal Machine Learning for Vehicle Rating Predictions Using Image, Text, and Parametric Data [3.463438487417909]
We propose a multi-modal learning model for accurate vehicle rating predictions.
The model simultaneously learns features from the parametric specifications, text descriptions, and images of vehicles.
We find that the multi-modal model's explanatory power is 4% - 12% higher than that of the unimodal models.
arXiv Detail & Related papers (2023-05-24T14:58:49Z)
- Efficient Automatic Machine Learning via Design Graphs [72.85976749396745]
We propose FALCON, an efficient sample-based method to search for the optimal model design.
FALCON features 1) a task-agnostic module, which performs message passing on the design graph via a Graph Neural Network (GNN), and 2) a task-specific module, which conducts label propagation of the known model performance information.
We empirically show that FALCON can efficiently obtain the well-performing designs for each task using only 30 explored nodes.
arXiv Detail & Related papers (2022-10-21T21:25:59Z)
- CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Vision Models [61.68061613161187]
This paper presents CARLA-GeAR, a tool for the automatic generation of synthetic datasets for evaluating the robustness of neural models against physical adversarial patches.
The tool is built on the CARLA simulator, using its Python API, and allows the generation of datasets for several vision tasks in the context of autonomous driving.
The paper presents an experimental study to evaluate the performance of some defense methods against such attacks, showing how the datasets generated with CARLA-GeAR might be used in future work as a benchmark for adversarial defense in the real world.
arXiv Detail & Related papers (2022-06-09T09:17:38Z)
- SoDA: Multi-Object Tracking with Soft Data Association [75.39833486073597]
Multi-object tracking (MOT) is a prerequisite for a safe deployment of self-driving cars.
We propose a novel approach to MOT that uses attention to compute track embeddings that encode dependencies between observed objects.
arXiv Detail & Related papers (2020-08-18T03:40:25Z)
- VehicleNet: Learning Robust Visual Representation for Vehicle Re-identification [116.1587709521173]
We propose to build a large-scale vehicle dataset (called VehicleNet) by harnessing four public vehicle datasets.
We design a simple yet effective two-stage progressive approach to learning more robust visual representation from VehicleNet.
We achieve state-of-the-art accuracy of 86.07% mAP on the private test set of the AICity Challenge.
arXiv Detail & Related papers (2020-04-14T05:06:38Z)
- GISNet: Graph-Based Information Sharing Network For Vehicle Trajectory Prediction [6.12727713172576]
Many AI-oriented companies, such as Google, Uber and DiDi, are investigating more accurate vehicle trajectory prediction algorithms.
In this paper, we propose a novel graph-based information sharing network (GISNet) that allows the information sharing between the target vehicle and its surrounding vehicles.
arXiv Detail & Related papers (2020-03-22T03:24:31Z)
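The information-sharing idea behind GISNet can be illustrated with a single graph message-passing step: each vehicle's feature vector is updated by aggregating the features of its neighbors in a graph of nearby vehicles. The aggregation rule below (mean of neighbors, concatenated with the vehicle's own state) is an illustrative simplification, not the paper's actual architecture:

```python
import numpy as np

def share_information(features, adjacency):
    """One message-passing step: concatenate each vehicle's own features with
    the mean of its neighbors' features (zeros if it has no neighbors)."""
    n = features.shape[0]
    out = []
    for i in range(n):
        neighbors = np.nonzero(adjacency[i])[0]
        if neighbors.size:
            msg = features[neighbors].mean(axis=0)
        else:
            msg = np.zeros_like(features[i])
        out.append(np.concatenate([features[i], msg]))
    return np.stack(out)

# Three vehicles with 2-D states (e.g. positions); vehicle 0 neighbors 1 and 2.
feats = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])
adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]])
shared = share_information(feats, adj)
print(shared[0])  # vehicle 0's own state plus its neighbors' mean: [0. 0. 1. 1.]
```

A trajectory-prediction head would then consume these enriched per-vehicle embeddings, so each forecast is conditioned on the surrounding traffic rather than on the target vehicle alone.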
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.