Synthetic Data-based Detection of Zebras in Drone Imagery
- URL: http://arxiv.org/abs/2305.00432v2
- Date: Tue, 4 Jul 2023 10:43:22 GMT
- Title: Synthetic Data-based Detection of Zebras in Drone Imagery
- Authors: Elia Bonetto and Aamir Ahmad
- Abstract summary: We present an approach for training an animal detector using only synthetic data.
The dataset includes RGB, depth, skeletal joint locations, pose, shape and instance segmentations for each subject.
We show that we can detect zebras by using only synthetic data during training.
- Score: 0.8249180979158817
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nowadays, a wide range of datasets is available for training
common object detectors or human detectors. These come in the form of
labelled real-world images and require either a significant amount of human
effort, with a high probability of errors such as missing labels, or very
constrained setups, e.g. VICON systems. On the other hand, data for uncommon
scenarios, such as aerial views, for animals such as wild zebras, or for
difficult-to-obtain information, such as human shapes, are hardly available.
To overcome this,
synthetic data generation with realistic rendering technologies has recently
gained traction and advanced research areas such as target tracking and human
pose estimation. However, subjects such as wild animals are still usually not
well represented in such datasets. In this work, we first show that a
pre-trained YOLO detector cannot identify zebras in real images recorded from
aerial viewpoints. To solve this, we present an approach for training an animal
detector using only synthetic data. We start by generating a novel synthetic
zebra dataset using GRADE, a state-of-the-art framework for data generation.
The dataset includes RGB, depth, skeletal joint locations, pose, shape and
instance segmentations for each subject. We use this to train a YOLO detector
from scratch. Through extensive evaluations of our model with real-world data
from i) limited datasets available on the internet and ii) a new one collected
and manually labelled by us, we show that we can detect zebras by using only
synthetic data during training. The code, results, trained models, and both the
generated and training data are provided as open-source at
https://eliabntt.github.io/grade-rr.
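The abstract notes that the generated dataset ships with instance segmentations, while a YOLO detector is trained on normalized bounding-box labels. As a minimal sketch of that gap, the function below derives a YOLO-format label line from a binary instance mask; the mask layout and class id are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch: converting a binary instance segmentation mask
# into a YOLO-format label line "class cx cy w h" with coordinates
# normalized to [0, 1]. Not taken from the GRADE codebase.

def mask_to_yolo_label(mask, class_id=0):
    """mask: 2D list of 0/1 values for one instance. Returns a label line or None."""
    height, width = len(mask), len(mask[0])
    rows = [r for r in range(height) if any(mask[r])]
    cols = [c for c in range(width) if any(mask[r][c] for r in range(height))]
    if not rows or not cols:
        return None  # empty mask: no box to emit
    x_min, x_max = min(cols), max(cols)
    y_min, y_max = min(rows), max(rows)
    # Normalize box center and size to [0, 1], as YOLO label files expect.
    cx = (x_min + x_max + 1) / 2 / width
    cy = (y_min + y_max + 1) / 2 / height
    w = (x_max - x_min + 1) / width
    h = (y_max - y_min + 1) / height
    return f"{class_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

# Tiny 4x4 example: one 2x2 instance in the lower-right corner.
mask = [
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
print(mask_to_yolo_label(mask))  # -> "0 0.750000 0.750000 0.500000 0.500000"
```

One such line per instance, written to a `.txt` file alongside each RGB frame, is the standard input format for training a YOLO detector from scratch.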
Related papers
- ZebraPose: Zebra Detection and Pose Estimation using only Synthetic Data [0.2302001830524133]
We use synthetic data generated with a 3D simulator to obtain the first synthetic dataset that can be used for both detection and 2D pose estimation of zebras.
We extensively train and benchmark our detection and 2D pose estimation models on multiple real-world and synthetic datasets.
These experiments show that models trained from scratch using only synthetic data consistently generalize to real-world images of zebras.
arXiv Detail & Related papers (2024-08-20T13:28:37Z)
- Learning Human Action Recognition Representations Without Real Humans [66.61527869763819]
We present a benchmark that leverages real-world videos with humans removed and synthetic data containing virtual humans to pre-train a model.
We then evaluate the transferability of the representation learned on this data to a diverse set of downstream action recognition benchmarks.
Our approach outperforms previous baselines by up to 5%.
arXiv Detail & Related papers (2023-11-10T18:38:14Z)
- Learning Defect Prediction from Unrealistic Data [57.53586547895278]
Pretrained models of code have become popular choices for code understanding and generation tasks.
Such models tend to be large and require commensurate volumes of training data.
It has become popular to train models with far larger but less realistic datasets, such as functions with artificially injected bugs.
Models trained on such data tend to only perform well on similar data, while underperforming on real world programs.
arXiv Detail & Related papers (2023-11-02T01:51:43Z)
- Exploring the Effectiveness of Dataset Synthesis: An application of Apple Detection in Orchards [68.95806641664713]
We explore the usability of Stable Diffusion 2.1-base for generating synthetic datasets of apple trees for object detection.
We train a YOLOv5m object detection model to detect apples in a real-world apple detection dataset.
Results demonstrate that the model trained on generated data is slightly underperforming compared to a baseline model trained on real-world images.
arXiv Detail & Related papers (2023-06-20T09:46:01Z)
- Learning from synthetic data generated with GRADE [0.6982738885923204]
We present a framework for generating realistic animated dynamic environments (GRADE) for robotics research.
GRADE supports full simulation control, ROS integration, realistic physics, while being in an engine that produces high visual fidelity images and ground truth data.
We show that models trained using only synthetic data can generalize well to real-world images in the same application domain.
arXiv Detail & Related papers (2023-05-07T14:13:04Z)
- Bridging the Gap to Real-World Object-Centric Learning [66.55867830853803]
We show that reconstructing features from models trained in a self-supervised manner is a sufficient training signal for object-centric representations to arise in a fully unsupervised way.
Our approach, DINOSAUR, significantly outperforms existing object-centric learning models on simulated data.
arXiv Detail & Related papers (2022-09-29T15:24:47Z)
- PeopleSansPeople: A Synthetic Data Generator for Human-Centric Computer Vision [3.5694949627557846]
We release a human-centric synthetic data generator PeopleSansPeople.
It contains simulation-ready 3D human assets, a parameterized lighting and camera system, and generates 2D and 3D bounding box, instance and semantic segmentation, and COCO pose labels.
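Since the entry above mentions COCO pose labels, here is a minimal sketch of the COCO-style annotation record such a generator emits: a pixel-space [x, y, width, height] box plus one flat (x, y, visibility) triple per joint. The field names and the 17-joint count follow the standard COCO person format; the helper and its values are illustrative, not taken from the PeopleSansPeople codebase.

```python
# Illustrative COCO-style person annotation builder (hypothetical helper).
import json

NUM_COCO_KEYPOINTS = 17  # standard COCO person skeleton

def make_annotation(ann_id, image_id, bbox, keypoints):
    """bbox: [x, y, w, h] in pixels; keypoints: flat [x1, y1, v1, x2, y2, v2, ...]."""
    assert len(keypoints) == 3 * NUM_COCO_KEYPOINTS
    return {
        "id": ann_id,
        "image_id": image_id,
        "category_id": 1,  # "person" in COCO
        "bbox": bbox,
        "area": bbox[2] * bbox[3],
        "iscrowd": 0,
        "keypoints": keypoints,
        # a visibility flag v > 0 means the joint is labelled
        "num_keypoints": sum(1 for i in range(2, len(keypoints), 3) if keypoints[i] > 0),
    }

# One synthetic person: only the first two joints labelled, the rest unlabelled.
kps = [120, 80, 2, 130, 85, 1] + [0, 0, 0] * (NUM_COCO_KEYPOINTS - 2)
ann = make_annotation(1, 42, [100, 60, 50, 120], kps)
print(json.dumps(ann)[:60], "...")
print(ann["num_keypoints"], ann["area"])  # -> 2 6000
```

Records in this shape can be dropped straight into the `annotations` array of a COCO JSON file and consumed by standard detection and pose-estimation training code.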
arXiv Detail & Related papers (2021-12-17T02:33:31Z)
- Fake It Till You Make It: Face analysis in the wild using synthetic data alone [9.081019005437309]
We show that it is possible to perform face-related computer vision in the wild using synthetic data alone.
We describe how to combine a procedurally-generated 3D face model with a comprehensive library of hand-crafted assets to render training images with unprecedented realism.
arXiv Detail & Related papers (2021-09-30T13:07:04Z)
- REGRAD: A Large-Scale Relational Grasp Dataset for Safe and Object-Specific Robotic Grasping in Clutter [52.117388513480435]
We present a new dataset named REGRAD to sustain the modeling of relationships among objects and grasps.
Our dataset is collected in both forms of 2D images and 3D point clouds.
Users are free to import their own object models to generate as much data as they want.
arXiv Detail & Related papers (2021-04-29T05:31:21Z)
- Hidden Footprints: Learning Contextual Walkability from 3D Human Trails [70.01257397390361]
Current datasets only tell you where people are, not where they could be.
We first augment the set of valid, labeled walkable regions by propagating person observations between images, utilizing 3D information to create what we call hidden footprints.
We devise a training strategy designed for such sparse labels, combining a class-balanced classification loss with a contextual adversarial loss.
arXiv Detail & Related papers (2020-08-19T23:19:08Z)
- Virtual to Real adaptation of Pedestrian Detectors [9.432150710329607]
ViPeD is a new synthetically generated set of images collected with the graphical engine of the video game GTA V (Grand Theft Auto V).
We propose two different Domain Adaptation techniques suitable for the pedestrian detection task, but possibly applicable to general object detection.
Experiments show that the network trained with ViPeD can generalize over unseen real-world scenarios better than the detector trained over real-world data.
arXiv Detail & Related papers (2020-01-09T14:50:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.