Validation of object detection in UAV-based images using synthetic data
- URL: http://arxiv.org/abs/2201.06629v1
- Date: Mon, 17 Jan 2022 20:56:56 GMT
- Title: Validation of object detection in UAV-based images using synthetic data
- Authors: Eung-Joo Lee, Damon M. Conover, Shuvra S. Bhattacharyya, Heesung
Kwon, Jason Hill, Kenneth Evensen
- Abstract summary: Machine learning (ML) models for UAV-based detection are often validated using data curated for tasks unrelated to the UAV application.
Such errors arise due to differences in imaging conditions between UAV images and training images.
Our work is focused on understanding the impact of different UAV-based imaging conditions on detection performance by using synthetic data generated using a game engine.
- Score: 9.189702268557483
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Object detection is increasingly used onboard Unmanned Aerial Vehicles (UAV)
for various applications; however, the machine learning (ML) models for
UAV-based detection are often validated using data curated for tasks unrelated
to the UAV application. This is a concern because, although training neural
networks on large-scale benchmarks has shown excellent capability in generic
object detection tasks, conventional training approaches can lead to large
inference errors for UAV-based images. Such errors arise from differences in
imaging conditions between UAV images and training images. To overcome
this problem, we characterize boundary conditions of ML models, beyond which
the models exhibit rapid degradation in detection accuracy. Our work is focused
on understanding the impact of different UAV-based imaging conditions on
detection performance by using synthetic data generated using a game engine.
Properties of the game engine are exploited to populate the synthetic datasets
with realistic and annotated images. Specifically, it enables the fine control
of various parameters, such as camera position, view angle, illumination
conditions, and object pose. Using the synthetic datasets, we analyze detection
accuracy in different imaging conditions as a function of the above parameters.
We use three well-known neural network models of varying complexity. In our
experiments, we observe and quantify the following: 1) how
detection accuracy drops as the camera moves toward the nadir-view region; 2)
how detection accuracy varies with object pose; and 3) the
degree to which the robustness of the models changes as illumination conditions
vary.
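The parameter-sweep analysis described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' code: the parameter ranges, the `detector_accuracy` placeholder, and its penalty terms are all assumptions standing in for rendering a synthetic image and running a real detector.

```python
import itertools
from collections import defaultdict

# Hypothetical sweep ranges; the paper controls camera position,
# view angle, illumination, and object pose via a game engine.
ELEVATIONS = range(0, 91, 15)      # degrees above horizon; 90 = nadir view
AZIMUTHS = range(0, 360, 45)       # camera azimuth around the object
SUN_LEVELS = [0.25, 0.5, 1.0]      # relative illumination intensity

def detector_accuracy(elevation, azimuth, sun):
    """Placeholder for rendering a synthetic image and running a detector.

    It merely models the paper's qualitative finding that accuracy drops
    toward the nadir view and under dim illumination; the coefficients
    are invented for illustration.
    """
    nadir_penalty = (elevation / 90.0) ** 2
    light_penalty = 0.3 * (1.0 - sun)
    return max(0.0, 0.95 - 0.5 * nadir_penalty - light_penalty)

# Sweep the full parameter grid and aggregate accuracy per elevation bin,
# mirroring how detection accuracy is analyzed as a function of each parameter.
per_elevation = defaultdict(list)
for elev, azim, sun in itertools.product(ELEVATIONS, AZIMUTHS, SUN_LEVELS):
    per_elevation[elev].append(detector_accuracy(elev, azim, sun))

for elev in sorted(per_elevation):
    scores = per_elevation[elev]
    print(f"elevation {elev:2d} deg: mean accuracy {sum(scores)/len(scores):.3f}")
```

In the real pipeline, `detector_accuracy` would be replaced by rendering an annotated frame in the game engine at those parameters and scoring one of the three detectors against the ground-truth boxes.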
Related papers
- Uncertainty Estimation for 3D Object Detection via Evidential Learning [63.61283174146648]
We introduce a framework for quantifying uncertainty in 3D object detection by leveraging an evidential learning loss on Bird's Eye View representations in the 3D detector.
We demonstrate both the efficacy and importance of these uncertainty estimates on identifying out-of-distribution scenes, poorly localized objects, and missing (false negative) detections.
arXiv Detail & Related papers (2024-10-31T13:13:32Z)
- Synthetic imagery for fuzzy object detection: A comparative study [3.652647451754697]
Fuzzy object detection is a challenging field of research in computer vision (CV).
Fuzzy objects such as fire, smoke, mist, and steam present significantly greater complexities in terms of visual features.
We propose and leverage an alternative method of generating and automatically annotating fully synthetic fire images.
arXiv Detail & Related papers (2024-10-01T23:22:54Z)
- AssemAI: Interpretable Image-Based Anomaly Detection for Manufacturing Pipelines [0.0]
Anomaly detection in manufacturing pipelines remains a critical challenge, intensified by the complexity and variability of industrial environments.
This paper introduces AssemAI, an interpretable image-based anomaly detection system tailored for smart manufacturing pipelines.
arXiv Detail & Related papers (2024-08-05T01:50:09Z)
- Online-Adaptive Anomaly Detection for Defect Identification in Aircraft Assembly [4.387337528923525]
Anomaly detection deals with detecting deviations from established patterns within data.
We propose a novel framework for online-adaptive anomaly detection using transfer learning.
Experimental results showcase a detection accuracy exceeding 0.975, outperforming the state-of-the-art ET-NET approach.
arXiv Detail & Related papers (2024-06-18T15:11:44Z)
- Generalized Few-Shot 3D Object Detection of LiDAR Point Cloud for Autonomous Driving [91.39625612027386]
We propose a novel task, called generalized few-shot 3D object detection, where we have a large amount of training data for common (base) objects, but only a few data for rare (novel) classes.
Specifically, we analyze in-depth differences between images and point clouds, and then present a practical principle for the few-shot setting in the 3D LiDAR dataset.
To solve this task, we propose an incremental fine-tuning method to extend existing 3D detection models to recognize both common and rare objects.
arXiv Detail & Related papers (2023-02-08T07:11:36Z)
- An Outlier Exposure Approach to Improve Visual Anomaly Detection Performance for Mobile Robots [76.36017224414523]
We consider the problem of building visual anomaly detection systems for mobile robots.
Standard anomaly detection models are trained using large datasets composed only of non-anomalous data.
We tackle the problem of exploiting these data to improve the performance of a Real-NVP anomaly detection model.
arXiv Detail & Related papers (2022-09-20T15:18:13Z)
- Adversarially-Aware Robust Object Detector [85.10894272034135]
We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances detection robustness while maintaining detection ability on clean images.
arXiv Detail & Related papers (2022-07-13T13:59:59Z)
- Analysis and Adaptation of YOLOv4 for Object Detection in Aerial Images [0.0]
Our work shows the adaptation of the popular YOLOv4 framework for predicting the objects and their locations in aerial images.
The trained model resulted in a mean average precision (mAP) of 45.64% with an inference speed reaching 8.7 FPS on the Tesla K80 GPU.
A comparative study with several contemporary aerial object detectors showed that YOLOv4 performed better, making it a more suitable detection algorithm for aerial platforms.
arXiv Detail & Related papers (2022-03-18T23:51:09Z) - DAE : Discriminatory Auto-Encoder for multivariate time-series anomaly
detection in air transportation [68.8204255655161]
We propose a novel anomaly detection model called Discriminatory Auto-Encoder (DAE).
It builds on a standard LSTM-based auto-encoder but adds several decoders, each dedicated to a specific flight phase.
Results show that the DAE achieves better results in both accuracy and speed of detection.
arXiv Detail & Related papers (2021-09-08T14:07:55Z) - Cycle and Semantic Consistent Adversarial Domain Adaptation for Reducing
Simulation-to-Real Domain Shift in LiDAR Bird's Eye View [110.83289076967895]
We present a BEV domain adaptation method based on CycleGAN that uses prior semantic classification in order to preserve the information of small objects of interest during the domain adaptation process.
The quality of the generated BEVs has been evaluated using a state-of-the-art 3D object detection framework on the KITTI 3D Object Detection Benchmark.
arXiv Detail & Related papers (2021-04-22T12:47:37Z) - Decoupled Appearance and Motion Learning for Efficient Anomaly Detection
in Surveillance Video [9.80717374118619]
We propose a new neural network architecture that learns the normal behavior in a purely unsupervised fashion.
Our model can process 16 to 45 times more frames per second than related approaches.
arXiv Detail & Related papers (2020-11-10T11:40:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.