Identification of Fine-grained Systematic Errors via Controlled Scene Generation
- URL: http://arxiv.org/abs/2404.07045v1
- Date: Wed, 10 Apr 2024 14:35:22 GMT
- Title: Identification of Fine-grained Systematic Errors via Controlled Scene Generation
- Authors: Valentyn Boreiko, Matthias Hein, Jan Hendrik Metzen
- Abstract summary: We propose a pipeline for generating realistic synthetic scenes with fine-grained control.
Our approach, BEV2EGO, allows for realistic generation of the complete scene with road-contingent control.
In addition, we propose a benchmark for controlled scene generation to select the most appropriate generative outpainting model for BEV2EGO.
- Score: 41.398080398462994
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many safety-critical applications, especially in autonomous driving, require reliable object detectors. They can be very effectively assisted by a method to search for and identify potential failures and systematic errors before these detectors are deployed. Systematic errors are characterized by combinations of attributes such as object location, scale, orientation, and color, as well as the composition of their respective backgrounds. To identify them, one must rely on something other than real images from a test set, because real test images do not account for very rare but possible combinations of attributes. To overcome this limitation, we propose a pipeline for generating realistic synthetic scenes with fine-grained control, allowing the creation of complex scenes with multiple objects. Our approach, BEV2EGO, allows for realistic generation of the complete scene with road-contingent control that maps 2D bird's-eye view (BEV) scene configurations to a first-person view (EGO). In addition, we propose a benchmark for controlled scene generation to select the most appropriate generative outpainting model for BEV2EGO. We further use this pipeline to perform a systematic analysis of multiple state-of-the-art object detection models and discover differences between them.
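As a rough illustration of the kind of search this pipeline enables, the Python sketch below sweeps combinations of object attributes and collects those the detector misses. `render_ego_view`, `detection_score`, and the attribute grids are hypothetical placeholders for the paper's components, not its actual interface.

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class BEVObject:
    """One object in the 2D bird's-eye-view scene configuration."""
    cls: str        # object class, e.g. "car"
    x: float        # lateral offset from the ego lane centre (metres)
    y: float        # longitudinal distance from the ego camera (metres)
    heading: float  # orientation in degrees
    color: str      # coarse appearance attribute

def render_ego_view(objects):
    """Placeholder for the BEV-to-EGO generation step (e.g. the
    outpainting model selected with the paper's benchmark)."""
    raise NotImplementedError("plug in a generative scene model here")

def detection_score(image, obj):
    """Placeholder: run the detector under test and return the
    confidence it assigns to `obj` (0.0 means a complete miss)."""
    raise NotImplementedError("plug in the object detector here")

def search_systematic_errors(threshold=0.3):
    """Sweep attribute combinations; combinations the detector
    consistently fails on are candidate systematic errors."""
    failures = []
    for x, y, heading, color in product(
        [-3.0, 0.0, 3.0],           # lateral position
        [10.0, 25.0, 50.0],         # distance from the camera
        [0.0, 45.0, 90.0],          # orientation
        ["white", "red", "black"],  # appearance
    ):
        obj = BEVObject("car", x, y, heading, color)
        image = render_ego_view([obj])
        if detection_score(image, obj) < threshold:
            failures.append(obj)
    return failures
```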
Related papers
- Run-time Introspection of 2D Object Detection in Automated Driving Systems Using Learning Representations [13.529124221397822]
We introduce a novel introspection solution for 2D object detection based on Deep Neural Networks (DNNs).
We implement several state-of-the-art (SOTA) introspection mechanisms for error detection in 2D object detection, using one-stage and two-stage object detectors evaluated on the KITTI and BDD datasets.
Our performance evaluation shows that the proposed introspection solution outperforms SOTA methods, achieving an absolute reduction in the missed error ratio of 9% to 17% on the BDD dataset.
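The general mechanism behind such introspection can be sketched as a classifier over learned representations that predicts per-frame detection errors. The random features, labels, and threshold below are illustrative stand-ins, not the paper's architecture.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in training data: one pooled backbone-feature vector per frame,
# labelled 1 if the detector missed at least one object in that frame.
rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 256))
has_error = rng.integers(0, 2, size=1000)

introspector = LogisticRegression(max_iter=1000).fit(features, has_error)

def flag_frame(feature_vec, threshold=0.5):
    """Raise a run-time alarm when the introspector predicts that the
    detector has missed something in the current frame."""
    p_error = introspector.predict_proba(feature_vec.reshape(1, -1))[0, 1]
    return p_error >= threshold
```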
arXiv Detail & Related papers (2024-03-02T10:56:14Z)
- Towards Unified 3D Object Detection via Algorithm and Data Unification [70.27631528933482]
We build the first unified multi-modal 3D object detection benchmark MM-Omni3D and extend the proposed monocular detector to its multi-modal version.
We name the designed monocular and multi-modal detectors UniMODE and MM-UniMODE, respectively.
arXiv Detail & Related papers (2024-02-28T18:59:31Z)
- Towards Generalizable Multi-Camera 3D Object Detection via Perspective Debiasing [28.874014617259935]
Multi-Camera 3D Object Detection (MC3D-Det) has gained prominence with the advent of bird's-eye view (BEV) approaches.
We propose a novel method that aligns 3D detection with 2D camera plane results, ensuring consistent and accurate detections.
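The alignment idea can be sketched with a standard pinhole model: project the predicted 3D box centres into each camera's image plane and penalise disagreement with the 2D predictions. The L1 consistency term below is an illustration, not the paper's exact loss.

```python
import numpy as np

def project_points(points_world, K, T_world_to_cam):
    """Pinhole projection of Nx3 world points into pixel coordinates,
    given 3x3 intrinsics K and a 4x4 world-to-camera extrinsic."""
    homo = np.hstack([points_world, np.ones((len(points_world), 1))])
    cam = (T_world_to_cam @ homo.T).T[:, :3]  # world -> camera frame
    uv = (K @ cam.T).T                        # camera frame -> image plane
    return uv[:, :2] / uv[:, 2:3]             # perspective divide

def perspective_consistency_loss(centers_3d, centers_2d, K, T):
    """Mean L1 disagreement between projected 3D box centres and the
    box centres predicted directly on the 2D camera plane."""
    return np.abs(project_points(centers_3d, K, T) - centers_2d).mean()
```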
arXiv Detail & Related papers (2023-10-17T15:31:28Z)
- Identifying Systematic Errors in Object Detectors with the SCROD Pipeline [46.52729366461028]
The identification and removal of systematic errors in object detectors can be a prerequisite for their deployment in safety-critical applications.
Real test images do not cover the very rare attribute combinations in which such errors occur; we overcome this limitation by generating synthetic images with fine-granular control.
We propose a novel framework that combines the strengths of both approaches.
arXiv Detail & Related papers (2023-09-23T22:41:08Z)
- MonoTDP: Twin Depth Perception for Monocular 3D Object Detection in Adverse Scenes [49.21187418886508]
This paper proposes a monocular 3D detection model designed to perceive twin depth in adverse scenes, termed MonoTDP.
We first introduce an adaptive learning strategy that helps the model handle uncontrollable weather conditions and resist the degradation caused by various adverse factors.
Then, to address the depth/content loss in adverse regions, we propose a novel twin depth perception module that simultaneously estimates scene and object depth.
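A minimal PyTorch sketch of the twin-head idea, assuming a shared feature map and a simple averaging fusion; the paper's actual module design and fusion scheme may differ.

```python
import torch.nn as nn

class TwinDepthHead(nn.Module):
    """Two parallel heads over a shared feature map: one for dense
    scene depth, one for object-level depth."""
    def __init__(self, in_channels=256):
        super().__init__()
        self.scene_depth = nn.Conv2d(in_channels, 1, kernel_size=1)
        self.object_depth = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, feats):
        d_scene = self.scene_depth(feats)
        d_object = self.object_depth(feats)
        # Averaging is an illustrative fusion of the two estimates.
        return d_scene, d_object, 0.5 * (d_scene + d_object)
```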
arXiv Detail & Related papers (2023-05-18T13:42:02Z)
- CrowdSim2: an Open Synthetic Benchmark for Object Detectors [0.7223361655030193]
This paper presents and publicly releases CrowdSim2, a new synthetic collection of images suitable for people and vehicle detection.
It consists of thousands of images gathered from various synthetic scenarios resembling the real world, where we varied some factors of interest.
We exploited this new benchmark as a testing ground for several state-of-the-art detectors, showing that our simulated scenarios can be a valuable tool for measuring their performance in a controlled environment.
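Slicing scores by the controlled factor is what makes such a benchmark informative. The sketch below computes per-factor recall; the fog-density factor and the counts are purely hypothetical.

```python
from collections import defaultdict

def recall_by_factor(results):
    """`results` holds one (factor_value, n_true_positives, n_ground_truth)
    tuple per synthetic image; aggregating per factor setting makes
    failures tied to a single controlled factor stand out."""
    tp, gt = defaultdict(int), defaultdict(int)
    for factor, n_tp, n_gt in results:
        tp[factor] += n_tp
        gt[factor] += n_gt
    return {f: tp[f] / gt[f] for f in gt if gt[f] > 0}

# Hypothetical usage: detector recall as simulated fog density increases.
print(recall_by_factor([("fog=0.0", 95, 100), ("fog=0.5", 80, 100), ("fog=0.9", 40, 100)]))
```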
arXiv Detail & Related papers (2023-04-11T09:35:57Z)
- Reference-based Defect Detection Network [57.89399576743665]
The first issue is texture shift: a trained defect detector is easily affected by unseen textures.
The second issue is partial visual confusion: a partial defect box is visually similar to a complete box.
We propose a Reference-based Defect Detection Network (RDDN) to tackle these two problems.
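The intuition behind a template reference can be sketched by subtracting a defect-free reference from the test input: the shared (possibly unseen) texture cancels and the residual highlights the defect. RDDN works on learned features rather than raw pixels, so this is only an illustration.

```python
import numpy as np

def reference_residual(test_patch, ref_patch):
    """Absolute difference against a defect-free reference patch;
    texture common to both images is suppressed in the residual."""
    return np.abs(test_patch.astype(np.float32) - ref_patch.astype(np.float32))

def defect_proposal(test_patch, ref_patch, thresh=30.0):
    """Binary defect mask from the residual; the threshold is an
    arbitrary illustrative value."""
    return reference_residual(test_patch, ref_patch) > thresh
```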
arXiv Detail & Related papers (2021-08-10T05:44:23Z)
- One-Shot Object Affordance Detection in the Wild [76.46484684007706]
Affordance detection refers to identifying the potential action possibilities of objects in an image.
We devise a One-Shot Affordance Detection Network (OSAD-Net) that estimates the human action purpose and then transfers it to help detect the common affordance from all candidate images.
With complex scenes and rich annotations, our PADv2 dataset can be used as a test bed to benchmark affordance detection methods.
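The transfer step can be caricatured as scoring query-image features against a purpose embedding estimated from the single support image. The cosine-similarity scoring below is a hypothetical stand-in, not OSAD-Net's actual design.

```python
import torch
import torch.nn.functional as F

def transfer_affordance(purpose_embedding, query_feats):
    """Score each query feature by cosine similarity to the action-purpose
    embedding; high scores mark candidate affordance regions.
    purpose_embedding: (1, C), query_feats: (N, C) -> scores: (N, 1)."""
    purpose = F.normalize(purpose_embedding, dim=-1)
    queries = F.normalize(query_feats, dim=-1)
    return queries @ purpose.T

# Toy usage with random features.
print(transfer_affordance(torch.randn(1, 64), torch.randn(5, 64)))
```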
arXiv Detail & Related papers (2021-08-08T14:53:10Z)
- SceneChecker: Boosting Scenario Verification using Symmetry Abstractions [3.8995911009078816]
SceneChecker is a tool for verifying scenarios involving vehicles executing complex plans in large cluttered workspaces.
SceneChecker shows a 20x speedup in verification time over existing reachability tools, even while using those same tools as reachability subroutines.
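The speedup comes from reusing reachability results across scenarios that are symmetric to one already verified. A minimal sketch, assuming pure translation symmetry of the dynamics (SceneChecker supports richer transform groups) and an arbitrary `compute_reachtube` subroutine:

```python
import numpy as np

_cache = {}

def canonicalize(lo, hi):
    """Translate the initial box so its centre is at the origin;
    translation invariance is the assumed symmetry here."""
    shift = (lo + hi) / 2.0
    return lo - shift, hi - shift, shift

def reach(lo, hi, compute_reachtube):
    """Answer a reachability query from the cache when the initial set
    is a translate of one already verified; otherwise call the
    expensive reachability subroutine once and cache the result."""
    c_lo, c_hi, shift = canonicalize(np.asarray(lo, float), np.asarray(hi, float))
    key = (tuple(c_lo), tuple(c_hi))
    if key not in _cache:
        _cache[key] = compute_reachtube(c_lo, c_hi)
    return [(box_lo + shift, box_hi + shift) for box_lo, box_hi in _cache[key]]
```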
arXiv Detail & Related papers (2020-11-21T03:18:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.