Fairness in Autonomous Driving: Towards Understanding Confounding Factors in Object Detection under Challenging Weather
- URL: http://arxiv.org/abs/2406.00219v1
- Date: Fri, 31 May 2024 22:35:10 GMT
- Title: Fairness in Autonomous Driving: Towards Understanding Confounding Factors in Object Detection under Challenging Weather
- Authors: Bimsara Pathiraja, Caleb Liu, Ransalu Senanayake
- Abstract summary: This study provides an empirical analysis of fairness in detecting pedestrians in a state-of-the-art transformer-based object detector.
In addition to classical metrics, we introduce novel probability-based metrics to measure various intricate properties of object detection.
Our quantitative analysis reveals how previously overlooked yet intuitive factors, such as the distribution of demographic groups in the scene, the severity of the weather, and the pedestrians' proximity to the AV, affect object detection performance.
- Score: 7.736445799116692
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The deployment of autonomous vehicles (AVs) is rapidly expanding to numerous cities. At the heart of an AV, the object detection module plays a paramount role, directly influencing all downstream decision-making tasks by accounting for the presence of nearby pedestrians, vehicles, and more. Despite the high accuracy of pedestrian detection on held-out datasets, the potential presence of algorithmic bias in such object detectors, particularly under challenging weather conditions, remains unclear. This study provides a comprehensive empirical analysis of fairness in detecting pedestrians with a state-of-the-art transformer-based object detector. In addition to classical metrics, we introduce novel probability-based metrics to measure various intricate properties of object detection. Leveraging the state-of-the-art FACET dataset and the Carla high-fidelity vehicle simulator, our analysis explores the effect of protected attributes such as gender, skin tone, and body size on object detection performance under varying environmental conditions such as ambient darkness and fog. Our quantitative analysis reveals how previously overlooked yet intuitive factors, such as the distribution of demographic groups in the scene, the severity of the weather, and the pedestrians' proximity to the AV, affect object detection performance. Our code is available at https://github.com/bimsarapathiraja/fair-AV.
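The abstract describes comparing detection performance across demographic groups. As a minimal sketch of what such a group-wise comparison can look like, the hypothetical snippet below computes per-group recall (fraction of ground-truth pedestrians matched by a detection at a given IoU threshold) and the largest recall gap between groups. The field names, group labels, and toy records are illustrative assumptions, not the paper's actual data or metric definitions.

```python
# Hypothetical sketch: per-group detection recall and disparity.
# Group labels ("skin_tone") and records are illustrative only.
from collections import defaultdict

def group_recall(records, group_key="skin_tone", iou_thresh=0.5):
    """Fraction of ground-truth pedestrians detected, per group.

    Each record holds the ground-truth group label and a precomputed
    best-match IoU against the detector's outputs for that pedestrian.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for gt in records:
        group = gt[group_key]
        totals[group] += 1
        if gt["best_iou"] >= iou_thresh:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

def disparity(recalls):
    """Largest recall gap between any two groups."""
    values = list(recalls.values())
    return max(values) - min(values)

# Toy example: two groups with different best-match IoUs.
records = [
    {"skin_tone": "light", "best_iou": 0.8},
    {"skin_tone": "light", "best_iou": 0.6},
    {"skin_tone": "dark", "best_iou": 0.7},
    {"skin_tone": "dark", "best_iou": 0.3},
]
recalls = group_recall(records)
print(recalls)             # {'light': 1.0, 'dark': 0.5}
print(disparity(recalls))  # 0.5
```

In practice the same grouping would be repeated across weather severities and distance bins to expose the confounding factors the study analyzes.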
Related papers
- CRASH: Crash Recognition and Anticipation System Harnessing with Context-Aware and Temporal Focus Attentions [13.981748780317329]
Accurately and promptly predicting accidents among surrounding traffic agents from camera footage is crucial for the safety of autonomous vehicles (AVs).
This study introduces a novel accident anticipation framework for AVs, termed CRASH.
It seamlessly integrates five components: object detector, feature extractor, object-aware module, context-aware module, and multi-layer fusion.
Our model surpasses existing top baselines on critical evaluation metrics such as Average Precision (AP) and mean Time-To-Accident (mTTA).
arXiv Detail & Related papers (2024-07-25T04:12:49Z) - OOSTraj: Out-of-Sight Trajectory Prediction With Vision-Positioning Denoising [49.86409475232849]
Trajectory prediction is fundamental in computer vision and autonomous driving.
Existing approaches in this field often assume precise and complete observational data.
We present a novel method for out-of-sight trajectory prediction that leverages a vision-positioning technique.
arXiv Detail & Related papers (2024-04-02T18:30:29Z) - Zone Evaluation: Revealing Spatial Bias in Object Detection [69.59295428233844]
A fundamental limitation of object detectors is that they suffer from "spatial bias".
We present a new zone evaluation protocol, which measures the detection performance over zones.
For the first time, we provide numerical results, showing that the object detectors perform quite unevenly across the zones.
arXiv Detail & Related papers (2023-10-20T01:44:49Z) - DARTH: Holistic Test-time Adaptation for Multiple Object Tracking [87.72019733473562]
Multiple object tracking (MOT) is a fundamental component of perception systems for autonomous driving.
Despite the safety-critical nature of driving systems, no solution for adapting MOT to domain shift under test-time conditions has ever been proposed.
We introduce DARTH, a holistic test-time adaptation framework for MOT.
arXiv Detail & Related papers (2023-10-03T10:10:42Z) - Dynamic and Static Object Detection Considering Fusion Regions and Point-wise Features [7.41540085468436]
This paper proposes a new approach to detect static and dynamic objects in front of an autonomous vehicle.
Our approach can also get other characteristics from the objects detected, like their position, velocity, and heading.
To demonstrate our proposal's performance, we assess it on a benchmark dataset and on real-world data obtained from an autonomous platform.
arXiv Detail & Related papers (2021-07-27T09:42:18Z) - On the Role of Sensor Fusion for Object Detection in Future Vehicular Networks [25.838878314196375]
We evaluate how using a combination of different sensors affects the detection of the environment in which the vehicles move and operate.
The final objective is to identify the optimal setup that would minimize the amount of data to be distributed over the channel.
arXiv Detail & Related papers (2021-04-23T18:58:37Z) - Cycle and Semantic Consistent Adversarial Domain Adaptation for Reducing Simulation-to-Real Domain Shift in LiDAR Bird's Eye View [110.83289076967895]
We present a BEV domain adaptation method based on CycleGAN that uses prior semantic classification in order to preserve the information of small objects of interest during the domain adaptation process.
The quality of the generated BEVs has been evaluated using a state-of-the-art 3D object detection framework on the KITTI 3D Object Detection Benchmark.
arXiv Detail & Related papers (2021-04-22T12:47:37Z) - Slender Object Detection: Diagnoses and Improvements [74.40792217534]
In this paper, we are concerned with the detection of a particular type of object with extreme aspect ratios, namely slender objects.
For a classical object detection method, a drastic drop of 18.9% mAP on COCO is observed if it is evaluated solely on slender objects.
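Re-evaluating mAP on slender objects requires first selecting them by aspect ratio. The sketch below shows one hypothetical way to filter boxes for such a subset evaluation; the threshold of 5 is an illustrative choice, not the paper's definition of "slender".

```python
# Hypothetical sketch: selecting "slender" boxes by aspect ratio so
# mAP can be re-evaluated on that subset. Threshold is illustrative.

def is_slender(box, ratio_thresh=5.0):
    """box = (x, y, width, height) in COCO convention."""
    _, _, w, h = box
    long_side = max(w, h)
    short_side = max(min(w, h), 1e-6)  # guard against zero-size boxes
    return long_side / short_side >= ratio_thresh

boxes = [(0, 0, 100, 10), (0, 0, 50, 40), (0, 0, 8, 120)]
slender = [b for b in boxes if is_slender(b)]
print(slender)  # [(0, 0, 100, 10), (0, 0, 8, 120)]
```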
arXiv Detail & Related papers (2020-11-17T09:39:42Z) - Perceiving Traffic from Aerial Images [86.994032967469]
We propose an object detection method called Butterfly Detector that is tailored to detect objects in aerial images.
We evaluate our Butterfly Detector on two publicly available UAV datasets (UAVDT and VisDrone 2019) and show that it outperforms previous state-of-the-art methods while remaining real-time.
arXiv Detail & Related papers (2020-09-16T11:37:43Z) - Object Detection Under Rainy Conditions for Autonomous Vehicles: A Review of State-of-the-Art and Emerging Techniques [5.33024001730262]
This paper presents a tutorial on state-of-the-art techniques for mitigating the influence of rainy conditions on an autonomous vehicle's ability to detect objects.
Our goal includes surveying and analyzing the performance of object detection methods trained and tested using visual data captured under clear and rainy conditions.
arXiv Detail & Related papers (2020-06-30T02:05:10Z) - Weighted Average Precision: Adversarial Example Detection in the Visual Perception of Autonomous Vehicles [10.72357267154474]
We propose a novel distance metric for practical autonomous driving object detection outputs.
We show that our approach outperforms existing single-frame-mAP-based adversarial example (AE) detection, improving accuracy by 17.76%.
arXiv Detail & Related papers (2020-01-25T23:59:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.