ICON$^2$: Reliably Benchmarking Predictive Inequity in Object Detection
- URL: http://arxiv.org/abs/2306.04482v1
- Date: Wed, 7 Jun 2023 17:42:42 GMT
- Title: ICON$^2$: Reliably Benchmarking Predictive Inequity in Object Detection
- Authors: Sruthi Sudhakar, Viraj Prabhu, Olga Russakovsky, Judy Hoffman
- Abstract summary: Concerns about social bias in computer vision systems are rising.
We introduce ICON$^2$, a framework for robustly answering this question.
We conduct an in-depth study on the performance of object detection with respect to income from the BDD100K driving dataset.
- Score: 23.419153864862174
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As computer vision systems are being increasingly deployed at scale in
high-stakes applications like autonomous driving, concerns about social bias in
these systems are rising. Analysis of fairness in real-world vision systems,
such as object detection in driving scenes, has been limited to observing
predictive inequity across attributes such as pedestrian skin tone, and lacks a
consistent methodology to disentangle the role of confounding variables e.g.
does my model perform worse for a certain skin tone, or are such scenes in my
dataset more challenging due to occlusion and crowds? In this work, we
introduce ICON$^2$, a framework for robustly answering this question. ICON$^2$
leverages prior knowledge on the deficiencies of object detection systems to
identify performance discrepancies across sub-populations, compute correlations
between these potential confounders and a given sensitive attribute, and
control for the most likely confounders to obtain a more reliable estimate of
model bias. Using our approach, we conduct an in-depth study on the performance
of object detection with respect to income from the BDD100K driving dataset,
revealing useful insights.
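The abstract's core methodology, measuring a performance gap across sub-populations and then controlling for a correlated confounder such as occlusion, can be sketched as below. This is an illustrative sketch on synthetic data, not the paper's actual implementation; the function name `stratified_gap`, the use of mean detection score as the metric, and the quantile-binning scheme are all assumptions.

```python
# Hedged sketch of confounder-controlled bias estimation (illustrative only):
# the metric, binning scheme, and synthetic data are assumptions, not ICON^2's code.
import numpy as np

rng = np.random.default_rng(0)

def stratified_gap(scores, group, confounder, n_bins=4):
    """Compare mean detection score between two groups, both raw and
    within strata of a confounder (e.g. occlusion level)."""
    raw_gap = scores[group == 1].mean() - scores[group == 0].mean()
    # Bin the confounder and average per-bin gaps, so scenes of
    # comparable difficulty are compared against each other.
    edges = np.quantile(confounder, np.linspace(0, 1, n_bins + 1))
    idx = np.digitize(confounder, edges[1:-1])  # bin index in 0..n_bins-1
    gaps = []
    for b in range(n_bins):
        m = idx == b
        if (group[m] == 1).any() and (group[m] == 0).any():
            gaps.append(scores[m & (group == 1)].mean()
                        - scores[m & (group == 0)].mean())
    return float(raw_gap), float(np.mean(gaps))

# Synthetic data: group 1 appears in more occluded scenes, and score
# drops with occlusion -- so the raw gap overstates model bias.
n = 5000
group = rng.integers(0, 2, n)
occlusion = rng.uniform(0, 1, n) + 0.3 * group   # confounder correlates with group
scores = 0.8 - 0.4 * occlusion + rng.normal(0, 0.05, n)

raw, controlled = stratified_gap(scores, group, occlusion)
print(f"raw gap: {raw:.3f}, confounder-controlled gap: {controlled:.3f}")
```

On this synthetic setup the controlled gap shrinks toward zero, illustrating how stratifying on a confounder can separate genuine model bias from scene difficulty.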
Related papers
- Uncertainty Estimation for 3D Object Detection via Evidential Learning [63.61283174146648]
We introduce a framework for quantifying uncertainty in 3D object detection by leveraging an evidential learning loss on Bird's Eye View representations in the 3D detector.
We demonstrate both the efficacy and importance of these uncertainty estimates on identifying out-of-distribution scenes, poorly localized objects, and missing (false negative) detections.
arXiv Detail & Related papers (2024-10-31T13:13:32Z)
- Thinking Racial Bias in Fair Forgery Detection: Models, Datasets and Evaluations [63.52709761339949]
We first contribute a dedicated dataset called the Fair Forgery Detection (FairFD) dataset, where we prove the racial bias of public state-of-the-art (SOTA) methods.
We design novel metrics including Approach Averaged Metric and Utility Regularized Metric, which can avoid deceptive results.
We also present an effective and robust post-processing technique, Bias Pruning with Fair Activations (BPFA), which improves fairness without requiring retraining or weight updates.
arXiv Detail & Related papers (2024-07-19T14:53:18Z)
- A Reliable Framework for Human-in-the-Loop Anomaly Detection in Time Series [17.08674819906415]
We introduce HILAD, a novel framework designed to foster a dynamic and bidirectional collaboration between humans and AI.
Through our visual interface, HILAD empowers domain experts to detect, interpret, and correct unexpected model behaviors at scale.
arXiv Detail & Related papers (2024-05-06T07:44:07Z)
- Run-time Introspection of 2D Object Detection in Automated Driving Systems Using Learning Representations [13.529124221397822]
We introduce a novel introspection solution for 2D object detection based on Deep Neural Networks (DNNs)
We implement several state-of-the-art (SOTA) introspection mechanisms for error detection in 2D object detection, using one-stage and two-stage object detectors evaluated on KITTI and BDD datasets.
Our performance evaluation shows that the proposed introspection solution outperforms SOTA methods, achieving an absolute reduction in the missed error ratio of 9% to 17% in the BDD dataset.
arXiv Detail & Related papers (2024-03-02T10:56:14Z)
- Automated Deception Detection from Videos: Using End-to-End Learning Based High-Level Features and Classification Approaches [0.0]
We propose a multimodal approach combining deep learning and discriminative models for deception detection.
We employ convolutional end-to-end learning to analyze gaze, head pose, and facial expressions.
Our approach is evaluated on five datasets, including a new Rolling-Dice Experiment motivated by economic factors.
arXiv Detail & Related papers (2023-07-13T08:45:15Z)
- Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should regard several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z)
- One-Shot Object Affordance Detection in the Wild [76.46484684007706]
Affordance detection refers to identifying the potential action possibilities of objects in an image.
We devise a One-Shot Affordance Detection Network (OSAD-Net) that estimates the human action purpose and then transfers it to help detect the common affordance from all candidate images.
With complex scenes and rich annotations, our PADv2 dataset can be used as a test bed to benchmark affordance detection methods.
arXiv Detail & Related papers (2021-08-08T14:53:10Z)
- Towards Unbiased Visual Emotion Recognition via Causal Intervention [63.74095927462]
We propose a novel Emotion Recognition Network (IERN) to alleviate the negative effects brought by the dataset bias.
A series of designed tests validate the effectiveness of IERN, and experiments on three emotion benchmarks demonstrate that IERN outperforms other state-of-the-art approaches.
arXiv Detail & Related papers (2021-07-26T10:40:59Z)
- An Uncertainty-based Human-in-the-loop System for Industrial Tool Wear Analysis [68.8204255655161]
We show that uncertainty measures based on Monte-Carlo dropout in the context of a human-in-the-loop system increase the system's transparency and performance.
A simulation study demonstrates that the uncertainty-based human-in-the-loop system increases performance for different levels of human involvement.
arXiv Detail & Related papers (2020-07-14T15:47:37Z)
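The Monte-Carlo dropout idea summarized in the last entry can be sketched as below. This is a toy, self-contained illustration, not the paper's system: the one-hidden-layer network, dropout rate, and input are all assumptions chosen only to show how keeping dropout active at test time yields a predictive mean and an uncertainty estimate.

```python
# Minimal sketch of Monte-Carlo dropout uncertainty (illustrative; the
# network, dropout rate, and data are toy assumptions, not the paper's setup).
import numpy as np

rng = np.random.default_rng(1)

# Toy one-hidden-layer regressor with fixed random weights.
W1 = rng.normal(0, 1, (1, 64))
W2 = rng.normal(0, 1 / np.sqrt(64), (64, 1))

def predict_mc(x, n_samples=100, p_drop=0.2):
    """Run the forward pass n_samples times with dropout kept ON,
    returning the predictive mean and std across stochastic passes."""
    h = np.maximum(x @ W1, 0.0)                       # ReLU hidden layer
    preds = []
    for _ in range(n_samples):
        mask = rng.random(h.shape) > p_drop           # dropout at test time
        preds.append((h * mask / (1 - p_drop)) @ W2)  # inverted-dropout scaling
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)

mean, std = predict_mc(np.array([[0.5]]))
print(f"prediction: {mean.item():.3f} +/- {std.item():.3f}")
```

The spread across stochastic forward passes serves as the uncertainty signal that a human-in-the-loop system can use to decide when to defer to an expert.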
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.