Enhancing the Fairness and Performance of Edge Cameras with Explainable AI
- URL: http://arxiv.org/abs/2401.09852v1
- Date: Thu, 18 Jan 2024 10:08:24 GMT
- Title: Enhancing the Fairness and Performance of Edge Cameras with Explainable AI
- Authors: Truong Thanh Hung Nguyen, Vo Thanh Khang Nguyen, Quoc Hung Cao, Van Binh Truong, Quoc Khanh Nguyen, Hung Cao
- Abstract summary: Our research presents a diagnostic method that uses Explainable AI (XAI) for model debugging.
We identified the training dataset as the main source of bias and suggested model augmentation as a solution.
- Score: 3.4719449211802456
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rising use of Artificial Intelligence (AI) in human detection on Edge
camera systems has led to accurate but complex models that are challenging to
interpret and debug. Our research presents a diagnostic method that uses
Explainable AI (XAI) for model debugging, with expert-driven problem
identification and solution creation. Validated on the Bytetrack model in a
real-world office Edge network, we identified the training dataset as the main
source of bias and suggested model augmentation as a solution. Our approach
helps identify model biases, which is essential for achieving fair and
trustworthy models.
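This listing does not include the paper's debugging code, but the kind of saliency check such a pipeline rests on is easy to picture. Below is a minimal Grad-CAM sketch for a PyTorch backbone; Grad-CAM, the ResNet-50 classifier stand-in, and the layer choice are illustrative assumptions, not the authors' Bytetrack setup.

```python
# Minimal Grad-CAM: which image regions drive the model's top prediction?
# A ResNet-50 classifier stands in for the paper's Bytetrack detector, and
# Grad-CAM stands in for whatever XAI method the authors actually used.
import torch
import torchvision

model = torchvision.models.resnet50(weights="IMAGENET1K_V2").eval()
target_layer = model.layer4  # last conv block: coarse spatial semantics

activations, gradients = {}, {}
def fwd_hook(_, __, out): activations["feat"] = out
def bwd_hook(_, __, grad_out): gradients["feat"] = grad_out[0]
target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)  # placeholder; use a real camera frame
model(x)[0].max().backward()     # gradient of the top-class logit

w = gradients["feat"].mean(dim=(2, 3), keepdim=True)   # channel weights
cam = torch.relu((w * activations["feat"]).sum(dim=1)).squeeze(0)
cam = cam / (cam.max() + 1e-8)   # 7x7 heatmap; upsample over the frame
# If heatmaps consistently ignore certain people or regions across a
# validation set, that points at dataset bias rather than a code bug.
```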
Related papers
- Unsupervised Model Diagnosis [49.36194740479798]
This paper proposes Unsupervised Model Diagnosis (UMO) to produce semantic counterfactual explanations without any user guidance.
Our approach identifies and visualizes changes in semantics, and then matches these changes to attributes from wide-ranging text sources.
arXiv Detail & Related papers (2024-10-08T17:59:03Z)
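UMO's full pipeline is more involved, but its final step of matching a visual change to text attributes can be approximated with off-the-shelf CLIP embeddings. The sketch below uses OpenAI's clip package with a made-up attribute vocabulary and placeholder image files; it is a rough analogue, not the authors' code.

```python
# Sketch of matching a semantic image change to text attributes, in the
# spirit of UMO's final matching step. Attribute vocabulary and image
# files are illustrative placeholders.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

original = preprocess(Image.open("original.png")).unsqueeze(0).to(device)
edited = preprocess(Image.open("counterfactual.png")).unsqueeze(0).to(device)
attributes = ["wearing glasses", "smiling", "dark lighting", "occluded face"]

with torch.no_grad():
    # Direction of the counterfactual edit in CLIP space.
    delta = model.encode_image(edited) - model.encode_image(original)
    delta = delta / delta.norm(dim=-1, keepdim=True)
    text = model.encode_text(clip.tokenize(attributes).to(device))
    text = text / text.norm(dim=-1, keepdim=True)
    scores = (delta @ text.T).squeeze(0)   # cosine similarity per attribute

for attr, s in sorted(zip(attributes, scores.tolist()), key=lambda t: -t[1]):
    print(f"{attr}: {s:.3f}")              # top score ~ the likely edit
```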
- XEdgeAI: A Human-centered Industrial Inspection Framework with Data-centric Explainable Edge AI Approach [2.0209172586699173]
This paper introduces a novel XAI-integrated Visual Quality Inspection framework.
Our framework incorporates XAI and the Large Vision Language Model to deliver human-centered interpretability.
This approach paves the way for the broader adoption of reliable and interpretable AI tools in critical industrial applications.
arXiv Detail & Related papers (2024-07-16T14:30:24Z)
- Improving Interpretability and Robustness for the Detection of AI-Generated Images [6.116075037154215]
We analyze existing state-of-the-art AIGI detection methods based on frozen CLIP embeddings.
We show how to interpret them, shedding light on how images produced by various AI generators differ from real ones.
arXiv Detail & Related papers (2024-06-21T10:33:09Z)
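A common recipe in this line of work is a light linear probe over frozen CLIP features. The sketch below shows the shape of such a detector with scikit-learn on placeholder embeddings; the paper's exact probe and features may differ.

```python
# Linear probe on frozen CLIP embeddings for real-vs-generated detection:
# the backbone stays frozen, only a logistic regression is fit on top.
# The embeddings here are random placeholders standing in for CLIP features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X_real = rng.normal(0.0, 1.0, size=(500, 512))   # CLIP ViT-B/32 dim = 512
X_fake = rng.normal(0.3, 1.0, size=(500, 512))   # shifted to fake a signal
X = np.vstack([X_real, X_fake])
y = np.array([0] * 500 + [1] * 500)              # 1 = AI-generated

probe = LogisticRegression(max_iter=1000).fit(X[::2], y[::2])  # train half
auc = roc_auc_score(y[1::2], probe.predict_proba(X[1::2])[:, 1])
print(f"held-out AUC: {auc:.3f}")
# Interpretability bonus: probe.coef_ lives in CLIP space, so its nearest
# text embeddings hint at *what* separates generated images from real ones.
```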
- RIGID: A Training-free and Model-Agnostic Framework for Robust AI-Generated Image Detection [60.960988614701414]
RIGID is a training-free and model-agnostic method for robust AI-generated image detection.
RIGID significantly outperforms existing training-based and training-free detectors.
arXiv Detail & Related papers (2024-05-30T14:49:54Z)
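Reading the abstract, the training-free test can be pictured as: embed an image and a slightly noised copy, then use their similarity as a realness score. The encoder choice (DINOv2 via torch.hub) and noise level below are assumptions on my part, not necessarily RIGID's configuration.

```python
# Training-free detection in RIGID's spirit: real images tend to yield
# embeddings that are more robust to small perturbations than generated
# ones. Encoder choice and noise scale are assumptions.
import torch

encoder = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14").eval()

def realness_score(img: torch.Tensor, sigma: float = 0.05) -> float:
    """img: (1, 3, H, W), H and W multiples of 14 for DINOv2 patches."""
    noisy = img + sigma * torch.randn_like(img)
    with torch.no_grad():
        e1 = encoder(img)
        e2 = encoder(noisy)
    return torch.cosine_similarity(e1, e2).item()  # high ~ likely real

x = torch.randn(1, 3, 224, 224)  # placeholder; use a normalized real image
print(realness_score(x))
# Thresholding this score gives a detector with no training at all; the
# threshold can be calibrated on a handful of known-real images.
```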
- AIDE: An Automatic Data Engine for Object Detection in Autonomous Driving [68.73885845181242]
We propose an Automatic Data Engine (AIDE) that automatically identifies issues, efficiently curates data, improves the model through auto-labeling, and verifies the model through generation of diverse scenarios.
We further establish a benchmark for open-world detection on AV datasets to comprehensively evaluate various learning paradigms, demonstrating our method's superior performance at a reduced cost.
arXiv Detail & Related papers (2024-03-26T04:27:56Z)
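The auto-labeling piece of such a data engine usually reduces to harvesting high-confidence predictions as pseudo-labels. A stripped-down sketch follows, with a stock torchvision detector and a hand-picked confidence threshold standing in for AIDE's actual components.

```python
# Stripped-down auto-labeling loop: keep only high-confidence detections
# as pseudo-labels for retraining. Detector, threshold, and frames are
# placeholder assumptions, not AIDE's pipeline.
import torch
import torchvision

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights="DEFAULT").eval()

def pseudo_label(images, conf_thresh=0.8):
    """Return (image, boxes, labels) triples worth adding to training data."""
    curated = []
    with torch.no_grad():
        for img, out in zip(images, detector(images)):
            keep = out["scores"] > conf_thresh   # drop uncertain detections
            if keep.any():
                curated.append((img, out["boxes"][keep], out["labels"][keep]))
    return curated

batch = [torch.rand(3, 480, 640) for _ in range(4)]  # placeholder frames
print(f"{len(pseudo_label(batch))} frames auto-labeled")
# A real engine would also route *low*-confidence or novel-looking frames
# to human review, closing the loop between curation and retraining.
```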
- DetDiffusion: Synergizing Generative and Perceptive Models for Enhanced Data Generation and Perception [78.26734070960886]
Current perceptive models heavily depend on resource-intensive datasets.
We introduce perception-aware loss (P.A. loss) through segmentation, improving both quality and controllability.
Our method customizes data augmentation by extracting and utilizing perception-aware attribute (P.A. Attr) during generation.
arXiv Detail & Related papers (2024-03-20T04:58:03Z)
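Mechanically, a perception-aware loss adds a perception term to the generator's usual reconstruction objective. The sketch below pairs an MSE reconstruction loss with segmentation cross-entropy; the weighting, shapes, and class count are illustrative guesses, not DetDiffusion's published formulation.

```python
# Illustrative "perception-aware" objective: reconstruction loss plus a
# segmentation term, so generated images stay useful for downstream
# perception. Weighting and tensor shapes are assumptions.
import torch
import torch.nn.functional as F

def perception_aware_loss(pred_img, target_img, seg_logits, seg_target,
                          pa_weight=0.5):
    recon = F.mse_loss(pred_img, target_img)        # generative fidelity
    seg = F.cross_entropy(seg_logits, seg_target)   # perceptive fidelity
    return recon + pa_weight * seg

pred = torch.rand(2, 3, 64, 64, requires_grad=True)
target = torch.rand(2, 3, 64, 64)
seg_logits = torch.randn(2, 5, 64, 64, requires_grad=True)  # 5 classes
seg_target = torch.randint(0, 5, (2, 64, 64))
loss = perception_aware_loss(pred, target, seg_logits, seg_target)
loss.backward()
print(float(loss))
```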
- Red Teaming Models for Hyperspectral Image Analysis Using Explainable AI [10.475941327617686]
This paper introduces a methodology for examining machine learning models operating on hyperspectral images.
We use post-hoc explanation methods from the Explainable AI (XAI) domain to critically assess the best performing model.
Our approach effectively red teams the model by pinpointing and validating key shortcomings.
arXiv Detail & Related papers (2024-03-12T18:28:32Z)
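Attribution on hyperspectral inputs has one twist worth showing: attributions can be aggregated per spectral band to reveal which wavelengths the model leans on. Here is a sketch using Captum's Integrated Gradients with a toy CNN and 32 placeholder bands; the paper's models and XAI methods may differ.

```python
# Per-band attribution for a hyperspectral classifier: run Integrated
# Gradients, then sum attributions over spatial dims to rank spectral
# bands. The tiny CNN and 32-band input are illustrative placeholders.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

bands = 32
model = nn.Sequential(
    nn.Conv2d(bands, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 4),
).eval()

x = torch.rand(1, bands, 64, 64, requires_grad=True)
ig = IntegratedGradients(model)
attr = ig.attribute(x, target=0)                  # same shape as x

band_importance = attr.abs().sum(dim=(0, 2, 3))   # one score per band
top = band_importance.argsort(descending=True)[:5]
print("most influential bands:", top.tolist())
# If the top bands contradict domain knowledge (e.g., the model leans on
# noisy water-absorption bands), that is a concrete, testable shortcoming.
```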
- Fusing Models with Complementary Expertise [42.099743709292866]
We consider the Fusion of Experts (FoE) problem of fusing outputs of expert models with complementary knowledge of the data distribution.
Our method is applicable to both discriminative and generative tasks.
We extend our method to the "frugal" setting where it is desired to reduce the number of expert model evaluations at test time.
arXiv Detail & Related papers (2023-10-02T18:31:35Z)
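The "frugal" setting can be pictured as a confidence-gated cascade: cheap experts answer first, costly ones only when needed. The gate below is a simple stand-in of my own, not the paper's learned fuser, and the experts are placeholder models.

```python
# Toy "frugal" fusion of experts: query experts in cost order and stop as
# soon as one is confident enough, saving evaluations of costly models.
import torch

def frugal_fuse(x, experts, conf_thresh=0.9):
    """experts: list of (name, model) ordered from cheapest to costliest."""
    for name, model in experts:
        with torch.no_grad():
            probs = torch.softmax(model(x), dim=-1)
        conf, pred = probs.max(dim=-1)
        if conf.item() >= conf_thresh:        # cheap expert suffices: stop
            return name, pred.item(), conf.item()
    return name, pred.item(), conf.item()     # fall through to last expert

experts = [
    ("small", torch.nn.Linear(8, 3)),
    ("large", torch.nn.Sequential(torch.nn.Linear(8, 64),
                                  torch.nn.ReLU(), torch.nn.Linear(64, 3))),
]
print(frugal_fuse(torch.randn(1, 8), experts))
```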
- A model-agnostic approach for generating Saliency Maps to explain inferred decisions of Deep Learning Models [2.741266294612776]
We propose a model-agnostic method for generating saliency maps that has access only to the output of the model.
We use Differential Evolution to identify which image pixels are the most influential in a model's decision-making process.
arXiv Detail & Related papers (2022-09-19T10:28:37Z)
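The core trick, searching over occlusion patterns with Differential Evolution using only model outputs, fits in a few lines with SciPy. The coarse 8x8 mask, MobileNetV2 stand-in, and tiny search budget below are simplifications for a quick demo, not the paper's settings.

```python
# Black-box saliency via Differential Evolution: evolve a coarse occlusion
# mask that maximally drops the model's target score; the pixels the best
# mask hides are the influential ones. Only model outputs are used.
import numpy as np
import torch
import torchvision
from scipy.optimize import differential_evolution

model = torchvision.models.mobilenet_v2(weights="DEFAULT").eval()
img = torch.rand(1, 3, 224, 224)             # placeholder input image
with torch.no_grad():
    target = model(img)[0].argmax().item()   # class whose evidence we probe

def masked_score(flat_mask):
    mask = torch.tensor(flat_mask, dtype=torch.float32).reshape(1, 1, 8, 8)
    mask = torch.nn.functional.interpolate(mask, size=224, mode="nearest")
    with torch.no_grad():
        out = model(img * (1.0 - mask))      # occlude where mask ~ 1
    return out[0, target].item()             # DE minimizes this score

# Small budget to keep the demo short; still several thousand forward passes.
result = differential_evolution(masked_score, bounds=[(0, 1)] * 64,
                                maxiter=10, popsize=8, seed=0, polish=False)
saliency = result.x.reshape(8, 8)            # high value = influential region
print(saliency.round(2))
```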
- DIME: Fine-grained Interpretations of Multimodal Models via Disentangled Local Explanations [119.1953397679783]
We focus on advancing the state-of-the-art in interpreting multimodal models.
Our proposed approach, DIME, enables accurate and fine-grained analysis of multimodal models.
arXiv Detail & Related papers (2022-03-03T20:52:47Z)
- Data-Driven and SE-assisted AI Model Signal-Awareness Enhancement and Introspection [61.571331422347875]
We propose a data-driven approach to enhance models' signal-awareness.
We combine the SE concept of code complexity with the AI technique of curriculum learning.
We achieve up to 4.8x improvement in model signal awareness.
arXiv Detail & Related papers (2021-11-10T17:58:18Z)
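Mechanically, curriculum learning over code complexity means sorting training samples by a complexity score and scheduling the easy ones first. The branch-keyword proxy below is a crude stand-in for the paper's software-engineering metrics, and the training loop is only stubbed out.

```python
# Curriculum over code complexity: order training samples from simple to
# complex and widen the pool gradually. The branch-keyword count is a
# crude stand-in for the paper's software-engineering complexity metrics.
import re

def complexity(code):
    """Rough cyclomatic-style proxy: 1 + number of branching keywords."""
    return 1 + len(re.findall(r"\b(if|for|while|case)\b", code))

samples = [
    "int add(int a, int b) { return a + b; }",
    "int f(int n) { int s = 0; for (int i = 0; i < n; i++)"
    " if (i % 2) s += i; return s; }",
    "int g(int x) { if (x > 0) return 1; return 0; }",
]
curriculum = sorted(samples, key=complexity)   # easiest examples first
for stage in range(1, len(curriculum) + 1):
    pool = curriculum[:stage]                  # gradually widen difficulty
    print(f"stage {stage}: training on {len(pool)} samples")
```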