AI Model Utilization Measurements For Finding Class Encoding Patterns
- URL: http://arxiv.org/abs/2212.06576v1
- Date: Mon, 12 Dec 2022 02:18:10 GMT
- Title: AI Model Utilization Measurements For Finding Class Encoding Patterns
- Authors: Peter Bajcsy and Antonio Cardone and Chenyi Ling and Philippe Dessauw
and Michael Majurski and Tim Blattner and Derek Juba and Walid Keyrouz
- Abstract summary: This work addresses the problems of designing utilization measurements of trained artificial intelligence (AI) models.
The problems are motivated by the lack of explainability of AI models in security and safety critical applications.
- Score: 2.702380921892937
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: This work addresses the problems of (a) designing utilization measurements of
trained artificial intelligence (AI) models and (b) explaining how training
data are encoded in AI models based on those measurements. The problems are
motivated by the lack of explainability of AI models in security and safety
critical applications, such as the use of AI models for classification of
traffic signs in self-driving cars. We approach the problems by introducing
theoretical underpinnings of AI model utilization measurement and understanding
patterns in utilization-based class encodings of traffic signs at the level of
computation graphs (AI models), subgraphs, and graph nodes. Conceptually,
utilization is defined at each graph node (computation unit) of an AI model
based on the number and distribution of unique outputs in the space of all
possible outputs (tensor-states). In this work, utilization measurements are
extracted from AI models, which include poisoned and clean AI models. In
contrast to clean AI models, the poisoned AI models were trained with traffic
sign images containing systematic, physically realizable, traffic sign
modifications (i.e., triggers) to change a correct class label to another label
in the presence of such a trigger. We analyze class encodings of such clean and
poisoned AI models, and conclude with implications for trojan injection and
detection.
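The utilization notion above can be made concrete with a small numerical sketch. The code below is one illustrative reading of the abstract's definition, not the paper's implementation: it binarizes a node's activations into discrete tensor-states and summarizes how many of the possible states are visited and how evenly they are used. The function name, the thresholding scheme, and the entropy summary are assumptions for illustration only.

```python
import numpy as np

def node_utilization(activations, threshold=0.0):
    """Toy utilization measure for a single graph node (layer).

    activations: (num_samples, num_units) array of the node's outputs
    collected over a dataset. Each row is binarized against `threshold`
    into a discrete "tensor-state"; utilization is summarized by how
    many of the 2**num_units possible states are observed and how
    evenly the observed states are used.
    """
    states = (np.asarray(activations) > threshold).astype(np.uint8)   # discretize outputs
    unique_states, counts = np.unique(states, axis=0, return_counts=True)

    num_units = states.shape[1]
    possible_states = 2 ** num_units                   # size of the full state space
    coverage = len(unique_states) / possible_states    # fraction of states ever visited

    probs = counts / counts.sum()                      # empirical distribution over observed states
    entropy = float(-np.sum(probs * np.log2(probs)))   # evenness of state usage (bits)

    return coverage, entropy
```

Under this reading, comparing per-class coverage and entropy between clean and poisoned models would be one way to look for differences in class encodings, though the paper's actual measurements may differ.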
Related papers
- Towards a perturbation-based explanation for medical AI as differentiable programs [0.0]
In medicine and healthcare, there is a particular demand for sufficient and objective explainability of the outcome generated by AI models.
This work examines the numerical availability of the Jacobian matrix of deep learning models, which measures how stably a model responds to small perturbations added to the input (a minimal numerical sketch appears after this list).
This is a first step towards a perturbation-based explanation, which will assist medical practitioners in understanding and interpreting the response of the AI model in its clinical application.
arXiv Detail & Related papers (2025-02-19T07:56:23Z) - Computational Safety for Generative AI: A Signal Processing Perspective [65.268245109828]
Computational safety is a mathematical framework that enables the quantitative assessment, formulation, and study of safety challenges in GenAI.
We show how sensitivity analysis and loss landscape analysis can be used to detect malicious prompts with jailbreak attempts.
We discuss key open research challenges, opportunities, and the essential role of signal processing in computational AI safety.
arXiv Detail & Related papers (2025-02-18T02:26:50Z) - AI-Aided Kalman Filters [65.35350122917914]
The Kalman filter (KF) and its variants are among the most celebrated algorithms in signal processing.
Recent developments illustrate the possibility of fusing deep neural networks (DNNs) with classic Kalman-type filtering.
This article provides a tutorial-style overview of design approaches for incorporating AI in aiding KF-type algorithms.
arXiv Detail & Related papers (2024-10-16T06:47:53Z) - Generative Diffusion-based Contract Design for Efficient AI Twins Migration in Vehicular Embodied AI Networks [55.15079732226397]
Embodied AI is a rapidly advancing field that bridges the gap between cyberspace and physical space.
In VEANET, embodied AI twins act as in-vehicle AI assistants to perform diverse tasks supporting autonomous driving.
arXiv Detail & Related papers (2024-10-02T02:20:42Z) - Adaptation of XAI to Auto-tuning for Numerical Libraries [0.0]
Explainable AI (XAI) technology is gaining prominence, aiming to streamline AI model development and alleviate the burden of explaining AI outputs to users.
This research focuses on XAI for AI models when integrated into two different processes for practical numerical computations.
arXiv Detail & Related papers (2024-05-12T09:00:56Z) - Representing Timed Automata and Timing Anomalies of Cyber-Physical
Production Systems in Knowledge Graphs [51.98400002538092]
This paper aims to improve model-based anomaly detection in CPPS by combining the learned timed automaton with a formal knowledge graph about the system.
Both the model and the detected anomalies are described in the knowledge graph in order to allow operators an easier interpretation of the model and the detected anomalies.
arXiv Detail & Related papers (2023-08-25T15:25:57Z) - AUTOLYCUS: Exploiting Explainable AI (XAI) for Model Extraction Attacks against Interpretable Models [1.8752655643513647]
XAI tools can increase a model's vulnerability to model extraction attacks, which is a concern when model owners prefer black-box access.
We propose a novel retraining (learning) based model extraction attack framework against interpretable models under black-box settings.
We show that AUTOLYCUS is highly effective, requiring significantly fewer queries compared to state-of-the-art attacks.
arXiv Detail & Related papers (2023-02-04T13:23:39Z) - Neurosymbolic hybrid approach to driver collision warning [64.02492460600905]
There are two main algorithmic approaches to autonomous driving systems.
Deep learning alone has achieved state-of-the-art results in many areas.
However, deep learning models can be very difficult to debug when they do not work as expected.
arXiv Detail & Related papers (2022-03-28T20:29:50Z) - Data-Driven and SE-assisted AI Model Signal-Awareness Enhancement and
Introspection [61.571331422347875]
We propose a data-driven approach to enhance models' signal-awareness.
We combine the SE concept of code complexity with the AI technique of curriculum learning.
We achieve up to 4.8x improvement in model signal awareness.
arXiv Detail & Related papers (2021-11-10T17:58:18Z) - Model-based actor-critic: GAN (model generator) + DRL (actor-critic) =>
AGI [0.0]
We propose adding a (generative/predictive) environment model to the actor-critic (model-free) architecture.
The proposed AI model is similar to (model-free) DDPG and is therefore called model-based DDPG.
Our initial limited experiments show that combining DRL and GAN in a model-based actor-critic yields the incremental, goal-driven intelligence required to solve each task with performance similar to (model-free) DDPG.
arXiv Detail & Related papers (2020-04-04T02:05:54Z)
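As referenced in the perturbation-based explanation entry above, Jacobian-based stability can be sketched numerically. The snippet below is a minimal finite-difference illustration of the general idea, not the cited paper's method; the function name, the finite-difference scheme, and the spectral-norm summary are assumptions.

```python
import numpy as np

def numerical_jacobian(model, x, eps=1e-5):
    """Finite-difference Jacobian of `model` at the input point `x`.

    model: callable mapping a 1-D numpy input vector to a 1-D output vector
    x:     1-D numpy array (the input point)
    Returns a (output_dim, input_dim) matrix whose norm indicates how
    strongly the model's response changes under small input perturbations.
    """
    x = np.asarray(x, dtype=float)
    y0 = np.asarray(model(x), dtype=float)
    jac = np.zeros((y0.size, x.size))
    for i in range(x.size):
        x_pert = x.copy()
        x_pert[i] += eps                                   # perturb one input component
        jac[:, i] = (np.asarray(model(x_pert), dtype=float) - y0) / eps
    return jac

# A larger spectral norm suggests a less stable response to small perturbations:
# sensitivity = np.linalg.norm(numerical_jacobian(model, x), ord=2)
```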
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.