Data and Decision Traceability for SDA TAP Lab's Prototype Battle Management System
- URL: http://arxiv.org/abs/2502.09827v2
- Date: Mon, 17 Feb 2025 08:34:43 GMT
- Title: Data and Decision Traceability for SDA TAP Lab's Prototype Battle Management System
- Authors: Latha Pratti, Samya Bagchi, Yasir Latif
- Abstract summary: The core goal of decision traceability is to ensure transparency, accountability, and integrity within the WA system.
This is accomplished by providing a clear, auditable path from the system's inputs all the way to the final decision.
- Abstract: Space Protocol is applying the principles derived from MITRE and NIST's Supply Chain Traceability: Manufacturing Meta-Framework (NIST IR 8536) to a complex multi-party system to achieve introspection, auditing, and replay of the data and decisions that ultimately lead to an end decision. The core goal of decision traceability is to ensure transparency, accountability, and integrity within the WA system. This is accomplished by providing a clear, auditable path from the system's inputs all the way to the final decision. This traceability enables the system to track the various algorithms and data flows that have influenced a particular outcome.
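The auditable input-to-decision path described in the abstract can be sketched as a content-addressed provenance chain, where each processing step records a hash of its inputs and its upstream records. This is a minimal illustrative sketch only: the record fields, step names, and the toy sensor/track/decision pipeline are assumptions for demonstration, not the paper's actual schema or the NIST IR 8536 record format.

```python
import hashlib
import json
from dataclasses import dataclass


@dataclass(frozen=True)
class TraceRecord:
    """One node in the audit trail: an algorithm step, its data, and its lineage."""
    step: str          # name of the algorithm or data source (illustrative)
    payload: str       # serialized input or decision data
    parents: tuple = ()  # record_ids of upstream TraceRecords

    @property
    def record_id(self) -> str:
        # Content-addressed id: tampering with the step, payload, or lineage
        # changes the hash, which breaks the chain and is detectable on audit.
        blob = json.dumps(
            {"step": self.step, "payload": self.payload, "parents": list(self.parents)},
            sort_keys=True,
        )
        return hashlib.sha256(blob.encode()).hexdigest()


def audit_path(final: TraceRecord, store: dict) -> list:
    """Walk from the final decision back through every upstream step."""
    path, stack = [], [final]
    while stack:
        rec = stack.pop()
        path.append(rec.step)
        stack.extend(store[p] for p in rec.parents)
    return path


# Toy three-step chain: raw sensor input -> track correlator -> final decision.
sensor = TraceRecord("sensor_input", "raw observation")
track = TraceRecord("track_correlator", "track 42", parents=(sensor.record_id,))
decide = TraceRecord("final_decision", "flag object", parents=(track.record_id,))
store = {r.record_id: r for r in (sensor, track)}
print(audit_path(decide, store))  # ['final_decision', 'track_correlator', 'sensor_input']
```

Because each record id commits to its parents' ids, replaying the chain from any decision reproduces the full set of algorithms and data flows that influenced it, which is the introspection-and-replay property the abstract describes.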
Related papers
- End-to-End Verifiable Decentralized Federated Learning [1.374949083138427]
Verifiable decentralized federated learning (FL) systems combine blockchains and zero-knowledge proofs (ZKP).
We propose a verifiable decentralized FL system for end-to-end integrity and authenticity of data and extending verifiability to the data source.
arXiv Detail & Related papers (2024-04-19T04:43:01Z)
- DriveCoT: Integrating Chain-of-Thought Reasoning with End-to-End Driving [81.04174379726251]
This paper collects a comprehensive end-to-end driving dataset named DriveCoT.
It contains sensor data, control decisions, and chain-of-thought labels to indicate the reasoning process.
We propose a baseline model called DriveCoT-Agent, trained on our dataset, to generate chain-of-thought predictions and final decisions.
arXiv Detail & Related papers (2024-03-25T17:59:01Z)
- TELLER: A Trustworthy Framework for Explainable, Generalizable and Controllable Fake News Detection [37.394874500480206]
We propose a novel framework for trustworthy fake news detection that prioritizes explainability, generalizability and controllability of models.
This is achieved via a dual-system framework that integrates cognition and decision systems.
We present comprehensive evaluation results on four datasets, demonstrating the feasibility and trustworthiness of our proposed framework.
arXiv Detail & Related papers (2024-02-12T16:41:54Z)
- Extracting Process-Aware Decision Models from Object-Centric Process Data [54.04724730771216]
This paper proposes the first object-centric decision-mining algorithm, called the Integrated Object-centric Decision Discovery Algorithm (IODDA).
IODDA is able to discover how a decision is structured as well as how a decision is made.
arXiv Detail & Related papers (2024-01-26T13:27:35Z)
- Accountability in Offline Reinforcement Learning: Explaining Decisions with a Corpus of Examples [70.84093873437425]
This paper introduces the Accountable Offline Controller (AOC) that employs the offline dataset as the Decision Corpus.
AOC operates effectively in low-data scenarios, can be extended to the strictly offline imitation setting, and displays qualities of both conservation and adaptability.
We assess AOC's performance in both simulated and real-world healthcare scenarios, emphasizing its capability to manage offline control tasks with high levels of performance while maintaining accountability.
arXiv Detail & Related papers (2023-10-11T17:20:32Z)
- An End-to-End Approach for Online Decision Mining and Decision Drift Analysis in Process-Aware Information Systems: Extended Version [0.0]
Decision mining enables the discovery of decision rules from event logs or streams.
Online decision mining enables continuous monitoring of decision rule evolution and decision drift.
This paper presents an end-to-end approach for the discovery as well as monitoring of decision points and the corresponding decision rules during runtime.
arXiv Detail & Related papers (2023-03-07T15:04:49Z)
- Robust Control for Dynamical Systems With Non-Gaussian Noise via Formal Abstractions [59.605246463200736]
We present a novel controller synthesis method that does not rely on any explicit representation of the noise distributions.
First, we abstract the continuous control system into a finite-state model that captures noise by probabilistic transitions between discrete states.
We use state-of-the-art verification techniques to provide guarantees on the interval Markov decision process and compute a controller for which these guarantees carry over to the original control system.
arXiv Detail & Related papers (2023-01-04T10:40:30Z)
- FAT Forensics: A Python Toolbox for Implementing and Deploying Fairness, Accountability and Transparency Algorithms in Predictive Systems [69.24490096929709]
We developed an open source Python package called FAT Forensics.
It can inspect important fairness, accountability and transparency aspects of predictive algorithms.
Our toolbox can evaluate all elements of a predictive pipeline.
arXiv Detail & Related papers (2022-09-08T13:25:02Z)
- Task-Oriented Sensing, Computation, and Communication Integration for Multi-Device Edge AI [108.08079323459822]
This paper studies a new multi-device edge artificial intelligence (AI) system, which jointly exploits AI model split inference and integrated sensing and communication (ISAC).
We measure the inference accuracy by adopting an approximate but tractable metric, namely discriminant gain.
arXiv Detail & Related papers (2022-07-03T06:57:07Z)
- A Conceptual Framework for Establishing Trust in Real World Intelligent Systems [0.0]
Trust in algorithms can be established by letting users interact with the system.
Reflecting features and patterns of human understanding of a domain against algorithmic results can create awareness of such patterns.
Close inspection can be used to decide whether a solution conforms to the expectations or whether it goes beyond the expected.
arXiv Detail & Related papers (2021-04-12T12:58:47Z)
- Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing [8.155332346712424]
We introduce a framework for algorithmic auditing that supports artificial intelligence system development end-to-end.
The proposed auditing framework is intended to close the accountability gap in the development and deployment of large-scale artificial intelligence systems.
arXiv Detail & Related papers (2020-01-03T20:19:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.