Trustworthy Artificial Intelligence and Process Mining: Challenges and
Opportunities
- URL: http://arxiv.org/abs/2110.02707v1
- Date: Wed, 6 Oct 2021 12:50:47 GMT
- Title: Trustworthy Artificial Intelligence and Process Mining: Challenges and
Opportunities
- Authors: Andrew Pery, Majid Rafiei, Michael Simon, Wil M.P. van der Aalst
- Abstract summary: We show that process mining can provide a useful framework for gaining fact-based visibility to AI compliance process execution.
We provide an automated approach to analyze, remediate, and monitor uncertainty in AI regulatory compliance processes.
- Score: 0.8602553195689513
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The premise of this paper is that compliance with Trustworthy AI governance
best practices and regulatory frameworks is an inherently fragmented process
spanning across diverse organizational units, external stakeholders, and
systems of record, resulting in process uncertainties and in compliance gaps
that may expose organizations to reputational and regulatory risks. Moreover,
there are complexities associated with meeting the specific dimensions of
Trustworthy AI best practices such as data governance, conformance testing,
quality assurance of AI model behaviors, transparency, accountability, and
confidentiality requirements. These processes involve multiple steps,
hand-offs, re-works, and human-in-the-loop oversight. In this paper, we
demonstrate that process mining can provide a useful framework for gaining
fact-based visibility to AI compliance process execution, surfacing compliance
bottlenecks, and providing for an automated approach to analyze, remediate and
monitor uncertainty in AI regulatory compliance processes.
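To make the envisioned analysis concrete, the sketch below shows how an AI-compliance event log could be mined for deviations with the open-source pm4py library: a reference model is discovered from cases known to be compliant, and every case is then replayed against it so that non-fitting cases surface as potential compliance gaps. This is a minimal illustration under stated assumptions, not the authors' implementation; the event log, activity names, case identifiers, and the choice of discovery and replay algorithms are hypothetical.

```python
# Minimal sketch (not the paper's implementation): surfacing compliance gaps in an
# AI governance process with the pm4py 2.x simplified interface.
# The event log below is synthetic; activity and case names are illustrative assumptions.
import pandas as pd
import pm4py

# Hypothetical compliance event log: one row per executed compliance activity.
events = pd.DataFrame(
    [
        ("case-1", "Data governance review",     "2021-01-04"),
        ("case-1", "Conformance testing",        "2021-01-08"),
        ("case-1", "Human-in-the-loop sign-off", "2021-01-12"),
        ("case-2", "Data governance review",     "2021-01-05"),
        ("case-2", "Human-in-the-loop sign-off", "2021-01-20"),  # testing step missing
    ],
    columns=["case_id", "activity", "timestamp"],
)
events["timestamp"] = pd.to_datetime(events["timestamp"])
log = pm4py.format_dataframe(
    events, case_id="case_id", activity_key="activity", timestamp_key="timestamp"
)

# Discover a reference (normative) model from cases known to be compliant ...
reference = log[log["case:concept:name"] == "case-1"]
net, initial_marking, final_marking = pm4py.discover_petri_net_inductive(reference)

# ... then replay the whole log against it; non-fitting traces point to compliance gaps.
diagnostics = pm4py.conformance_diagnostics_token_based_replay(
    log, net, initial_marking, final_marking
)
for i, diag in enumerate(diagnostics, start=1):
    status = "conforms" if diag["trace_is_fit"] else "deviates (potential compliance gap)"
    print(f"trace {i}: {status}, fitness={diag['trace_fitness']:.2f}")
```

Token-based replay is used here only because it yields per-case fitness diagnostics cheaply; alignment-based conformance checking would localize deviations more precisely at the cost of additional computation.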
Related papers
- Know Where You're Uncertain When Planning with Multimodal Foundation Models: A Formal Framework [54.40508478482667]
We present a comprehensive framework to disentangle, quantify, and mitigate uncertainty in perception and plan generation.
We propose methods tailored to the unique properties of perception and decision-making.
We show that our uncertainty disentanglement framework reduces variability by up to 40% and enhances task success rates by 5% compared to baselines.
arXiv Detail & Related papers (2024-11-03T17:32:00Z)
- Meta-Sealing: A Revolutionizing Integrity Assurance Protocol for Transparent, Tamper-Proof, and Trustworthy AI System [0.0]
This research introduces Meta-Sealing, a cryptographic framework that fundamentally changes integrity verification in AI systems.
The framework combines advanced cryptography with distributed verification, delivering tamper-evident guarantees that achieve both mathematical rigor and computational efficiency.
arXiv Detail & Related papers (2024-10-31T15:31:22Z)
- Ethical and Scalable Automation: A Governance and Compliance Framework for Business Applications [0.0]
This paper introduces a framework for ensuring that AI is ethical, controllable, viable, and desirable.
Different case studies validate this framework by integrating AI in both academic and practical environments.
arXiv Detail & Related papers (2024-09-25T12:39:28Z)
- RegNLP in Action: Facilitating Compliance Through Automated Information Retrieval and Answer Generation [51.998738311700095]
Regulatory documents, characterized by their length, complexity and frequent updates, are challenging to interpret.
RegNLP is a multidisciplinary subfield aimed at simplifying access to and interpretation of regulatory rules and obligations.
The ObliQA dataset contains 27,869 questions derived from the Abu Dhabi Global Markets (ADGM) financial regulation document collection.
arXiv Detail & Related papers (2024-09-09T14:44:19Z)
- A Framework for Assurance Audits of Algorithmic Systems [2.2342503377379725]
We propose the criterion audit as an operationalizable compliance and assurance external audit framework.
We argue that AI audits should similarly provide assurance to their stakeholders about AI organizations' ability to govern their algorithms in ways that mitigate harms and uphold human values.
We conclude by offering a critical discussion on the benefits, inherent limitations, and implementation challenges of applying practices of the more mature financial auditing industry to AI auditing.
arXiv Detail & Related papers (2024-01-26T14:38:54Z)
- Auditing and Generating Synthetic Data with Controllable Trust Trade-offs [54.262044436203965]
We introduce a holistic auditing framework that comprehensively evaluates synthetic datasets and AI models.
It focuses on preventing bias and discrimination, ensuring fidelity to the source data, and assessing utility, robustness, and privacy preservation.
We demonstrate the framework's effectiveness by auditing various generative models across diverse use cases.
arXiv Detail & Related papers (2023-04-21T09:03:18Z)
- Exploring the Relevance of Data Privacy-Enhancing Technologies for AI Governance Use Cases [1.5293427903448022]
It is useful to view different AI governance objectives as a system of information flows.
The importance of interoperability between these different AI governance solutions becomes clear.
arXiv Detail & Related papers (2023-03-15T21:56:59Z)
- Towards a multi-stakeholder value-based assessment framework for algorithmic systems [76.79703106646967]
We develop a value-based assessment framework that visualizes closeness and tensions between values.
We give guidelines on how to operationalize them, while opening up the evaluation and deliberation process to a wide range of stakeholders.
arXiv Detail & Related papers (2022-05-09T19:28:32Z)
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
- Automated Sustainability Compliance Checking Using Process Mining and Formal Logic [0.0]
I aim to contribute to the application of compliance checking techniques for sustainability compliance.
I analyse and develop data-driven approaches that automate the task of compliance checking.
arXiv Detail & Related papers (2020-06-10T11:07:57Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.