Trustworthy Artificial Intelligence and Process Mining: Challenges and
Opportunities
- URL: http://arxiv.org/abs/2110.02707v1
- Date: Wed, 6 Oct 2021 12:50:47 GMT
- Title: Trustworthy Artificial Intelligence and Process Mining: Challenges and
Opportunities
- Authors: Andrew Pery, Majid Rafiei, Michael Simon, Wil M.P. van der Aalst
- Abstract summary: We show that process mining can provide a useful framework for gaining fact-based visibility into AI compliance process execution.
We provide an automated approach to analyze, remediate, and monitor uncertainty in AI regulatory compliance processes.
- Score: 0.8602553195689513
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The premise of this paper is that compliance with Trustworthy AI governance
best practices and regulatory frameworks is an inherently fragmented process
spanning across diverse organizational units, external stakeholders, and
systems of record, resulting in process uncertainties and in compliance gaps
that may expose organizations to reputational and regulatory risks. Moreover,
there are complexities associated with meeting the specific dimensions of
Trustworthy AI best practices such as data governance, conformance testing,
quality assurance of AI model behaviors, transparency, accountability, and
confidentiality requirements. These processes involve multiple steps,
hand-offs, re-works, and human-in-the-loop oversight. In this paper, we
demonstrate that process mining can provide a useful framework for gaining
fact-based visibility into AI compliance process execution, surfacing compliance
bottlenecks, and providing an automated approach to analyze, remediate, and
monitor uncertainty in AI regulatory compliance processes.
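The bottleneck-surfacing idea above can be sketched over a minimal event log. Everything below is hypothetical: the log format, the compliance activity names, and the timings are invented for illustration, and the mean hand-off duration is only a simplified stand-in for full process mining tooling.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical event log for an AI compliance process:
# each row is (case_id, activity, ISO timestamp).
event_log = [
    ("c1", "submit_model", "2021-01-04T09:00"),
    ("c1", "data_governance_review", "2021-01-05T10:00"),
    ("c1", "conformance_test", "2021-01-12T16:00"),
    ("c1", "sign_off", "2021-01-13T09:00"),
    ("c2", "submit_model", "2021-01-06T11:00"),
    ("c2", "data_governance_review", "2021-01-07T09:00"),
    ("c2", "conformance_test", "2021-01-20T14:00"),
    ("c2", "sign_off", "2021-01-21T10:00"),
]

def handoff_durations(log):
    """Mean waiting time (in hours) for each activity hand-off,
    averaged across all cases in the log."""
    cases = defaultdict(list)
    for case_id, activity, ts in log:
        cases[case_id].append((datetime.fromisoformat(ts), activity))
    totals, counts = defaultdict(float), defaultdict(int)
    for events in cases.values():
        events.sort()  # order each trace by timestamp
        for (t0, a0), (t1, a1) in zip(events, events[1:]):
            totals[(a0, a1)] += (t1 - t0).total_seconds() / 3600
            counts[(a0, a1)] += 1
    return {edge: totals[edge] / counts[edge] for edge in totals}

durations = handoff_durations(event_log)
# The slowest hand-off is a candidate compliance bottleneck.
bottleneck = max(durations, key=durations.get)
```

In this toy log the review-to-testing hand-off dominates, which is the kind of fact-based finding the paper argues process mining can surface from real systems of record.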
Related papers
- Coordinated Disclosure for AI: Beyond Security Vulnerabilities [1.3225694028747144]
Algorithmic flaws in machine learning (ML) models present distinct challenges compared to traditional software vulnerabilities.
To address this gap, we propose the implementation of a dedicated Coordinated Flaw Disclosure framework.
This paper delves into the historical landscape of disclosures in ML, encompassing the ad hoc reporting of harms and the emergence of participatory auditing.
arXiv Detail & Related papers (2024-02-10T20:39:04Z)
- A Framework for Assurance Audits of Algorithmic Systems [2.2342503377379725]
We propose the criterion audit as an operationalizable compliance and assurance external audit framework.
We argue that AI audits should similarly provide assurance to their stakeholders about AI organizations' ability to govern their algorithms in ways that mitigate harms and uphold human values.
We conclude by offering a critical discussion on the benefits, inherent limitations, and implementation challenges of applying practices of the more mature financial auditing industry to AI auditing.
arXiv Detail & Related papers (2024-01-26T14:38:54Z)
- Auditing and Generating Synthetic Data with Controllable Trust Trade-offs [54.262044436203965]
We introduce a holistic auditing framework that comprehensively evaluates synthetic datasets and AI models.
It focuses on preventing bias and discrimination, ensuring fidelity to the source data, and assessing utility, robustness, and privacy preservation.
We demonstrate the framework's effectiveness by auditing various generative models across diverse use cases.
arXiv Detail & Related papers (2023-04-21T09:03:18Z)
- Exploring the Relevance of Data Privacy-Enhancing Technologies for AI Governance Use Cases [1.5293427903448022]
It is useful to view different AI governance objectives as a system of information flows.
The importance of interoperability between these different AI governance solutions becomes clear.
arXiv Detail & Related papers (2023-03-15T21:56:59Z)
- Predictive Compliance Monitoring in Process-Aware Information Systems: State of the Art, Functionalities, Research Directions [0.0]
Business process compliance is a key area of business process management.
Process compliance can be checked during process design time based on verification of process models.
For existing compliance monitoring approaches, it remains unclear whether and how compliance violations can be predicted.
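As an illustrative sketch (not a method from the survey itself), run-time compliance monitoring over a partially executed trace can be reduced to checking a precedence rule and returning a three-valued verdict; the rule, activity names, and outcome labels below are assumptions.

```python
def monitor_trace(trace, before="conformance_test", target="sign_off"):
    """Classify a (possibly partial) trace against the precedence rule:
    `before` must occur prior to `target`.

    Returns "compliant", "violated", or "pending" (rule not yet
    decidable, so a violation is still preventable)."""
    if target in trace:
        i = trace.index(target)
        return "compliant" if before in trace[:i] else "violated"
    return "pending"
```

The "pending" case is where prediction would come in: a predictive monitor would estimate, from historical traces, how likely a pending case is to end in violation.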
arXiv Detail & Related papers (2022-05-10T13:38:56Z)
- Towards a multi-stakeholder value-based assessment framework for algorithmic systems [76.79703106646967]
We develop a value-based assessment framework that visualizes closeness and tensions between values.
We give guidelines on how to operationalize them, while opening up the evaluation and deliberation process to a wide range of stakeholders.
arXiv Detail & Related papers (2022-05-09T19:28:32Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By understanding the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse to this online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
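One minimal way to operationalize uncertainty-aware decision support, sketched here with an assumed entropy threshold rather than any method from the paper: compute the entropy of a model's predictive distribution and defer to a human reviewer when it is too high to act on automatically.

```python
import math

def predictive_entropy(probs):
    """Shannon entropy (in bits) of a predictive distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def decide(probs, max_entropy=0.8):
    """Return the argmax class index, or defer to a human when the
    prediction is too uncertain to act on automatically.
    The 0.8-bit threshold is an arbitrary illustrative choice."""
    if predictive_entropy(probs) > max_entropy:
        return "defer_to_human"
    return max(range(len(probs)), key=probs.__getitem__)
```

Communicating the entropy alongside the prediction, rather than only the argmax label, is the complementary form of transparency the paper argues for.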
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
- Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, the inability to explain decisions, and bias in training data are some of the most prominent limitations.
We propose the tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z)
- Automated Sustainability Compliance Checking Using Process Mining and Formal Logic [0.0]
I want to contribute to the application of compliance checking techniques for the purpose of sustainability compliance.
I want to analyse and develop data-driven approaches that allow the task of compliance checking to be automated.
arXiv Detail & Related papers (2020-06-10T11:07:57Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.