Exploring the Relevance of Data Privacy-Enhancing Technologies for AI
Governance Use Cases
- URL: http://arxiv.org/abs/2303.08956v2
- Date: Mon, 20 Mar 2023 23:39:09 GMT
- Title: Exploring the Relevance of Data Privacy-Enhancing Technologies for AI
Governance Use Cases
- Authors: Emma Bluemke, Tantum Collins, Ben Garfinkel, Andrew Trask
- Abstract summary: It is useful to view different AI governance objectives as a system of information flows.
The importance of interoperability between these different AI governance solutions becomes clear.
- Score: 1.5293427903448022
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The development of privacy-enhancing technologies has made immense progress
in reducing trade-offs between privacy and performance in data exchange and
analysis. Similar tools for structured transparency could be useful for AI
governance by offering capabilities such as external scrutiny, auditing, and
source verification. It is useful to view these different AI governance
objectives as a system of information flows in order to avoid partial solutions
and significant gaps in governance, as there may be significant overlap in the
software stacks needed for the AI governance use cases mentioned in this text.
When viewing the system as a whole, the importance of interoperability between
these different AI governance solutions becomes clear. Therefore, it is
urgently important to look at these problems in AI governance as a system,
before these standards, auditing procedures, software, and norms settle into
place.
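The abstract's claim that privacy-enhancing technologies reduce trade-offs between privacy and performance can be illustrated with a minimal sketch of one standard PET, the Laplace mechanism for differential privacy. This example is not from the paper; the function name and data are illustrative, and only Python's standard library is assumed:

```python
import random

def dp_count(values, predicate, epsilon: float) -> float:
    """Differentially private count via the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so adding Laplace noise with scale
    1/epsilon yields epsilon-differential privacy.
    """
    true_count = sum(1 for v in values if predicate(v))
    # Laplace(0, 1/epsilon) noise as the difference of two iid
    # exponentials, each with rate epsilon (mean 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Smaller epsilon -> more noise -> stronger privacy, lower accuracy.
ages = [23, 35, 41, 29, 52, 61, 38]
noisy = dp_count(ages, lambda a: a >= 40, epsilon=1.0)
```

The same epsilon parameter makes the privacy/accuracy trade-off explicit and tunable, which is what allows such mechanisms to support external scrutiny and auditing without exposing individual records.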
Related papers
- Open Problems in Technical AI Governance [93.89102632003996]
Technical AI governance refers to technical analysis and tools for supporting the effective governance of AI.
This paper is intended as a resource for technical researchers or research funders looking to contribute to AI governance.
arXiv Detail & Related papers (2024-07-20T21:13:56Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Computing Power and the Governance of Artificial Intelligence [51.967584623262674]
Governments and companies have started to leverage compute as a means to govern AI.
Compute-based policies and technologies have the potential to assist in these areas, but there is significant variation in their readiness for implementation.
Naive or poorly scoped approaches to compute governance carry significant risks in areas like privacy, economic impacts, and centralization of power.
arXiv Detail & Related papers (2024-02-13T21:10:21Z)
- Data-Centric Governance [6.85316573653194]
Current AI governance approaches consist mainly of manual review and documentation processes.
Modern AI systems are data-centric: they act on data, produce data, and are built through data engineering.
This work explores the systematization of governance requirements via datasets and algorithmic evaluations.
arXiv Detail & Related papers (2023-02-14T07:22:32Z)
- Artificial Intelligence in Governance, Risk and Compliance: Results of a study on potentials for the application of artificial intelligence (AI) in governance, risk and compliance (GRC) [0.0]
GRC (Governance, Risk and Compliance) denotes an integrated approach to governance.
Governance functions are interlinked and not separated from each other.
Artificial intelligence is being used in GRC for processing and analysis of unstructured data sets.
arXiv Detail & Related papers (2022-12-07T12:36:10Z)
- Think About the Stakeholders First! Towards an Algorithmic Transparency Playbook for Regulatory Compliance [14.043062659347427]
Laws are being proposed and passed by governments around the world to regulate Artificial Intelligence (AI) systems implemented into the public and private sectors.
Many of these regulations address the transparency of AI systems and related citizen-facing issues, such as the right to an explanation of how an AI system makes a decision that impacts an individual.
We propose a novel stakeholder-first approach that assists technologists in designing transparent, regulatory compliant systems.
arXiv Detail & Related papers (2022-06-10T09:39:00Z)
- AI Assurance using Causal Inference: Application to Public Policy [0.0]
Most AI approaches operate as "black boxes" and suffer from a lack of transparency.
It is crucial not only to develop effective and robust AI systems, but to make sure their internal processes are explainable and fair.
arXiv Detail & Related papers (2021-12-01T16:03:06Z)
- Trustworthy Artificial Intelligence and Process Mining: Challenges and Opportunities [0.8602553195689513]
We show that process mining can provide a useful framework for gaining fact-based visibility to AI compliance process execution.
We provide an automated approach to analyzing, remediating, and monitoring uncertainty in AI regulatory compliance processes.
arXiv Detail & Related papers (2021-10-06T12:50:47Z)
- Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in input data, the inability to explain decisions, and bias in training data are among the most prominent limitations.
We propose the tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z)
- Trustworthy AI Inference Systems: An Industry Research View [58.000323504158054]
We provide an industry research view for approaching the design, deployment, and operation of trustworthy AI inference systems.
We highlight opportunities and challenges in AI systems using trusted execution environments.
We outline areas of further development that require the global collective attention of industry, academia, and government researchers.
arXiv Detail & Related papers (2020-08-10T23:05:55Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose, spanning institutions, software, and hardware, and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.