Algorithmic Governance for Explainability: A Comparative Overview of
Progress and Trends
- URL: http://arxiv.org/abs/2303.00651v1
- Date: Wed, 1 Mar 2023 16:52:50 GMT
- Title: Algorithmic Governance for Explainability: A Comparative Overview of
Progress and Trends
- Authors: Yulu Pi
- Abstract summary: The lack of explainable AI (XAI) brings adverse effects that can cross all economic classes and national borders.
XAI is still in its infancy. Future applications and corresponding regulatory instruments are still dependent on the collaborative engagement of all parties.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The explainability of AI has transformed from a purely technical issue to a
complex issue closely related to algorithmic governance and algorithmic
security. The lack of explainable AI (XAI) brings adverse effects that can
cross all economic classes and national borders. Although governance, technical,
and policy exchange efforts have been made in XAI by multiple stakeholders,
including the public sector, enterprises, and international organizations, XAI
is still in its infancy. Future applications and
corresponding regulatory instruments are still dependent on the collaborative
engagement of all parties.
Related papers
- Democratizing AI Governance: Balancing Expertise and Public Participation [1.0878040851638]
The development and deployment of artificial intelligence (AI) systems, with their profound societal impacts, raise critical challenges for governance.
This article explores the tension between expert-led oversight and democratic participation, analyzing models of participatory and deliberative democracy.
Recommendations are provided for integrating these approaches into a balanced governance model tailored to the European Union.
arXiv Detail & Related papers (2025-01-16T17:47:33Z) - The explanation dialogues: an expert focus study to understand requirements towards explanations within the GDPR [47.06917254695738]
We present the Explanation Dialogues, an expert focus study to uncover the expectations, reasoning, and understanding of legal experts and practitioners towards XAI.
The study consists of an online questionnaire and follow-up interviews, and is centered around a use-case in the credit domain.
We find that the presented explanations are hard to understand and lack information, and discuss issues that can arise from the different interests of the data controller and subject.
arXiv Detail & Related papers (2025-01-09T15:50:02Z) - Open Problems in Technical AI Governance [93.89102632003996]
Technical AI governance refers to technical analysis and tools for supporting the effective governance of AI.
This paper is intended as a resource for technical researchers or research funders looking to contribute to AI governance.
arXiv Detail & Related papers (2024-07-20T21:13:56Z) - Position Paper: Technical Research and Talent is Needed for Effective AI Governance [0.0]
We survey policy documents published by public-sector institutions in the EU, US, and China.
We highlight specific areas of disconnect between the technical requirements necessary for enacting proposed policy actions, and the current technical state of the art.
Our analysis motivates a call for tighter integration of the AI/ML research community within AI governance.
arXiv Detail & Related papers (2024-06-11T06:32:28Z) - Securing the Future of GenAI: Policy and Technology [50.586585729683776]
Governments globally are grappling with the challenge of regulating GenAI, balancing innovation against safety.
A workshop co-organized by Google, the University of Wisconsin-Madison, and Stanford University aimed to bridge this gap between GenAI policy and technology.
This paper summarizes the workshop discussions, which addressed questions such as: How can regulation be designed without hindering technological progress?
arXiv Detail & Related papers (2024-05-21T20:30:01Z) - False Sense of Security in Explainable Artificial Intelligence (XAI) [3.298597939573779]
We argue that AI regulations and current market conditions threaten effective AI governance and safety.
Unless governments explicitly tackle the issue of explainability through clear legislative and policy statements, AI governance risks becoming a vacuous "box-ticking" exercise.
arXiv Detail & Related papers (2024-05-06T20:02:07Z) - Computing Power and the Governance of Artificial Intelligence [51.967584623262674]
Governments and companies have started to leverage compute as a means to govern AI.
Compute-based policies and technologies have the potential to assist in these governance efforts, but there is significant variation in their readiness for implementation.
Naive or poorly scoped approaches to compute governance carry significant risks in areas like privacy, economic impacts, and centralization of power.
arXiv Detail & Related papers (2024-02-13T21:10:21Z) - Beyond XAI: Obstacles Towards Responsible AI [0.0]
Methods of explainability and their evaluation strategies present numerous limitations in real-world contexts.
In this paper, we explore these limitations and discuss their implications in a broader context of responsible AI.
arXiv Detail & Related papers (2023-09-07T11:08:14Z) - Connecting Algorithmic Research and Usage Contexts: A Perspective of
Contextualized Evaluation for Explainable AI [65.44737844681256]
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z) - Fairness in Agreement With European Values: An Interdisciplinary
Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI regulation should take to make the AI Act a success with respect to AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)