Algorithmic Governance for Explainability: A Comparative Overview of
Progress and Trends
- URL: http://arxiv.org/abs/2303.00651v1
- Date: Wed, 1 Mar 2023 16:52:50 GMT
- Title: Algorithmic Governance for Explainability: A Comparative Overview of
Progress and Trends
- Authors: Yulu Pi
- Abstract summary: The lack of explainable AI (XAI) has adverse effects that can cross all economic classes and national borders.
XAI is still in its infancy. Future applications and corresponding regulatory instruments are still dependent on the collaborative engagement of all parties.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The explainability of AI has transformed from a purely technical
issue into a complex issue closely tied to algorithmic governance and
algorithmic security. The lack of explainable AI (XAI) brings adverse effects
that can cross all economic classes and national borders. Although multiple
stakeholders, including the public sector, enterprises, and international
organizations, have made efforts in governance, technology, and policy
exchange on XAI, the field is still in its infancy. Future applications and
the corresponding regulatory instruments still depend on the collaborative
engagement of all parties.
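To ground what "explainability" means in practice, the sketch below shows one
widely used post-hoc XAI technique, permutation feature importance: a feature
matters to a trained model if shuffling its values degrades predictive
accuracy. This is an illustrative example only, not a method from the paper;
the dataset, model, and scikit-learn usage are stand-in assumptions.

```python
# Illustrative post-hoc explanation via permutation feature importance.
# NOTE: a hedged sketch, not a method from the paper; the dataset
# (breast cancer) and model (random forest) are arbitrary stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load a tabular dataset as a DataFrame so feature names are preserved.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature 10 times on held-out data and record the mean drop
# in accuracy: the larger the drop, the more the model relies on it.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
top = sorted(zip(X.columns, result.importances_mean),
             key=lambda pair: -pair[1])[:5]
for name, score in top:
    print(f"{name}: {score:.3f}")
```

Explanations like these give regulators and auditors a concrete artifact to
inspect, which is one reason explainability recurs in the governance
proposals listed below.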
Related papers
- Unlocking the Black Box: Analysing the EU Artificial Intelligence Act's Framework for Explainability in AI
The need for eXplainable AI (XAI) is evident in fields such as healthcare, credit scoring, policing and the criminal justice system.
At the EU level, the notion of explainability is one of the fundamental principles that underpin the AI Act.
This paper explores various approaches and techniques that promise to advance XAI, as well as the challenges of implementing the principle of explainability in AI governance and policies.
arXiv Detail & Related papers (2025-01-24T16:30:19Z)
- Democratizing AI Governance: Balancing Expertise and Public Participation
The development and deployment of artificial intelligence (AI) systems, with their profound societal impacts, raise critical challenges for governance.
This article explores the tension between expert-led oversight and democratic participation, analyzing models of participatory and deliberative democracy.
Recommendations are provided for integrating these approaches into a balanced governance model tailored to the European Union.
arXiv Detail & Related papers (2025-01-16T17:47:33Z)
- Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders
The responsible AI (RAI) community has introduced numerous processes and artifacts to facilitate transparency and support the governance of AI systems.
We conduct semi-structured interviews with 19 government, legal, and civil society stakeholders who inform policy and advocacy around responsible AI efforts.
We organize the beliefs expressed in these interviews into four barriers that help explain how RAI artifacts may (inadvertently) reconfigure power relations across civil society, government, and industry.
arXiv Detail & Related papers (2024-08-22T00:14:37Z)
- Open Problems in Technical AI Governance
Technical AI governance refers to technical analysis and tools for supporting the effective governance of AI.
This paper is intended as a resource for technical researchers or research funders looking to contribute to AI governance.
arXiv Detail & Related papers (2024-07-20T21:13:56Z)
- Position Paper: Technical Research and Talent is Needed for Effective AI Governance
We survey policy documents published by public-sector institutions in the EU, US, and China.
We highlight specific areas of disconnect between the technical requirements necessary for enacting proposed policy actions, and the current technical state of the art.
Our analysis motivates a call for tighter integration of the AI/ML research community within AI governance.
arXiv Detail & Related papers (2024-06-11T06:32:28Z)
- Securing the Future of GenAI: Policy and Technology
Governments globally are grappling with the challenge of regulating GenAI, balancing innovation against safety.
A workshop co-organized by Google, the University of Wisconsin-Madison, and Stanford University aimed to bridge the gap between GenAI policy and technology.
This paper summarizes the workshop discussions, which addressed questions such as: How can regulation be designed without hindering technological progress?
arXiv Detail & Related papers (2024-05-21T20:30:01Z)
- False Sense of Security in Explainable Artificial Intelligence (XAI)
We argue that AI regulations and current market conditions threaten effective AI governance and safety.
Unless governments explicitly tackle the issue of explainability through clear legislative and policy statements, AI governance risks becoming a vacuous "box-ticking" exercise.
arXiv Detail & Related papers (2024-05-06T20:02:07Z)
- Computing Power and the Governance of Artificial Intelligence
Governments and companies have started to leverage compute as a means to govern AI.
Compute-based policies and technologies have the potential to assist in these areas, but there is significant variation in their readiness for implementation.
Naive or poorly scoped approaches to compute governance carry significant risks in areas like privacy, economic impacts, and centralization of power.
arXiv Detail & Related papers (2024-02-13T21:10:21Z)
- How VADER is your AI? Towards a definition of artificial intelligence systems appropriate for regulation
Recent AI regulation proposals adopt definitions of AI so broad that they also affect ICT techniques, approaches, and systems that are not AI.
We propose a framework to score how validated as appropriately-defined for regulation (VADER) an AI definition is.
arXiv Detail & Related papers (2024-02-07T17:41:15Z)
- Beyond XAI: Obstacles Towards Responsible AI
Methods of explainability and their evaluation strategies present numerous limitations in real-world contexts.
In this paper, we explore these limitations and discuss their implications in the broader context of responsible AI.
arXiv Detail & Related papers (2023-09-07T11:08:14Z)
- Regulating eXplainable Artificial Intelligence (XAI) May Harm Consumers
Common wisdom is that regulating AI by mandating fully transparent XAI leads to greater social welfare.
Our paper challenges this notion through a game-theoretic model of a policy-maker who maximizes social welfare.
We study the notion of XAI fairness, which may be impossible to guarantee even under mandatory XAI.
arXiv Detail & Related papers (2022-09-07T23:36:11Z)
- Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose roles that AI regulation should take to make the AI Act a success with respect to AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.