Regulating eXplainable Artificial Intelligence (XAI) May Harm Consumers
- URL: http://arxiv.org/abs/2209.03499v3
- Date: Fri, 29 Mar 2024 20:22:00 GMT
- Title: Regulating eXplainable Artificial Intelligence (XAI) May Harm Consumers
- Authors: Behnam Mohammadi, Nikhil Malik, Tim Derdenger, Kannan Srinivasan
- Abstract summary: Common wisdom is that regulating AI by mandating fully transparent XAI leads to greater social welfare.
Our paper challenges this notion through a game theoretic model of a policy-maker who maximizes social welfare.
We study the notion of XAI fairness, which may be impossible to guarantee even under mandatory XAI.
- Score: 3.989227271669354
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent AI algorithms are black box models whose decisions are difficult to interpret. eXplainable AI (XAI) is a class of methods that seek to address the lack of AI interpretability and trust by explaining AI decisions to customers. The common wisdom is that regulating AI by mandating fully transparent XAI leads to greater social welfare. Our paper challenges this notion through a game theoretic model of a policy-maker who maximizes social welfare, firms in a duopoly competition that maximize profits, and heterogeneous consumers. The results show that XAI regulation may be redundant. In fact, mandating fully transparent XAI may make firms and consumers worse off. This reveals a tradeoff between maximizing welfare and receiving explainable AI outputs. We extend the existing literature on both methodological and substantive fronts, and we introduce and study the notion of XAI fairness, which may be impossible to guarantee even under mandatory XAI. Finally, the regulatory and managerial implications of our results for policy-makers and businesses are discussed, respectively.
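To make the abstract's welfare tradeoff concrete, here is a minimal, hypothetical sketch in Python of a stylized 2x2 transparency game. The game structure and all payoff numbers are illustrative assumptions of ours, not the paper's actual model (which features price competition and heterogeneous consumers); they are chosen only to show how a transparency mandate can force play away from the market equilibrium and lower total welfare.

```python
# Hypothetical toy game (NOT the paper's model): each firm chooses
# partial (0) or full (1) XAI transparency. All payoff numbers below
# are made up purely to illustrate the abstract's qualitative claim.
from itertools import product

# payoff[(a1, a2)] = (firm 1 profit, firm 2 profit, consumer surplus)
payoff = {
    (0, 0): (5.0, 5.0, 6.0),  # both partial: the unregulated equilibrium here
    (0, 1): (6.0, 3.0, 5.0),
    (1, 0): (3.0, 6.0, 5.0),
    (1, 1): (2.0, 2.0, 4.0),  # both full: everyone worse off in this toy setup
}

def is_nash(a1, a2):
    """Neither firm can gain by unilaterally changing its transparency."""
    p1, p2, _ = payoff[(a1, a2)]
    return (all(payoff[(d, a2)][0] <= p1 for d in (0, 1))
            and all(payoff[(a1, d)][1] <= p2 for d in (0, 1)))

def welfare(a1, a2):
    """Social welfare = total profit + consumer surplus."""
    p1, p2, cs = payoff[(a1, a2)]
    return p1 + p2 + cs

# Without regulation, play settles on a pure-strategy Nash equilibrium.
for eq in (a for a in product((0, 1), repeat=2) if is_nash(*a)):
    print(f"unregulated equilibrium {eq}: welfare = {welfare(*eq)}")  # (0, 0): 16.0

# A mandate forces full transparency (1, 1) regardless of incentives.
print(f"mandated (1, 1): welfare = {welfare(1, 1)}")  # 8.0
```

In this toy game the unique unregulated equilibrium is (0, 0) with welfare 16, while the mandate forces (1, 1) with welfare 8 and lower profits and consumer surplus alike, mirroring the paper's qualitative finding that mandatory full transparency can leave both firms and consumers worse off.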
Related papers
- Dataset | Mindset = Explainable AI | Interpretable AI [36.001670039529586]
"explainable" Artificial Intelligence (XAI)" and "interpretable AI (IAI)" interchangeably when we apply various XAI tools for a given dataset to explain the reasons that underpin machine learning (ML) outputs.
We argue that XAI is a subset of IAI: the concept of IAI extends beyond the sphere of a dataset to include the domain of a mindset.
We aim to clarify these notions and lay the foundation of XAI, IAI, EAI, and TAI for many practitioners and policymakers in future AI applications and research.
arXiv Detail & Related papers (2024-08-22T14:12:53Z) - Investigating the Role of Explainability and AI Literacy in User Compliance [2.8623940003518156]
We find that users' compliance increases with the introduction of XAI but is also affected by AI literacy.
We also find that the relationships between AI literacy, XAI, and users' compliance are mediated by the users' mental model of AI.
arXiv Detail & Related papers (2024-06-18T14:28:12Z) - Usable XAI: 10 Strategies Towards Exploiting Explainability in the LLM Era [77.174117675196]
XAI is being extended towards Large Language Models (LLMs).
This paper analyzes how XAI can benefit LLMs and AI systems.
We introduce 10 strategies, presenting the key techniques for each and discussing their associated challenges.
arXiv Detail & Related papers (2024-03-13T20:25:27Z) - Explainable AI is Responsible AI: How Explainability Creates Trustworthy
and Socially Responsible Artificial Intelligence [9.844540637074836]
Responsible AI emphasizes the need to develop trustworthy AI systems.
XAI has been broadly considered a building block for responsible AI (RAI).
Our findings lead us to conclude that XAI is an essential foundation for every pillar of RAI.
arXiv Detail & Related papers (2023-12-04T00:54:04Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - Why is plausibility surprisingly problematic as an XAI criterion? [38.0428570713717]
We conduct the first critical examination of a common XAI criterion: plausibility.
It measures how convincing the AI explanation is to humans.
We do not recommend using plausibility as a criterion to evaluate or optimize XAI algorithms.
arXiv Detail & Related papers (2023-03-30T20:59:44Z) - Monetizing Explainable AI: A Double-edged Sword [0.0]
Explainable artificial intelligence (XAI) aims to provide insights into the logic of algorithmic decision-making.
Despite much research on the topic, consumer-facing applications of XAI remain rare.
We introduce and describe a novel monetization strategy for fusing algorithmic explanations with programmatic advertising via an explanation platform.
arXiv Detail & Related papers (2023-03-27T15:50:41Z) - Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - A User-Centred Framework for Explainable Artificial Intelligence in
Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)