Monetizing Explainable AI: A Double-edged Sword
- URL: http://arxiv.org/abs/2304.06483v1
- Date: Mon, 27 Mar 2023 15:50:41 GMT
- Title: Monetizing Explainable AI: A Double-edged Sword
- Authors: Travis Greene, Sofie Goethals, David Martens, Galit Shmueli
- Abstract summary: Explainable artificial intelligence (XAI) aims to provide insights into the logic of algorithmic decision-making.
Despite much research on the topic, consumer-facing applications of XAI remain rare.
We introduce and describe a novel monetization strategy for fusing algorithmic explanations with programmatic advertising via an explanation platform.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Algorithms used by organizations increasingly wield power in society as they
decide the allocation of key resources and basic goods. In order to promote
fairer, more just, and more transparent uses of such decision-making power,
explainable artificial intelligence (XAI) aims to provide insights into the
logic of algorithmic decision-making. Despite much research on the topic,
consumer-facing applications of XAI remain rare. A central reason may be that a
viable platform-based monetization strategy for this new technology has yet to
be found. We introduce and describe a novel monetization strategy for fusing
algorithmic explanations with programmatic advertising via an explanation
platform. We claim the explanation platform represents a new,
socially-impactful, and profitable form of human-algorithm interaction and
estimate its potential for revenue generation in the high-risk domains of
finance, hiring, and education. We then consider possible undesirable and
unintended effects of monetizing XAI and simulate these scenarios using
real-world credit lending data. Ultimately, we argue that monetizing XAI may be
a double-edged sword: while monetization may incentivize industry adoption of
XAI in a variety of consumer applications, it may also conflict with the
original legal and ethical justifications for developing XAI. We conclude by
discussing whether there may be ways to responsibly and democratically harness
the potential of monetized XAI to provide greater consumer access to
algorithmic explanations.
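The abstract describes pairing algorithmic explanations with programmatic advertising on an "explanation platform" and estimating the resulting revenue. As a rough illustration only (not the paper's actual model or data), the idea can be sketched in a few lines of Python; the credit rule, income threshold, ad copy, and CPM figure below are all invented for illustration:

```python
# Toy sketch (not from the paper): a counterfactual-style denial explanation
# served next to a programmatic ad slot, plus a back-of-the-envelope revenue
# estimate. All thresholds and prices are assumptions for illustration.

INCOME_THRESHOLD = 50_000  # hypothetical approval cutoff

def decide(applicant):
    """Stand-in credit rule: approve iff income clears the threshold."""
    return applicant["income"] >= INCOME_THRESHOLD

def counterfactual_explanation(applicant):
    """Minimal counterfactual: smallest income change that flips the decision."""
    gap = INCOME_THRESHOLD - applicant["income"]
    if gap <= 0:
        return "Approved: no change needed."
    return f"Denied: raising income by ${gap:,} would flip the decision."

def explanation_page(applicant, ad):
    """An 'explanation platform' page: the explanation plus an ad slot
    that can be targeted using what the explanation reveals about the user."""
    return {"explanation": counterfactual_explanation(applicant), "ad": ad}

def estimated_revenue(n_explanations_served, cpm_usd):
    """Ad revenue at an assumed CPM (price per 1,000 impressions)."""
    return n_explanations_served / 1000 * cpm_usd

applicant = {"income": 42_000}
page = explanation_page(applicant, ad="Hypothetical credit-builder offer")
print(page["explanation"])  # Denied: raising income by $8,000 would flip the decision.
print(estimated_revenue(1_000_000, cpm_usd=5.0))  # 5000.0
```

The sketch also hints at the paper's "double-edged sword": the same counterfactual that informs the consumer doubles as a targeting signal for the advertiser.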
Related papers
- A Survey of Explainable Artificial Intelligence (XAI) in Financial Time Series Forecasting [1.2937020918620652]
The field of eXplainable AI (XAI) aims to make AI models more understandable.
This paper categorizes XAI approaches that predict financial time series.
It provides a comprehensive view of XAI's current role in finance.
arXiv Detail & Related papers (2024-07-22T17:06:19Z)
- Usable XAI: 10 Strategies Towards Exploiting Explainability in the LLM Era [77.174117675196]
XAI is being extended towards Large Language Models (LLMs).
This paper analyzes how XAI can benefit LLMs and AI systems.
We introduce 10 strategies, introducing the key techniques for each and discussing their associated challenges.
arXiv Detail & Related papers (2024-03-13T20:25:27Z)
- A Time Series Approach to Explainability for Neural Nets with Applications to Risk-Management and Fraud Detection [0.0]
Trust in technology is enabled by understanding the rationale behind the predictions made.
For cross-sectional data, classical XAI approaches can lead to valuable insights about the models' inner workings.
We propose a novel XAI technique for deep learning methods which preserves and exploits the natural time ordering of the data.
arXiv Detail & Related papers (2022-12-06T12:04:01Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Regulating eXplainable Artificial Intelligence (XAI) May Harm Consumers [3.989227271669354]
Common wisdom is that regulating AI by mandating fully transparent XAI leads to greater social welfare.
Our paper challenges this notion through a game theoretic model of a policy-maker who maximizes social welfare.
We study the notion of XAI fairness, which may be impossible to guarantee even under mandatory XAI.
arXiv Detail & Related papers (2022-09-07T23:36:11Z)
- Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI [65.44737844681256]
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Explainable Artificial Intelligence Approaches: A Survey [0.22940141855172028]
The lack of explainability of decisions from Artificial Intelligence-based "black box" systems/models is a key stumbling block for adopting AI in high-stakes applications.
We demonstrate popular Explainable Artificial Intelligence (XAI) methods with a mutual case study/task.
We analyze the competitive advantages of these methods from multiple perspectives.
We recommend paths towards responsible or human-centered AI using XAI as a medium.
arXiv Detail & Related papers (2021-01-23T06:15:34Z)
- Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey [2.7086321720578623]
The black-box nature of deep neural networks challenges their use in mission-critical applications.
XAI promotes a set of tools, techniques, and algorithms that can generate high-quality interpretable, intuitive, human-understandable explanations of AI decisions.
arXiv Detail & Related papers (2020-06-16T02:58:10Z)
- Learning from Learning Machines: Optimisation, Rules, and Social Norms [91.3755431537592]
It appears that the area of AI that is most analogous to the behaviour of economic entities is that of morally good decision-making.
Recent successes of deep learning for AI suggest that more implicit specifications work better than explicit ones for solving such problems.
arXiv Detail & Related papers (2019-12-29T17:42:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.