Algorithmic Transparency with Strategic Users
- URL: http://arxiv.org/abs/2008.09283v1
- Date: Fri, 21 Aug 2020 03:10:42 GMT
- Title: Algorithmic Transparency with Strategic Users
- Authors: Qiaochu Wang, Yan Huang, Stefanus Jasin, Param Vir Singh
- Abstract summary: We show that, in some cases, even the predictive power of machine learning algorithms may increase if the firm makes them transparent.
- Score: 9.289838852590732
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Should firms that apply machine learning algorithms in their decision-making
make their algorithms transparent to the users they affect? Despite growing
calls for algorithmic transparency, most firms have kept their algorithms
opaque, citing potential gaming by users that may negatively affect the
algorithm's predictive power. We develop an analytical model to compare firm
and user surplus with and without algorithmic transparency in the presence of
strategic users and present novel insights. We identify a broad set of
conditions under which making the algorithm transparent benefits the firm. We
show that, in some cases, even the predictive power of machine learning
algorithms may increase if the firm makes them transparent. By contrast, users
may not always be better off under algorithmic transparency. The results hold
even when the predictive power of the opaque algorithm comes largely from
correlational features and the cost for users to improve on them is close to
zero. Overall, our results show that firms should not view manipulation by
users as bad. Rather, they should use algorithmic transparency as a lever to
motivate users to invest in more desirable features.
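The abstract's mechanism can be made concrete with a toy simulation. The sketch below is not the paper's analytical model: the weights, improvement costs, and distributions are all illustrative assumptions. It contrasts a published rule that rewards a causal feature ("skill") with one that rewards a correlational feature ("proxy") once strategic users can respond.
```python
# Toy illustration (assumed parameters, not the paper's model): users see
# the scoring weights and invest one unit in a feature whenever the score
# gain exceeds their private cost.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

skill = rng.normal(size=n)            # causal feature: drives the outcome
proxy = skill + rng.normal(size=n)    # correlational feature: tracks skill
cost = rng.uniform(0.0, 2.0, size=n)  # each user's cost of one unit of improvement

def run(w_skill, w_proxy, transparent):
    """Score users; under transparency, users respond strategically."""
    s, p = skill.copy(), proxy.copy()
    if transparent:
        s[cost < w_skill] += 1.0      # investment in the causal feature
        p[cost < w_proxy] += 1.0      # gaming of the correlational feature
    outcome = s + rng.normal(scale=0.5, size=n)  # outcome depends on skill only
    score = w_skill * s + w_proxy * p
    return np.corrcoef(score, outcome)[0, 1]

print(f"opaque, proxy-heavy rule:      {run(0.2, 1.0, False):.3f}")
print(f"transparent, proxy-heavy rule: {run(0.2, 1.0, True):.3f}")  # gaming erodes prediction
print(f"transparent, skill-heavy rule: {run(1.0, 0.2, True):.3f}")  # investment preserves it
```
Under these assumptions, gaming the proxy injects score variation that is uncorrelated with the outcome, so the proxy-heavy rule loses predictive power under transparency, while investment in skill moves the outcome itself, so the skill-heavy rule keeps or even gains predictive power, echoing the abstract's claim.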
Related papers
- Measuring Strategization in Recommendation: Users Adapt Their Behavior to Shape Future Content [66.71102704873185]
We test for user strategization by conducting a lab experiment and survey.
We find strong evidence of strategization across outcome metrics, including participants' dwell time and use of "likes".
Our findings suggest that platforms cannot ignore the effect of their algorithms on user behavior.
arXiv Detail & Related papers (2024-05-09T07:36:08Z)
- Reputational Algorithm Aversion [0.0]
This paper shows how algorithm aversion arises when the choice to follow an algorithm conveys information about a human's ability.
I develop a model in which workers make forecasts of an uncertain outcome based on their own private information and an algorithm's signal.
arXiv Detail & Related papers (2024-02-23T16:28:55Z)
- User Strategization and Trustworthy Algorithms [81.82279667028423]
We show that user strategization can actually help platforms in the short term.
We then show that it corrupts platforms' data and ultimately hurts their ability to make counterfactual decisions.
arXiv Detail & Related papers (2023-12-29T16:09:42Z)
- Algorithmic Transparency and Manipulation [0.0]
A series of recent papers raises worries about the manipulative potential of algorithmic transparency.
This paper draws attention to the indifference view of manipulation, which explains the manipulative potential of algorithmic transparency better than the vulnerability view does.
arXiv Detail & Related papers (2023-11-22T10:09:06Z)
- Influence of the algorithm's reliability and transparency in the user's decision-making process [0.0]
We conduct an online empirical study with 61 participants to examine how changes in an algorithm's transparency and reliability affect users' decision-making process.
The results indicate that people place at least moderate confidence in the algorithm's decisions even when its reliability is poor.
arXiv Detail & Related papers (2023-07-13T03:13:49Z)
- Rethinking People Analytics With Inverse Transparency by Design [57.67333075002697]
We propose a new design approach for workforce analytics we refer to as inverse transparency by design.
We find that the required architectural changes can be made without inhibiting core functionality.
We conclude that inverse transparency by design is a promising approach to realize accepted and responsible people analytics.
arXiv Detail & Related papers (2023-05-16T21:37:35Z)
- Users are the North Star for AI Transparency [111.5679109784322]
Despite widespread calls for transparent artificial intelligence systems, the term is too overburdened with disparate meanings to express precise policy aims or to orient concrete lines of research.
This is partly because a clear ideal of AI transparency goes unstated in this body of work.
We explicitly name such a north star -- transparency that is user-centered, user-appropriate, and honest.
arXiv Detail & Related papers (2023-03-09T18:53:29Z)
- FAIRLEARN: Configurable and Interpretable Algorithmic Fairness [1.2183405753834557]
There is a need to mitigate any bias arising either from the training samples or from implicit assumptions made about the data.
Many approaches have been proposed to make learning algorithms fair by detecting and mitigating bias in different stages of optimization.
We propose the FAIRLEARN procedure that produces a fair algorithm by incorporating user constraints into the optimization procedure.
arXiv Detail & Related papers (2021-11-17T03:07:18Z)
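The FAIRLEARN entry above describes incorporating user constraints into the optimization procedure. The sketch below is a generic illustration of that idea, not the paper's actual FAIRLEARN procedure: a logistic regression trained with a user-chosen demographic-parity penalty, where the synthetic data and the penalty weight `lam` are assumptions.
```python
# Generic constraint-in-the-objective fairness sketch (not FAIRLEARN itself):
# the user-supplied knob `lam` penalizes the gap in mean predicted score
# between protected groups.
import numpy as np

rng = np.random.default_rng(0)
n, d = 2_000, 5
X = rng.normal(size=(n, d))
group = rng.integers(0, 2, size=n)  # protected attribute
# Biased labels: the outcome leans on group membership as well as features.
y = (X[:, 0] + 0.8 * group + rng.normal(scale=0.5, size=n) > 0.5).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(lam, steps=2_000, lr=0.1):
    w = np.zeros(d)
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / n                    # logistic-loss gradient
        # Demographic-parity penalty: lam * (gap in mean predicted score)^2.
        gap = p[group == 1].mean() - p[group == 0].mean()
        dp = p * (1 - p)                            # derivative of sigmoid
        g1 = (X[group == 1] * dp[group == 1, None]).mean(axis=0)
        g0 = (X[group == 0] * dp[group == 0, None]).mean(axis=0)
        grad += lam * 2 * gap * (g1 - g0)
        w -= lr * grad
    return w

for lam in (0.0, 5.0):
    p = sigmoid(X @ train(lam))
    gap = p[group == 1].mean() - p[group == 0].mean()
    acc = ((p > 0.5) == y).mean()
    print(f"lam={lam}: parity gap={gap:.3f}, accuracy={acc:.3f}")
```
Raising `lam` trades a little accuracy for a smaller parity gap, which is the kind of user-configurable constraint the entry describes.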
- Double Coverage with Machine-Learned Advice [100.23487145400833]
We study the fundamental online $k$-server problem in a learning-augmented setting.
We show that, for any $k$, our algorithm achieves an almost optimal consistency-robustness tradeoff.
arXiv Detail & Related papers (2021-03-02T11:04:33Z)
- Greedy Algorithm almost Dominates in Smoothed Contextual Bandits [100.09904315064372]
Online learning algorithms must balance exploration and exploitation.
We show that a greedy approach almost matches the best possible Bayesian regret rate of any other algorithm.
arXiv Detail & Related papers (2020-05-19T18:11:40Z)
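To make the greedy-bandit entry above concrete, here is a minimal sketch of an exploration-free (greedy) linear contextual bandit; the per-arm ridge estimates, noise scale, and Gaussian contexts standing in for the smoothed setting are illustrative assumptions, not the paper's construction.
```python
# Greedy linear contextual bandit sketch: no explicit exploration; each
# round fits ridge regression per arm and pulls the arm with the highest
# predicted reward.
import numpy as np

rng = np.random.default_rng(0)
d, K, T = 5, 3, 5_000
theta = rng.normal(size=(K, d))     # unknown true arm parameters

A = [np.eye(d) for _ in range(K)]   # per-arm ridge Gram matrices
b = [np.zeros(d) for _ in range(K)]
regret = 0.0

for t in range(T):
    x = rng.normal(size=d)          # stochastic ("smoothed") context
    est = [np.linalg.solve(A[k], b[k]) for k in range(K)]
    k = int(np.argmax([x @ e for e in est]))     # purely greedy choice
    r = x @ theta[k] + rng.normal(scale=0.1)     # noisy linear reward
    A[k] += np.outer(x, x)
    b[k] += r * x
    regret += max(x @ th for th in theta) - x @ theta[k]

print(f"average per-round regret after {T} rounds: {regret / T:.4f}")
```
Because contexts arrive with inherent randomness, the greedy choice keeps gathering informative data for every arm it plays, which is the intuition behind greedy's strong performance in smoothed settings.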