User Strategization and Trustworthy Algorithms
- URL: http://arxiv.org/abs/2312.17666v1
- Date: Fri, 29 Dec 2023 16:09:42 GMT
- Title: User Strategization and Trustworthy Algorithms
- Authors: Sarah H. Cen, Andrew Ilyas, Aleksander Madry
- Abstract summary: We show that user strategization can actually help platforms in the short term.
We then show that it corrupts platforms' data and ultimately hurts their ability to make counterfactual decisions.
- Score: 81.82279667028423
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many human-facing algorithms -- including those that power recommender
systems or hiring decision tools -- are trained on data provided by their
users. The developers of these algorithms commonly adopt the assumption that
the data generating process is exogenous: that is, how a user reacts to a given
prompt (e.g., a recommendation or hiring suggestion) depends on the prompt and
not on the algorithm that generated it. For example, the assumption that a
person's behavior follows a ground-truth distribution is an exogeneity
assumption. In practice, when algorithms interact with humans, this assumption
rarely holds because users can be strategic. Recent studies document, for
example, TikTok users changing their scrolling behavior after learning that
TikTok uses it to curate their feed, and Uber drivers changing how they accept
and cancel rides in response to changes in Uber's algorithm.
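To make the exogeneity assumption concrete, here is a minimal Python sketch (hypothetical function names and a toy response model, not taken from the paper) contrasting an exogenous user, whose reaction depends only on the prompt, with a strategic user, whose reaction also depends on how the platform will use the signal:

```python
import random

def exogenous_response(prompt_quality: float) -> float:
    """Under exogeneity, engagement depends only on the prompt itself."""
    return prompt_quality + random.gauss(0.0, 0.1)

def strategic_response(prompt_quality: float, knows_algorithm: bool) -> float:
    """A strategic user also conditions on how the platform uses the signal,
    e.g., suppressing engagement to steer future recommendations."""
    engagement = prompt_quality + random.gauss(0.0, 0.1)
    if knows_algorithm:
        # The user deliberately distorts the signal (e.g., skipping content
        # they mildly enjoy so the feed is not flooded with it later).
        engagement *= 0.5
    return engagement

# Same prompt, different observed behavior depending on the user's strategy.
print(exogenous_response(0.7), strategic_response(0.7, knows_algorithm=True))
```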
Our work studies the implications of this strategic behavior by modeling the
interactions between a user and their data-driven platform as a repeated,
two-player game. We first find that user strategization can actually help
platforms in the short term. We then show that it corrupts platforms' data and
ultimately hurts their ability to make counterfactual decisions. We connect
this phenomenon to user trust, and show that designing trustworthy algorithms
can go hand in hand with accurate estimation. Finally, we provide a
formalization of trustworthiness that inspires potential interventions.
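As a rough illustration of the repeated game described above, the following sketch (assuming a scalar preference, a fixed strategic distortion, and a simple moving-average update; a toy simulation, not the paper's formal model) shows how strategized signals can corrupt the platform's estimates and, in turn, its counterfactual decisions:

```python
import numpy as np

rng = np.random.default_rng(0)

true_preference = 0.8   # the user's actual taste for a content type
strategic_bias = -0.3   # distortion the user applies when strategizing
estimate = 0.5          # the platform's running estimate of the preference
learning_rate = 0.1

for t in range(100):
    # The user reports engagement; a strategic user shades the signal to
    # shape future recommendations, so the data no longer reflects true taste.
    is_strategic = True
    signal = true_preference + (strategic_bias if is_strategic else 0.0)
    signal += rng.normal(0.0, 0.05)

    # The platform updates its estimate as if the signal were exogenous.
    estimate += learning_rate * (signal - estimate)

# The estimate converges toward the distorted signal, not the true preference,
# which is what corrupts counterfactual decisions downstream.
print(f"true preference: {true_preference:.2f}, platform estimate: {estimate:.2f}")
```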
Related papers
- Algorithmic Drift: A Simulation Framework to Study the Effects of Recommender Systems on User Preferences [7.552217586057245]
We propose a simulation framework that mimics user-recommender system interactions in a long-term scenario.
We introduce two novel metrics for quantifying the algorithm's impact on user preferences, specifically in terms of drift over time.
arXiv Detail & Related papers (2024-09-24T21:54:22Z)
- Measuring Strategization in Recommendation: Users Adapt Their Behavior to Shape Future Content [66.71102704873185]
We test for user strategization by conducting a lab experiment and survey.
We find strong evidence of strategization across outcome metrics, including participants' dwell time and use of "likes".
Our findings suggest that platforms cannot ignore the effect of their algorithms on user behavior.
arXiv Detail & Related papers (2024-05-09T07:36:08Z)
- Learnability Gaps of Strategic Classification [68.726857356532]
We focus on a fundamental question: what are the learnability gaps between strategic classification and standard learning?
We provide nearly tight sample complexity and regret bounds, offering significant improvements over prior results.
Notably, our algorithm in this setting is of independent interest and can be applied to other problems such as multi-label learning.
arXiv Detail & Related papers (2024-02-29T16:09:19Z)
- Exploring Gender Disparities in Bumble's Match Recommendations [0.27309692684728604]
We study bias and discrimination in the context of Bumble, an online dating platform in India.
We conduct an experiment to identify and address the presence of bias in the matching algorithms Bumble pushes to its users.
arXiv Detail & Related papers (2023-12-15T09:09:42Z)
- Rethinking People Analytics With Inverse Transparency by Design [57.67333075002697]
We propose a new design approach for workforce analytics we refer to as inverse transparency by design.
We find that the required architectural changes can be made without inhibiting core functionality.
We conclude that inverse transparency by design is a promising approach to realize accepted and responsible people analytics.
arXiv Detail & Related papers (2023-05-16T21:37:35Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach to auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Strategic Representation [20.43010800051863]
Strategic machines might craft representations that manipulate their users.
We formalize this as a learning problem, and pursue algorithms for decision-making that are robust to manipulation.
Our main result is a learning algorithm that minimizes error despite strategic representations.
arXiv Detail & Related papers (2022-06-17T04:20:57Z)
- Sayer: Using Implicit Feedback to Optimize System Policies [63.992191765269396]
We develop a methodology that leverages implicit feedback to evaluate and train new system policies.
Sayer builds on two ideas from reinforcement learning to leverage data collected by an existing policy.
We show that Sayer can evaluate arbitrary policies accurately, and train new policies that outperform the production policies.
arXiv Detail & Related papers (2021-10-28T04:16:56Z)
- Learning User Preferences in Non-Stationary Environments [42.785926822853746]
We introduce a novel model for online non-stationary recommendation systems.
We show that our algorithm outperforms other static algorithms even when preferences do not change over time.
arXiv Detail & Related papers (2021-01-29T10:26:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.