Strategic Representation
- URL: http://arxiv.org/abs/2206.08542v1
- Date: Fri, 17 Jun 2022 04:20:57 GMT
- Title: Strategic Representation
- Authors: Vineet Nair, Ganesh Ghalme, Inbal Talgam-Cohen, Nir Rosenfeld
- Abstract summary: Strategic machines might craft representations that manipulate their users.
We formalize this as a learning problem, and pursue algorithms for decision-making that are robust to manipulation.
Our main result is a learning algorithm that minimizes error despite strategic representations.
- Score: 20.43010800051863
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Humans have come to rely on machines for reducing excessive information to manageable representations. But this reliance can be abused -- strategic machines might craft representations that manipulate their users. How can a user make good choices based on strategic representations? We formalize this as a learning problem, and pursue algorithms for decision-making that are robust to manipulation. In our main setting of interest, the system represents attributes of an item to the user, who then decides whether or not to consume. We model this interaction through the lens of strategic classification (Hardt et al. 2016), reversed: the user, who learns, plays first; and the system, which responds, plays second. The system must respond with representations that reveal 'nothing but the truth' but need not reveal the entire truth. Thus, the user faces the problem of learning set functions under strategic subset selection, which presents distinct algorithmic and statistical challenges. Our main result is a learning algorithm that minimizes error despite strategic representations, and our theoretical analysis sheds light on the trade-off between learning effort and susceptibility to manipulation.
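To make the reversed order of play concrete, here is a minimal, hypothetical sketch of the interaction the abstract describes: the user commits to a decision rule first, and the system then best-responds by revealing a truthful subset of the item's attributes that induces consumption. All names (system_best_response, user_decides, the size budget k) are illustrative assumptions, not the paper's notation, and the paper's robust learning algorithm itself is not reproduced here.

```python
"""Minimal sketch of the strategic-representation game: the system may
reveal 'nothing but the truth' (only true attributes) but not necessarily
the whole truth. Names and the size budget k are assumptions for
illustration, not the paper's actual algorithm or notation."""
from itertools import combinations

def system_best_response(attributes, user_decides, k):
    """Reveal a truthful subset of at most k attributes, preferring any
    subset that makes the user consume; larger subsets are tried first."""
    for size in range(k, -1, -1):
        for subset in combinations(sorted(attributes), size):
            if user_decides(frozenset(subset)):
                return frozenset(subset)  # user consumes under this view
    return frozenset()  # no truthful subset persuades the user

# A toy user rule: consume iff at least two 'good' attributes are shown
# and no 'bad' one is. It is manipulable: the system simply omits the
# bad attribute, which is exactly the failure mode the paper studies.
GOOD, BAD = {"durable", "cheap", "fast"}, {"fragile"}

def user_decides(shown):
    return len(shown & GOOD) >= 2 and not (shown & BAD)

item = {"durable", "cheap", "fragile"}  # the item's true attributes
print(system_best_response(item, user_decides, k=2))
# -> frozenset({'cheap', 'durable'}); 'fragile' is withheld, not falsified
```

The toy rule fails because it treats unseen attributes as absent; the learning problem in the paper is to pick, from data, a rule over revealed subsets whose error stays small even when the system selects which subset to show.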
Related papers
- The Double-Edged Sword of Behavioral Responses in Strategic Classification: Theory and User Studies [7.695481260089599]
We propose a strategic classification model that considers behavioral biases in human responses to algorithms.
We show how misperceptions of a classifier can lead to different types of discrepancies between biased and rational agents' responses.
We show that strategic agents with behavioral biases can benefit or, perhaps unexpectedly, harm the firm compared to fully rational strategic agents.
arXiv Detail & Related papers (2024-10-23T17:42:54Z)
- Capturing the Complexity of Human Strategic Decision-Making with Machine Learning [4.308322597847064]
We conduct the largest study to date of strategic decision-making in the context of initial play in two-player matrix games.
We show that a deep neural network trained on these data predicts people's choices better than leading theories of strategic behavior.
arXiv Detail & Related papers (2024-08-15T00:39:42Z)
- Strategic Littlestone Dimension: Improved Bounds on Online Strategic Classification [22.031509365704423]
We study the problem of online binary classification in settings where strategic agents can modify their observable features to receive a positive classification.
We introduce the Strategic Littlestone Dimension, a new measure that captures the joint complexity of the hypothesis class and the manipulation graph.
We derive regret bounds in both the realizable setting, where all agents manipulate according to the same graph within the graph family, and the agnostic setting, where the manipulation graphs are chosen adversarially and are not consistently modeled by a single graph in the family (a toy best-response sketch over a manipulation graph appears after this list).
arXiv Detail & Related papers (2024-07-16T11:31:20Z)
- Learnability Gaps of Strategic Classification [68.726857356532]
We focus on a fundamental question: the learnability gaps between strategic classification and standard learning.
We provide nearly tight sample complexity and regret bounds, offering significant improvements over prior results.
Notably, our algorithm in this setting is of independent interest and can be applied to other problems such as multi-label learning.
arXiv Detail & Related papers (2024-02-29T16:09:19Z)
- User Strategization and Trustworthy Algorithms [81.82279667028423]
We show that user strategization can actually help platforms in the short term.
We then show that it corrupts platforms' data and ultimately hurts their ability to make counterfactual decisions.
arXiv Detail & Related papers (2023-12-29T16:09:42Z)
- A Study of Forward-Forward Algorithm for Self-Supervised Learning [65.268245109828]
We study the performance of forward-forward vs. backpropagation for self-supervised representation learning.
Our main finding is that while the forward-forward algorithm performs comparably to backpropagation during (self-supervised) training, the transfer performance is significantly lagging behind in all the studied settings.
arXiv Detail & Related papers (2023-09-21T10:14:53Z)
- Ideal Abstractions for Decision-Focused Learning [108.15241246054515]
We propose a method that configures the output space automatically in order to minimize the loss of decision-relevant information.
We demonstrate the method in two domains: data acquisition for deep neural network training and a closed-loop wildfire management task.
arXiv Detail & Related papers (2023-03-29T23:31:32Z)
- Learning Losses for Strategic Classification [5.812499828391904]
We take a learning theoretic perspective, focusing on the sample complexity needed to learn a good decision rule.
We analyse the sample complexity for a known graph of possible manipulations in terms of the complexity of the function class and the manipulation graph.
Using techniques from transfer learning theory, we define a similarity measure for manipulation graphs and show that learning outcomes are robust with respect to small changes in the manipulation graph.
arXiv Detail & Related papers (2022-03-25T02:26:16Z)
- Who Leads and Who Follows in Strategic Classification? [82.44386576129295]
We argue that the order of play in strategic classification is fundamentally determined by the relative frequencies at which the decision-maker and the agents adapt to each other's actions.
We show that a decision-maker with the freedom to choose their update frequency can induce learning dynamics that converge to Stackelberg equilibria with either order of play.
arXiv Detail & Related papers (2021-06-23T16:48:46Z)
- Model-free Representation Learning and Exploration in Low-rank MDPs [64.72023662543363]
We present the first model-free representation learning algorithms for low-rank MDPs.
The key algorithmic contribution is a new minimax representation learning objective.
The result can accommodate general function approximation to scale to complex environments.
arXiv Detail & Related papers (2021-02-14T00:06:54Z)
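Several of the entries above, notably the Strategic Littlestone Dimension paper, model agents that modify their observable features along a manipulation graph to obtain a positive classification. Below is a minimal sketch of that best-response step, restricted to one-hop manipulations for simplicity; the names (best_response, the adjacency-dict graph encoding) are assumptions for illustration, not any paper's API.

```python
"""Toy best response over a manipulation graph: an agent reports its true
features if accepted, otherwise any reachable (here: one-hop) neighbor
that the classifier accepts. Names are illustrative assumptions."""

def best_response(x, accepts, manipulation_graph):
    """Return the features the strategic agent reports to the classifier."""
    if accepts(x):
        return x  # truthful report already gets a positive classification
    for x_prime in manipulation_graph.get(x, []):
        if accepts(x_prime):
            return x_prime  # manipulate to an accepted neighbor
    return x  # no beneficial manipulation is reachable

# Toy example: three feature profiles; edges are feasible misreports.
graph = {"low": ["mid"], "mid": ["high"], "high": []}

def accepts(v):
    return v == "high"

print(best_response("mid", accepts, graph))  # -> "high" (manipulates)
print(best_response("low", accepts, graph))  # -> "low" (cannot reach "high")
```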
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.