Strategic Distribution Shift of Interacting Agents via Coupled Gradient Flows
- URL: http://arxiv.org/abs/2307.01166v3
- Date: Sun, 29 Oct 2023 04:34:39 GMT
- Title: Strategic Distribution Shift of Interacting Agents via Coupled Gradient Flows
- Authors: Lauren Conger, Franca Hoffmann, Eric Mazumdar, Lillian Ratliff
- Abstract summary: We propose a novel framework for analyzing the dynamics of distribution shift in real-world systems.
We show that our approach captures well-documented forms of distribution shifts like polarization and disparate impacts that simpler models cannot capture.
- Score: 6.064702468344376
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel framework for analyzing the dynamics of distribution shift
in real-world systems that captures the feedback loop between learning
algorithms and the distributions on which they are deployed. Prior work largely
models feedback-induced distribution shift as adversarial or via an overly
simplistic distribution-shift structure. In contrast, we propose a coupled
partial differential equation model that captures fine-grained changes in the
distribution over time by accounting for complex dynamics that arise due to
strategic responses to algorithmic decision-making, non-local endogenous
population interactions, and other exogenous sources of distribution shift. We
consider two common settings in machine learning: cooperative settings with
information asymmetries, and competitive settings where a learner faces
strategic users. For both of these settings, when the algorithm retrains via
gradient descent, we prove asymptotic convergence of the retraining procedure
to a steady-state, both in finite and in infinite dimensions, obtaining
explicit rates in terms of the model parameters. To do so, we derive new results
on the convergence of coupled PDEs that extend what is known about multi-species
systems. Empirically, we show that our approach captures well-documented forms
of distribution shifts like polarization and disparate impacts that simpler
models cannot capture.
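As a toy illustration of the coupled-gradient-flow idea (not the paper's actual PDE model), the sketch below couples a learner that retrains by gradient descent with a finite population of strategic agents descending their own costs, plus a small diffusion term standing in for exogenous shift. The one-dimensional quadratic losses, step sizes, and noise level are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Population of strategic agents (1-D features) and the learner's parameter.
x = rng.normal(loc=2.0, scale=1.0, size=500)  # agent features
theta = 0.0                                    # learner's decision parameter

eta_learner, eta_agents = 0.1, 0.05
sigma = 0.02   # diffusion strength (exogenous source of shift)
c = 0.5        # agents' cost of deviating toward the learner's rule

for t in range(2000):
    # Learner: gradient step on its loss, here squared error to the current
    # population mean (a stand-in for retraining on the shifted distribution).
    grad_theta = theta - x.mean()
    theta -= eta_learner * grad_theta

    # Agents: gradient step on their own quadratic cost (c/2)*(x - theta)^2,
    # i.e. a strategic response to the learner's current rule, plus a small
    # Brownian increment modeling exogenous diffusion.
    grad_x = c * (x - theta)
    x -= eta_agents * grad_x
    x += np.sqrt(2.0 * sigma * eta_agents) * rng.normal(size=x.size)

# Near the steady state, the learner's parameter and the population mean agree.
print(theta, x.mean())
```

In this toy example the gap between the learner's parameter and the population mean contracts at a fixed rate per step, loosely mirroring the paper's explicit convergence rates to a steady state; the interacting-PDE structure and non-local interaction terms of the actual model are not represented here.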
Related papers
- Proxy Methods for Domain Adaptation [78.03254010884783]
Proxy variables allow for adaptation to distribution shift without explicitly recovering or modeling latent variables.
We develop a two-stage kernel estimation approach to adapt to complex distribution shifts in both settings.
arXiv Detail & Related papers (2024-03-12T09:32:41Z)
- Aggregation Weighting of Federated Learning via Generalization Bound Estimation [65.8630966842025]
Federated Learning (FL) typically aggregates client model parameters using a weighting approach determined by sample proportions.
We replace the aforementioned weighting method with a new strategy that considers the generalization bounds of each local model.
arXiv Detail & Related papers (2023-11-10T08:50:28Z)
- Distributed Bayesian Learning of Dynamic States [65.7870637855531]
The proposed algorithm is a distributed Bayesian filtering task for finite-state hidden Markov models.
It can be used for sequential state estimation, as well as for modeling opinion formation over social networks under dynamic environments.
arXiv Detail & Related papers (2022-12-05T19:40:17Z)
- Towards Robust and Adaptive Motion Forecasting: A Causal Representation Perspective [72.55093886515824]
We introduce a causal formalism of motion forecasting, which casts the problem as a dynamic process with three groups of latent variables.
We devise a modular architecture that factorizes the representations of invariant mechanisms and style confounders to approximate a causal graph.
Experiment results on synthetic and real datasets show that our three proposed components significantly improve the robustness and reusability of the learned motion representations.
arXiv Detail & Related papers (2021-11-29T18:59:09Z)
- Variational Inference for Continuous-Time Switching Dynamical Systems [29.984955043675157]
We present a model based on a Markov jump process modulating a subordinated diffusion process.
We develop a new continuous-time variational inference algorithm.
We extensively evaluate our algorithm under the model assumption and for real-world examples.
arXiv Detail & Related papers (2021-09-29T15:19:51Z)
- The Gradient Convergence Bound of Federated Multi-Agent Reinforcement Learning with Efficient Communication [20.891460617583302]
The paper considers independent reinforcement learning (IRL) for collaborative decision-making in the paradigm of federated learning (FL).
FL generates excessive communication overheads between agents and a remote central server.
This paper proposes two advanced optimization schemes to improve the system's utility value.
arXiv Detail & Related papers (2021-03-24T07:21:43Z)
- Monotonic Alpha-divergence Minimisation for Variational Inference [0.0]
We introduce a novel family of iterative algorithms that carry out $\alpha$-divergence minimisation in a Variational Inference context.
They do so by ensuring a systematic decrease at each step in the $\alpha$-divergence between the variational and posterior distributions.
arXiv Detail & Related papers (2021-03-09T19:41:03Z)
- Model Fusion with Kullback--Leibler Divergence [58.20269014662046]
We propose a method to fuse posterior distributions learned from heterogeneous datasets.
Our algorithm relies on a mean-field assumption for both the fused model and the individual dataset posteriors.
arXiv Detail & Related papers (2020-07-13T03:27:45Z)
- Implicit Distributional Reinforcement Learning [61.166030238490634]
We propose an implicit distributional actor-critic (IDAC) built on two deep generator networks (DGNs) and a semi-implicit actor (SIA) powered by a flexible policy distribution.
We observe that IDAC outperforms state-of-the-art algorithms on representative OpenAI Gym environments.
arXiv Detail & Related papers (2020-07-13T02:52:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.