A decision-theoretic model for a principal-agent collaborative learning problem
- URL: http://arxiv.org/abs/2409.16068v1
- Date: Tue, 24 Sep 2024 13:08:51 GMT
- Title: A decision-theoretic model for a principal-agent collaborative learning problem
- Authors: Getachew K. Befekadu
- Abstract summary: We consider a collaborative learning framework with a principal-agent setting, in which the principal determines a set of appropriate aggregation coefficients.
The proposed framework offers advantages in terms of stability and generalization, even though neither the principal nor the agents need any knowledge of the sample distributions or of the quality of each other's datasets.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this technical note, we consider a collaborative learning framework with a principal-agent setting, in which the principal at each time step determines a set of appropriate aggregation coefficients based on how the current parameter estimates from a group of $K$ agents perform on a separate test dataset, which is not part of the agents' training datasets. The agents, who act together as a team, then update their parameter estimates using a discrete-time version of Langevin dynamics with a mean-field-like interaction term, each guided by its own training dataset. Here, we propose a decision-theoretic framework that explicitly describes how the principal progressively determines the nonnegative aggregation coefficients, summing to one, that the agents use in their mean-field-like interaction term, eventually leading them to a consensus optimal parameter estimate. Interestingly, due to the inherent feedback and cooperative behavior among the agents, the proposed framework offers advantages in terms of stability and generalization, even though neither the principal nor the agents need any knowledge of the sample distributions or of the quality of each other's datasets.
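To make the interaction between the principal and the agents concrete, the following is a minimal sketch of one round of such a scheme: each agent takes a discrete-time Langevin step on its own training loss plus a mean-field-like pull toward the coefficient-weighted average of all agents' estimates, and the principal then maps the agents' test-set losses to nonnegative coefficients summing to one. The quadratic toy loss, the step sizes, and the softmax rule for the coefficients are illustrative assumptions, not the exact construction of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def langevin_step(theta, grads, pi, eta=0.01, lam=0.5, beta=100.0):
    """One discrete-time Langevin update for all K agents.

    theta : (K, d) current parameter estimates
    grads : (K, d) gradients of each agent's own training loss
    pi    : (K,)   aggregation coefficients chosen by the principal
    The mean-field-like term pulls every agent toward the
    pi-weighted average of all agents' current estimates.
    """
    consensus = pi @ theta                            # weighted mean estimate, shape (d,)
    noise = rng.standard_normal(theta.shape)
    return (theta
            - eta * grads                             # gradient step on own training data
            - eta * lam * (theta - consensus)         # mean-field-like interaction term
            + np.sqrt(2.0 * eta / beta) * noise)      # Langevin noise

def principal_coefficients(test_losses, tau=1.0):
    """Map agents' test-set losses to nonnegative coefficients that sum to one
    (softmax of negative losses; an illustrative choice, not the paper's rule)."""
    w = np.exp(-np.asarray(test_losses) / tau)
    return w / w.sum()

# Toy run: K agents estimating a d-dimensional parameter with a shared quadratic loss.
K, d = 4, 3
theta = rng.standard_normal((K, d))
target = np.ones(d)                                   # hypothetical "true" parameter
pi = np.full(K, 1.0 / K)                              # start from uniform aggregation weights
for t in range(200):
    grads = theta - target                            # gradient of 0.5 * ||theta_k - target||^2
    theta = langevin_step(theta, grads, pi)
    test_losses = 0.5 * ((theta - target) ** 2).sum(axis=1)
    pi = principal_coefficients(test_losses)          # principal re-weights after each step
```

With the interaction strength lam > 0, the agents' estimates are pulled toward a common value, while the principal's re-weighting down-weights agents whose current estimates perform poorly on the held-out test set.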
Related papers
- Causal Influence in Federated Edge Inference [34.487472866247586]
In this paper, we consider a setting where heterogeneous agents with connectivity are performing inference using unlabeled streaming data.
In order to overcome the uncertainty, agents cooperate with each other by exchanging their local inferences with and through a fusion center.
Various scenarios reflecting different agent participation patterns and fusion center policies are investigated.
arXiv Detail & Related papers (2024-05-02T13:06:50Z)
- Using representation balancing to learn conditional-average dose responses from clustered data [5.633848204699653]
Estimating a unit's responses to interventions with an associated dose is relevant in a variety of domains.
We show the impacts of clustered data on model performance and propose an estimator, CBRNet.
arXiv Detail & Related papers (2023-09-07T14:17:44Z)
- Building a Winning Team: Selecting Source Model Ensembles using a Submodular Transferability Estimation Approach [20.86345962679122]
Estimating the transferability of publicly available pretrained models to a target task has assumed an important place for transfer learning tasks.
We propose a novel Optimal tranSport-based suBmOdular tRaNsferability metric (OSBORN) to estimate the transferability of an ensemble of models to a downstream task.
arXiv Detail & Related papers (2023-09-05T17:57:31Z)
- Rethinking Trajectory Prediction via "Team Game" [118.59480535826094]
We present a novel formulation for multi-agent trajectory prediction, which explicitly introduces the concept of interactive group consensus.
On two multi-agent settings, i.e. team sports and pedestrians, the proposed framework consistently achieves superior performance compared to existing methods.
arXiv Detail & Related papers (2022-10-17T07:16:44Z)
- Multi-Agent Imitation Learning with Copulas [102.27052968901894]
Multi-agent imitation learning aims to train multiple agents to perform tasks from demonstrations by learning a mapping between observations and actions.
In this paper, we propose to use copula, a powerful statistical tool for capturing dependence among random variables, to explicitly model the correlation and coordination in multi-agent systems.
Our proposed model is able to separately learn marginals that capture the local behavioral patterns of each individual agent, as well as a copula function that solely and fully captures the dependence structure among agents.
arXiv Detail & Related papers (2021-07-10T03:49:41Z)
- Test-time Collective Prediction [73.74982509510961]
Multiple parties in machine learning want to jointly make predictions on future test points.
Agents wish to benefit from the collective expertise of the full set of agents, but may not be willing to release their data or model parameters.
We explore a decentralized mechanism to make collective predictions at test time, leveraging each agent's pre-trained model.
arXiv Detail & Related papers (2021-06-22T18:29:58Z)
- A General Framework for Distributed Inference with Uncertain Models [14.8884251609335]
We study the problem of distributed classification with a network of heterogeneous agents.
We build upon the concept of uncertain models to incorporate the agents' uncertainty in the likelihoods.
arXiv Detail & Related papers (2020-11-20T22:17:12Z)
- Combining Task Predictors via Enhancing Joint Predictability [53.46348489300652]
We present a new predictor combination algorithm that improves the target by i) measuring the relevance of references based on their capabilities in predicting the target, and ii) strengthening such estimated relevance.
Our algorithm jointly assesses the relevance of all references by adopting a Bayesian framework.
Based on experiments on seven real-world datasets from visual attribute ranking and multi-class classification scenarios, we demonstrate that our algorithm offers a significant performance gain and broadens the application range of existing predictor combination approaches.
arXiv Detail & Related papers (2020-07-15T21:58:39Z)
- Randomized Entity-wise Factorization for Multi-Agent Reinforcement Learning [59.62721526353915]
Multi-agent settings in the real world often involve tasks with varying types and quantities of agents and non-agent entities.
Our method aims to leverage these commonalities by asking the question: "What is the expected utility of each agent when only considering a randomly selected sub-group of its observed entities?"
arXiv Detail & Related papers (2020-06-07T18:28:41Z)
- Dynamic Federated Learning [57.14673504239551]
Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments.
We consider a federated learning model where at every iteration, a random subset of available agents perform local updates based on their data.
Under a non-stationary random walk model on the true minimizer for the aggregate optimization problem, we establish that the performance of the architecture is determined by three factors, namely, the data variability at each agent, the model variability across all agents, and a tracking term that is inversely proportional to the learning rate of the algorithm.
arXiv Detail & Related papers (2020-02-20T15:00:54Z)
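The random-participation pattern described in the last entry above (at every iteration a random subset of agents performs local updates that are then aggregated) can be sketched generically as follows. The Bernoulli sampling rule, the single local gradient step, and the plain averaging are standard federated-averaging assumptions rather than details taken from that paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def federated_round(w_global, local_grad_fns, participation_prob=0.3, lr=0.05):
    """One round with random agent participation: each agent joins independently
    with probability `participation_prob`, takes a gradient step from the current
    global model on its own data, and the server averages the participants' models.
    (A generic federated-averaging sketch, not the cited paper's exact scheme.)"""
    K = len(local_grad_fns)
    active = rng.random(K) < participation_prob
    if not active.any():                              # no agent participated this round
        return w_global
    local_models = [w_global - lr * local_grad_fns[k](w_global)
                    for k in range(K) if active[k]]
    return np.mean(local_models, axis=0)

# Example: agents hold different quadratic objectives 0.5 * ||w - c_k||^2.
centers = rng.standard_normal((5, 2))
grad_fns = [lambda w, c=c: w - c for c in centers]    # gradient of each agent's objective
w = np.zeros(2)
for _ in range(100):
    w = federated_round(w, grad_fns)                  # w drifts toward the centers' mean
```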
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.