Collaborative Algorithms for Online Personalized Mean Estimation
- URL: http://arxiv.org/abs/2208.11530v1
- Date: Wed, 24 Aug 2022 13:23:26 GMT
- Title: Collaborative Algorithms for Online Personalized Mean Estimation
- Authors: Mahsa Asadi, Aurélien Bellet, Odalric-Ambrym Maillard, Marc Tommasi
- Abstract summary: We study the case where some distributions have the same mean, and the agents are allowed to actively query information from other agents.
The goal is to design an algorithm that enables each agent to improve its mean estimate thanks to communication with other agents.
We introduce a novel collaborative strategy to solve this online personalized mean estimation problem.
- Score: 12.875154616215305
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider an online estimation problem involving a set of agents. Each
agent has access to a (personal) process that generates samples from a
real-valued distribution and seeks to estimate its mean. We study the case
where some of the distributions have the same mean, and the agents are allowed
to actively query information from other agents. The goal is to design an
algorithm that enables each agent to improve its mean estimate thanks to
communication with other agents. The means, as well as the number of
distributions with the same mean, are unknown, which makes the task nontrivial. We
introduce a novel collaborative strategy to solve this online personalized mean
estimation problem. We analyze its time complexity and introduce variants that
enjoy good performance in numerical experiments. We also extend our approach to
the setting where clusters of agents with similar means seek to estimate the
mean of their cluster.
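The abstract's core idea (agents with unknown means pooling samples from peers whose distributions appear to share the same mean) can be illustrated with a minimal, self-contained sketch. This is not the paper's actual algorithm: the `Agent` class, the Hoeffding-style `conf_radius` helper, and the confidence-interval overlap test are illustrative assumptions standing in for the paper's collaborative strategy.

```python
import math
import random

def conf_radius(n, delta=0.05, sigma=1.0):
    """Hoeffding-style confidence radius for the mean of n sub-Gaussian samples."""
    return sigma * math.sqrt(2.0 * math.log(2.0 / delta) / n)

class Agent:
    """An agent with a personal sampling process (here: Gaussian, unknown mean)."""
    def __init__(self, mu, sigma=1.0, seed=0):
        self.mu, self.sigma = mu, sigma
        self.rng = random.Random(seed)
        self.samples = []

    def sample(self):
        self.samples.append(self.rng.gauss(self.mu, self.sigma))

    def mean(self):
        return sum(self.samples) / len(self.samples)

def collaborative_estimate(agent, others, delta=0.05):
    """Pool samples from peers whose confidence intervals overlap the agent's own.

    If two agents share the same true mean, their intervals overlap with high
    probability, so pooling their samples yields a lower-variance estimate;
    agents with clearly different means are excluded.
    """
    pooled = list(agent.samples)
    r_self = conf_radius(len(agent.samples), delta)
    for other in others:
        r_other = conf_radius(len(other.samples), delta)
        # Keep 'other' only if the empirical means are statistically indistinguishable.
        if abs(agent.mean() - other.mean()) <= r_self + r_other:
            pooled.extend(other.samples)
    return sum(pooled) / len(pooled)
```

In a simulation with two agents sharing mean 0 and one agent with mean 5, the first agent's pooled estimate draws on roughly twice its own sample count while ignoring the outlier agent, which is the variance-reduction effect the abstract describes.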
Related papers
- Networked Communication for Mean-Field Games with Function Approximation and Empirical Mean-Field Estimation [59.01527054553122]
Decentralised agents can learn equilibria in Mean-Field Games from a single, non-episodic run of the empirical system.
We introduce function approximation to the existing setting, drawing on the Munchausen Online Mirror Descent method.
We additionally provide new algorithms that allow agents to estimate the global empirical distribution based on a local neighbourhood.
arXiv Detail & Related papers (2024-08-21T13:32:46Z)
- Scalable Decentralized Algorithms for Online Personalized Mean Estimation [12.002609934938224]
This study focuses on a simplified version of the overarching problem, where each agent collects samples from a real-valued distribution over time to estimate its mean.
We introduce two collaborative mean estimation algorithms: one draws inspiration from belief propagation, while the other employs a consensus-based approach.
arXiv Detail & Related papers (2024-02-20T08:30:46Z)
- Adaptive Crowdsourcing Via Self-Supervised Learning [20.393114559367202]
Common crowdsourcing systems average estimates of a latent quantity of interest provided by many crowdworkers to produce a group estimate.
We develop a new approach -- predict-each-worker -- that leverages self-supervised learning and a novel aggregation scheme.
arXiv Detail & Related papers (2024-01-24T05:57:36Z)
- Distributed Bayesian Estimation in Sensor Networks: Consensus on Marginal Densities [15.038649101409804]
We derive a distributed provably-correct algorithm in the functional space of probability distributions over continuous variables.
We leverage these results to obtain new distributed estimators restricted to subsets of variables observed by individual agents.
This relates to applications such as cooperative localization and federated learning, where the data collected at any agent depends on a subset of all variables of interest.
arXiv Detail & Related papers (2023-12-02T21:10:06Z)
- Robust Online and Distributed Mean Estimation Under Adversarial Data Corruption [1.9199742103141069]
We study robust mean estimation in an online and distributed scenario in the presence of adversarial data attacks.
We provide error bounds and show that the estimates produced by our algorithms converge to the true mean.
arXiv Detail & Related papers (2022-09-17T16:36:21Z)
- Byzantine-Robust Online and Offline Distributed Reinforcement Learning [60.970950468309056]
We consider a distributed reinforcement learning setting where multiple agents explore the environment and communicate their experiences through a central server.
An $\alpha$-fraction of the agents are adversarial and can report arbitrary fake information.
We seek to identify a near-optimal policy for the underlying Markov decision process in the presence of these adversarial agents.
arXiv Detail & Related papers (2022-06-01T00:44:53Z)
- Optimal Clustering with Bandit Feedback [57.672609011609886]
This paper considers the problem of online clustering with bandit feedback.
It includes a novel stopping rule for sequential testing that circumvents the need to solve any NP-hard weighted clustering problem as its subroutines.
We show through extensive simulations on synthetic and real-world datasets that BOC's performance matches the lower bound, and significantly outperforms a non-adaptive baseline algorithm.
arXiv Detail & Related papers (2022-02-09T06:05:05Z)
- Leveraging Ensembles and Self-Supervised Learning for Fully-Unsupervised Person Re-Identification and Text Authorship Attribution [77.85461690214551]
Learning from fully-unlabeled data is challenging in Multimedia Forensics problems, such as Person Re-Identification and Text Authorship Attribution.
Recent self-supervised learning methods have been shown to be effective on fully-unlabeled data in cases where the underlying classes have significant semantic differences.
We propose a strategy to tackle Person Re-Identification and Text Authorship Attribution by enabling learning from unlabeled data even when samples from different classes are not prominently diverse.
arXiv Detail & Related papers (2022-02-07T13:08:11Z) - Explaining Reinforcement Learning Policies through Counterfactual
Trajectories [147.7246109100945]
A human developer must validate that an RL agent will perform well at test-time.
Our method conveys how the agent performs under distribution shifts by showing the agent's behavior across a wider trajectory distribution.
In a user study, we demonstrate that our method enables users to score better than baseline methods on one of two agent validation tasks.
arXiv Detail & Related papers (2022-01-29T00:52:37Z) - Randomized Entity-wise Factorization for Multi-Agent Reinforcement
Learning [59.62721526353915]
Multi-agent settings in the real world often involve tasks with varying types and quantities of agents and non-agent entities.
Our method aims to leverage these commonalities by asking the question: "What is the expected utility of each agent when only considering a randomly selected sub-group of its observed entities?"
arXiv Detail & Related papers (2020-06-07T18:28:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.