Cluster-Based Social Reinforcement Learning
- URL: http://arxiv.org/abs/2003.00627v2
- Date: Mon, 23 Mar 2020 18:46:08 GMT
- Title: Cluster-Based Social Reinforcement Learning
- Authors: Mahak Goindani, Jennifer Neville
- Abstract summary: Social Reinforcement Learning methods are useful for fake news mitigation, personalized teaching/healthcare, and viral marketing.
It is challenging to incorporate inter-agent dependencies into the models effectively due to network size and sparse interaction data.
Previous social RL approaches either ignore agent dependencies or model them in a computationally intensive manner.
- Score: 16.821802372973004
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Social Reinforcement Learning methods, which model agents in large networks,
are useful for fake news mitigation, personalized teaching/healthcare, and
viral marketing, but it is challenging to incorporate inter-agent dependencies
into the models effectively due to network size and sparse interaction data.
Previous social RL approaches either ignore agent dependencies or model them
in a computationally intensive manner. In this work, we incorporate agent
dependencies efficiently in a compact model by clustering users (based on their
payoff and contribution to the goal) and combining this with a method to easily
derive personalized agent-level policies from cluster-level policies. We also
propose a dynamic clustering approach that captures changing user behavior.
Experiments on real-world datasets illustrate that our proposed approach learns
more accurate policy estimates and converges more quickly, compared to several
baselines that do not use agent correlations or only use static clusters.
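To make the clustering idea concrete, here is a minimal sketch (not the authors' implementation; the features, cluster count, and Q-learning setup are all illustrative assumptions) of grouping agents by payoff/contribution and reading agent-level policies off a shared cluster-level policy:

```python
# Minimal sketch: cluster agents by (payoff, contribution) features, keep one
# compact Q-table per cluster, and derive each agent's policy from its
# cluster's table. All sizes and features here are hypothetical.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_agents, n_states, n_actions, n_clusters = 100, 10, 4, 5

# Hypothetical per-agent features: [payoff, contribution to the goal].
agent_features = rng.random((n_agents, 2))

# Re-running this step periodically gives a dynamic clustering that can
# track changing user behavior.
labels = KMeans(n_clusters=n_clusters, n_init=10,
                random_state=0).fit_predict(agent_features)

# One Q-table per cluster instead of one per agent keeps the model compact.
cluster_q = np.zeros((n_clusters, n_states, n_actions))

def agent_policy(agent, state, eps=0.1):
    """Epsilon-greedy agent-level policy read off the cluster-level Q."""
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(np.argmax(cluster_q[labels[agent], state]))
```

Because every member of a cluster updates the same table, sparse per-agent interaction data gets pooled, which is the efficiency argument the abstract makes.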
Related papers
- Accelerated Stochastic ExtraGradient: Mixing Hessian and Gradient Similarity to Reduce Communication in Distributed and Federated Learning [50.382793324572845]
Distributed computing involves communication between devices, which requires solving two key problems: efficiency and privacy.
In this paper, we analyze a new method that incorporates the ideas of using data similarity and clients sampling.
To address privacy concerns, we apply the technique of additional noise and analyze its impact on the convergence of the proposed method.
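A rough sketch of the two ingredients mentioned, client sampling plus additive noise for privacy; the sampling rate and noise scale are illustrative assumptions, not the paper's analysis:

```python
# Sketch: average gradients from a random subset of clients, adding Gaussian
# noise to each message for privacy. Hyperparameters are hypothetical.
import numpy as np

rng = np.random.default_rng(1)

def noisy_sampled_round(client_grads, sample_frac=0.3, sigma=0.01):
    """One aggregation round with client sampling and additive noise."""
    n = len(client_grads)
    idx = rng.choice(n, size=max(1, int(sample_frac * n)), replace=False)
    noisy = [client_grads[i] + sigma * rng.normal(size=client_grads[i].shape)
             for i in idx]
    return np.mean(noisy, axis=0)

grads = [rng.normal(size=5) for _ in range(10)]  # toy client gradients
update = noisy_sampled_round(grads)
```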
arXiv Detail & Related papers (2024-09-22T00:49:10Z)
- Causal Coordinated Concurrent Reinforcement Learning [8.654978787096807]
We propose a novel algorithmic framework for data sharing and coordinated exploration, for learning more data-efficient and better-performing policies in a concurrent reinforcement learning setting.
Our algorithm leverages a causal inference method, the Additive Noise Model - Mixture Model (ANM-MM), to extract model parameters governing individual differentials via independence enforcement.
We propose a new data sharing scheme based on a similarity measure of the extracted model parameters and demonstrate superior learning speeds on a set of autoregressive, pendulum and cart-pole swing-up tasks.
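A toy sketch of the data-sharing scheme's shape; the parameter vectors below are placeholders standing in for the ANM-MM extraction, and the distance threshold is an assumption:

```python
# Sketch: pool experience only between agents whose extracted model
# parameters are similar. ANM-MM itself is not implemented here.
import numpy as np

rng = np.random.default_rng(2)
n_agents = 6
params = rng.normal(size=(n_agents, 3))   # stands in for ANM-MM output
buffers = {i: [(rng.random(4), int(rng.integers(2)), float(rng.random()))
               for _ in range(10)] for i in range(n_agents)}  # (s, a, r)

def shared_buffer(i, radius=1.0):
    """Agent i's data plus data from agents with nearby parameters."""
    pool = list(buffers[i])
    for j in range(n_agents):
        if j != i and np.linalg.norm(params[i] - params[j]) < radius:
            pool.extend(buffers[j])
    return pool
```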
arXiv Detail & Related papers (2024-01-31T17:20:28Z)
- Dynamic Clustering and Cluster Contrastive Learning for Unsupervised Person Re-identification [29.167783500369442]
Unsupervised Re-ID methods aim at learning robust and discriminative features from unlabeled data.
We propose a dynamic clustering and cluster contrastive learning (DCCC) method.
Experiments on several widely used public datasets validate the effectiveness of our proposed DCCC.
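A minimal sketch of the cluster-contrastive ingredient: pseudo-labels from k-means, centroids as a memory, and an InfoNCE-style loss. The hyperparameters and the centroid construction are assumptions, not DCCC's exact design:

```python
# Sketch: cluster features to get pseudo-labels, then pull each feature
# toward its own cluster centroid with a contrastive loss.
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

feats = F.normalize(torch.randn(256, 128), dim=1)   # extracted features
labels = torch.as_tensor(
    KMeans(n_clusters=16, n_init=10,
           random_state=0).fit_predict(feats.numpy()),
    dtype=torch.long)

# Cluster memory: the normalized mean feature of each cluster.
centroids = F.normalize(
    torch.stack([feats[labels == c].mean(0) for c in range(16)]), dim=1)

def cluster_contrastive_loss(f, y, tau=0.05):
    """Cross-entropy over similarities to every centroid (InfoNCE-style)."""
    return F.cross_entropy(f @ centroids.t() / tau, y)

loss = cluster_contrastive_loss(feats, labels)
```

Re-running the clustering as features improve during training is what makes the clustering dynamic.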
arXiv Detail & Related papers (2023-03-13T01:56:53Z)
- Fully Decentralized Model-based Policy Optimization for Networked Systems [23.46407780093797]
This work aims to improve data efficiency of multi-agent control by model-based learning.
We consider networked systems where agents are cooperative and communicate only locally with their neighbors.
In our method, each agent learns a dynamics model to predict future states and broadcasts its predictions to its neighbors; the policies are then trained on the model rollouts.
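A toy sketch of one communication round under illustrative assumptions (linear local models and a ring topology, neither taken from the paper):

```python
# Sketch: each agent predicts its next state with its own learned model and
# sends the prediction only to its graph neighbors; policies would then be
# trained on rollouts of these local models.
import numpy as np

rng = np.random.default_rng(3)
n_agents, d = 4, 3
neighbors = {i: [(i - 1) % n_agents, (i + 1) % n_agents]
             for i in range(n_agents)}                  # ring topology
models = [rng.normal(scale=0.1, size=(d, d)) for _ in range(n_agents)]
states = [rng.random(d) for _ in range(n_agents)]

predictions = [models[i] @ states[i] for i in range(n_agents)]  # local models
inbox = {i: [predictions[j] for j in neighbors[i]]              # local comms
         for i in range(n_agents)}
```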
arXiv Detail & Related papers (2022-07-13T23:52:14Z)
- Federated Learning Aggregation: New Robust Algorithms with Guarantees [63.96013144017572]
Federated learning has been recently proposed for distributed model training at the edge.
This paper presents a complete general mathematical convergence analysis to evaluate aggregation strategies in a federated learning framework.
We derive novel aggregation algorithms which are able to modify their model architecture by differentiating client contributions according to the value of their losses.
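One plausible reading of loss-based client differentiation, sketched below; the softmax weighting is an illustrative choice, not the paper's exact rule:

```python
# Sketch: weight each client's model by its reported loss, so that
# higher-loss clients contribute less to the aggregate.
import numpy as np

def aggregate(client_models, client_losses, temperature=1.0):
    """Softmax-weighted average that down-weights high-loss clients."""
    w = np.exp(-np.asarray(client_losses, dtype=float) / temperature)
    w /= w.sum()
    return sum(wi * m for wi, m in zip(w, client_models))

models = [np.random.randn(5) for _ in range(3)]   # flattened client models
global_model = aggregate(models, client_losses=[0.2, 0.9, 0.4])
```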
arXiv Detail & Related papers (2022-05-22T16:37:53Z)
- Personalized Federated Learning with Multiple Known Clusters [20.585114235701603]
We consider the problem of personalized federated learning when there are known cluster structures within users.
An intuitive approach would be to regularize the parameters so that users in the same cluster share similar model weights.
We develop an algorithm that allows each cluster to communicate independently and derive the convergence results.
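The regularization idea in the summary can be sketched as follows; the quadratic penalty and its strength are assumptions:

```python
# Sketch: each user's objective is its local loss plus a pull toward the
# average weights of the user's known cluster.
import numpy as np

def personalized_objective(w_user, local_loss, w_cluster_mean, lam=0.1):
    """Local loss plus a quadratic penalty toward the cluster mean."""
    return local_loss(w_user) + lam * np.sum((w_user - w_cluster_mean) ** 2)

w = np.zeros(3)
obj = personalized_objective(w, lambda v: np.sum(v ** 2),
                             w_cluster_mean=np.ones(3))
```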
arXiv Detail & Related papers (2022-04-28T16:32:29Z)
- Robust and Efficient Aggregation for Distributed Learning [37.203175053625245]
Distributed learning schemes based on averaging are known to be susceptible to outliers.
A single malicious agent is able to drive an averaging-based distributed learning algorithm to an arbitrarily poor model.
This has motivated the development of robust aggregation schemes, which are based on variations of the median and trimmed mean.
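The contrast between the fragile mean and the robust aggregators named above is easy to see in a small sketch:

```python
# Sketch: one malicious update ruins the mean, while the coordinate-wise
# median and the trimmed mean stay near the honest updates.
import numpy as np
from scipy.stats import trim_mean

honest = np.random.randn(10, 5)
updates = np.vstack([honest, 1e6 * np.ones((1, 5))])  # one malicious agent

mean_agg = updates.mean(axis=0)                # dragged far from the truth
median_agg = np.median(updates, axis=0)        # robust to the outlier
trimmed_agg = trim_mean(updates, 0.1, axis=0)  # drop 10% from each tail
```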
arXiv Detail & Related papers (2022-04-01T17:17:41Z)
- Distributed Adaptive Learning Under Communication Constraints [54.22472738551687]
This work examines adaptive distributed learning strategies designed to operate under communication constraints.
We consider a network of agents that must solve an online optimization problem from continual observation of streaming data.
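A toy sketch of one adapt-then-combine round under a communication constraint; the one-bit quantizer and ring topology are illustrative assumptions, not the paper's scheme:

```python
# Sketch: agents take a local gradient step on streaming data ("adapt"),
# then average compressed copies of their neighbors' iterates ("combine").
import numpy as np

rng = np.random.default_rng(4)
n, d = 5, 4
x = rng.normal(size=(n, d))                       # agent iterates
neighbors = {i: [(i - 1) % n, i, (i + 1) % n] for i in range(n)}

def stream_grad(xi):
    """Gradient from one fresh least-squares observation."""
    h = rng.normal(size=d)
    y = h @ np.ones(d)                            # hypothetical target
    return (h @ xi - y) * h

def quantize(v):
    """1-bit-per-coordinate message: sign times one shared scale."""
    return np.sign(v) * np.mean(np.abs(v))

for _ in range(3):
    x = x - 0.1 * np.stack([stream_grad(x[i]) for i in range(n)])  # adapt
    x = np.stack([np.mean([quantize(x[j]) for j in neighbors[i]], axis=0)
                  for i in range(n)])                              # combine
```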
arXiv Detail & Related papers (2021-12-03T19:23:48Z)
- DEALIO: Data-Efficient Adversarial Learning for Imitation from Observation [57.358212277226315]
In imitation learning from observation (IfO), a learning agent seeks to imitate a demonstrating agent using only observations of the demonstrated behavior, without access to the control signals generated by the demonstrator.
Recent methods based on adversarial imitation learning have led to state-of-the-art performance on IfO problems, but they typically suffer from high sample complexity due to a reliance on data-inefficient, model-free reinforcement learning algorithms.
This issue makes them impractical to deploy in real-world settings, where gathering samples can incur high costs in terms of time, energy, and risk.
We propose a more data-efficient IfO algorithm.
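The defining constraint of IfO, no demonstrator actions, shows up directly in the adversarial objective; a sketch, with the network size as an assumption:

```python
# Sketch: a GAIL-style discriminator for IfO scores state transitions
# (s, s') only, since the demonstrator's actions are unavailable.
import torch
import torch.nn as nn

obs_dim = 8
disc = nn.Sequential(nn.Linear(2 * obs_dim, 64), nn.ReLU(), nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()

def discriminator_loss(expert_s, expert_s2, agent_s, agent_s2):
    """Expert transitions labeled 1, agent transitions labeled 0."""
    e = disc(torch.cat([expert_s, expert_s2], dim=-1))
    a = disc(torch.cat([agent_s, agent_s2], dim=-1))
    return bce(e, torch.ones_like(e)) + bce(a, torch.zeros_like(a))

loss = discriminator_loss(torch.randn(32, obs_dim), torch.randn(32, obs_dim),
                          torch.randn(32, obs_dim), torch.randn(32, obs_dim))
```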
arXiv Detail & Related papers (2021-03-31T23:46:32Z)
- Deep Interactive Bayesian Reinforcement Learning via Meta-Learning [63.96201773395921]
The optimal adaptive behaviour under uncertainty over the other agents' strategies can be computed using the Interactive Bayesian Reinforcement Learning framework.
We propose to meta-learn approximate belief inference and Bayes-optimal behaviour for a given prior.
We show empirically that our approach outperforms existing methods that use a model-free approach, sample from the approximate posterior, maintain memory-free models of others, or do not fully utilise the known structure of the environment.
arXiv Detail & Related papers (2021-01-11T13:25:13Z)
- Multi-Agent Interactions Modeling with Correlated Policies [53.38338964628494]
In this paper, we cast the multi-agent interactions modeling problem into a multi-agent imitation learning framework.
We develop a Decentralized Adversarial Imitation Learning algorithm with Correlated policies (CoDAIL).
Various experiments demonstrate that CoDAIL can better regenerate complex interactions close to the demonstrators.
arXiv Detail & Related papers (2020-01-04T17:31:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.