Federated Learning of Socially Appropriate Agent Behaviours in Simulated
Home Environments
- URL: http://arxiv.org/abs/2403.07586v1
- Date: Tue, 12 Mar 2024 12:16:40 GMT
- Authors: Saksham Checker and Nikhil Churamani and Hatice Gunes
- Abstract summary: As social robots become increasingly integrated into daily life, ensuring their behaviours align with social norms is crucial.
It is important to explore Federated Learning (FL) settings where individual robots can learn about their unique environments.
We present a novel FL benchmark that evaluates different strategies, using multi-label regression objectives.
- Score: 6.284099600214928
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As social robots become increasingly integrated into daily life, ensuring
their behaviours align with social norms is crucial. For their widespread
open-world application, it is important to explore Federated Learning (FL)
settings where individual robots can learn about their unique environments
while also learning from each other's experiences. In this paper, we present a
novel FL benchmark that evaluates different strategies, using multi-label
regression objectives, where each client individually learns to predict the
social appropriateness of different robot actions while also sharing their
learning with others. Furthermore, by splitting the training data by context,
so that each client learns incrementally across contexts, we present
a novel Federated Continual Learning (FCL) benchmark that adapts FL-based
methods to use state-of-the-art Continual Learning (CL) methods to continually
learn socially appropriate agent behaviours under different contextual
settings. Federated Averaging (FedAvg) of weights emerges as a robust FL
strategy, while rehearsal-based FCL enables incrementally learning the social
appropriateness of robot actions across contextual splits.
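The core of the FedAvg strategy named above is a dataset-size-weighted average of client model weights, recomputed each communication round. The following is a minimal sketch of that aggregation step, assuming flattened weight vectors; the function and variable names are illustrative, not the paper's actual implementation.

```python
# Minimal FedAvg aggregation sketch: each client contributes its local
# model weights, weighted by the size of its local dataset.

def fedavg(client_weights, client_sizes):
    """Size-weighted average of client weight vectors (FedAvg)."""
    total = sum(client_sizes)
    num_params = len(client_weights[0])
    averaged = [0.0] * num_params
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            averaged[i] += w * (size / total)
    return averaged

# Three hypothetical clients with flattened weight vectors and
# local dataset sizes; the third client holds twice as much data.
clients = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [10, 10, 20]
global_weights = fedavg(clients, sizes)
print(global_weights)  # [3.5, 4.5]
```

In a full FL round, the server would broadcast `global_weights` back to all clients before the next round of local training; rehearsal-based FCL additionally keeps a small buffer of past-context samples on each client to mitigate forgetting.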
Related papers
- PersLLM: A Personified Training Approach for Large Language Models [66.16513246245401]
We propose PersLLM, integrating psychology-grounded principles of personality: social practice, consistency, and dynamic development.
We incorporate personality traits directly into the model parameters, enhancing the model's resistance to induction, promoting consistency, and supporting the dynamic evolution of personality.
arXiv Detail & Related papers (2024-07-17T08:13:22Z)
- Feature Aggregation with Latent Generative Replay for Federated Continual Learning of Socially Appropriate Robot Behaviours [6.456043270889434]
This work explores a simulated living room environment where robots need to learn the social appropriateness of their actions.
We propose Federated Root (FedRoot), a novel weight aggregation strategy which disentangles feature learning across clients.
We present a novel FL benchmark for learning the social appropriateness of different robot actions in diverse social configurations.
arXiv Detail & Related papers (2024-03-16T07:34:33Z)
- Evaluating and Improving Continual Learning in Spoken Language Understanding [58.723320551761525]
We propose an evaluation methodology that provides a unified evaluation on stability, plasticity, and generalizability in continual learning.
By employing the proposed metric, we demonstrate how introducing various knowledge distillations can improve different aspects of these three properties of the SLU model.
arXiv Detail & Related papers (2024-02-16T03:30:27Z)
- Robot Fleet Learning via Policy Merging [58.5086287737653]
We propose FLEET-MERGE to efficiently merge policies in the fleet setting.
We show that FLEET-MERGE consolidates the behavior of policies trained on 50 tasks in the Meta-World environment.
We introduce a novel robotic tool-use benchmark, FLEET-TOOLS, for fleet policy learning in compositional and contact-rich robot manipulation tasks.
arXiv Detail & Related papers (2023-10-02T17:23:51Z)
- Training Socially Aligned Language Models on Simulated Social Interactions [99.39979111807388]
Social alignment in AI systems aims to ensure that these models behave according to established societal values.
Current language models (LMs) are trained to rigidly replicate their training corpus in isolation.
This work presents a novel training paradigm that permits LMs to learn from simulated social interactions.
arXiv Detail & Related papers (2023-05-26T14:17:36Z)
- Social learning spontaneously emerges by searching optimal heuristics with deep reinforcement learning [0.0]
We employ a deep reinforcement learning model to optimize the social learning strategies of agents in a cooperative game in a multi-dimensional landscape.
We find that the agent spontaneously learns various concepts of social learning, such as copying, focusing on frequent and well-performing neighbors, self-comparison, and the importance of balancing between individual and social learning.
We demonstrate the superior performance of the reinforcement learning agent in various environments, including temporally changing environments and real social networks.
arXiv Detail & Related papers (2022-04-26T15:10:27Z)
- Learning from Heterogeneous Data Based on Social Interactions over Graphs [58.34060409467834]
This work proposes a decentralized architecture, where individual agents aim at solving a classification problem while observing streaming features of different dimensions.
We show that the proposed strategy enables the agents to learn consistently under this highly heterogeneous setting.
arXiv Detail & Related papers (2021-12-17T12:47:18Z)
- Emerging Trends in Federated Learning: From Model Fusion to Federated X Learning [65.06445195580622]
Federated learning is a new paradigm that decouples data collection and model training via multi-party computation and model aggregation.
We conduct a focused survey of federated learning in conjunction with other learning algorithms.
arXiv Detail & Related papers (2021-02-25T15:18:13Z)
- Connections between Relational Event Model and Inverse Reinforcement Learning for Characterizing Group Interaction Sequences [0.18275108630751835]
We explore previously unidentified connections between the relational event model (REM) and inverse reinforcement learning (IRL).
REM is a conventional approach to tackling such problems, whereas the application of IRL is a largely unbeaten path.
We demonstrate the special utility of IRL in characterizing group social interactions with an empirical experiment.
arXiv Detail & Related papers (2020-10-19T19:40:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.