MARLeME: A Multi-Agent Reinforcement Learning Model Extraction Library
- URL: http://arxiv.org/abs/2004.07928v1
- Date: Thu, 16 Apr 2020 20:27:38 GMT
- Title: MARLeME: A Multi-Agent Reinforcement Learning Model Extraction Library
- Authors: Dmitry Kazhdan, Zohreh Shams, Pietro Liò
- Abstract summary: Symbolic models offer a high degree of interpretability, well-defined properties, and verifiable behaviour.
They can be used to inspect and better understand the underlying MARL system and corresponding MARL agents.
- Score: 0.43830114853179497
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-Agent Reinforcement Learning (MARL) encompasses a powerful class of
methodologies that have been applied in a wide range of fields. An effective
way to further empower these methodologies is to develop libraries and tools
that could expand their interpretability and explainability. In this work, we
introduce MARLeME: a MARL model extraction library, designed to improve
explainability of MARL systems by approximating them with symbolic models.
Symbolic models offer a high degree of interpretability, well-defined
properties, and verifiable behaviour. Consequently, they can be used to inspect
and better understand the underlying MARL system and corresponding MARL agents,
as well as to replace some or all of the agents that are particularly safety- and
security-critical.
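To make the extraction idea concrete, the sketch below approximates a trained agent's policy with an interpretable surrogate, here a shallow decision tree fitted to observation-action pairs collected from rollouts (one such model could be extracted per agent). This is a minimal illustration of the general technique, not MARLeME's actual API; `env` and `trained_agent` are hypothetical stand-ins for a user's own MARL setup.

```python
# Minimal sketch of symbolic model extraction (not MARLeME's API): approximate a
# trained agent's policy with an interpretable decision tree fitted to
# (observation, action) pairs. `env` and `trained_agent` are hypothetical stand-ins.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def collect_trajectories(env, trained_agent, episodes=50):
    """Roll out the trained agent and record observations and the actions it took."""
    obs_buf, act_buf = [], []
    for _ in range(episodes):
        obs = env.reset()
        done = False
        while not done:
            action = trained_agent.act(obs)          # the policy we want to explain
            obs_buf.append(np.asarray(obs, dtype=float))
            act_buf.append(action)
            obs, _, done, _ = env.step(action)
    return np.stack(obs_buf), np.asarray(act_buf)

def extract_symbolic_policy(observations, actions, max_depth=4):
    """Fit a small decision tree that imitates the agent; the depth bound keeps it readable."""
    tree = DecisionTreeClassifier(max_depth=max_depth)
    tree.fit(observations, actions)
    return tree

# Usage with your own env/agent objects:
# obs, acts = collect_trajectories(env, trained_agent)
# symbolic_policy = extract_symbolic_policy(obs, acts)
# print(export_text(symbolic_policy))   # human-readable rules approximating the agent
```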
Related papers
- Multi-agent Reinforcement Learning for Dynamic Dispatching in Material Handling Systems [5.050348337816326]
This paper proposes a multi-agent reinforcement learning (MARL) approach to learn dynamic dispatching strategies.
To benchmark our method, we developed a material handling environment that reflects the complexities of an actual system.
arXiv Detail & Related papers (2024-09-27T03:57:54Z)
- LLAVADI: What Matters For Multimodal Large Language Models Distillation [77.73964744238519]
In this work, we do not propose a new efficient model structure or train small-scale MLLMs from scratch.
Our studies involve training strategies, model choices, and distillation algorithms in the knowledge distillation process.
With proper strategies, evaluated across different benchmarks, even a 2.7B small-scale model can perform on par with larger models of 7B or 13B parameters (a generic distillation objective is sketched below).
arXiv Detail & Related papers (2024-07-28T06:10:47Z)
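For context on the kind of objective such distillation studies compare, the snippet below shows a standard temperature-scaled knowledge-distillation loss. It is a generic sketch rather than LLAVADI's specific recipe, and the tensor shapes are placeholders.

```python
# Generic knowledge-distillation objective, as commonly used in studies like the one
# above; not LLAVADI's exact training recipe, just the standard loss shape.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend soft-target KL (teacher -> student) with the usual cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                          # rescale to compensate for the temperature
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Example with random tensors standing in for a small student and a larger teacher:
student_logits = torch.randn(4, 32000)
teacher_logits = torch.randn(4, 32000)
labels = torch.randint(0, 32000, (4,))
print(distillation_loss(student_logits, teacher_logits, labels))
```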
- Representation Learning For Efficient Deep Multi-Agent Reinforcement Learning [10.186029242664931]
We present MAPO-LSO, which applies a form of comprehensive representation learning devised to supplement MARL training.
Specifically, MAPO-LSO proposes a multi-agent extension of transition dynamics reconstruction and self-predictive learning.
Empirical results show that MAPO-LSO achieves notable improvements in sample efficiency and learning performance compared to its vanilla MARL counterpart.
arXiv Detail & Related papers (2024-06-05T03:11:44Z)
- LLM Inference Unveiled: Survey and Roofline Model Insights [62.92811060490876]
Large Language Model (LLM) inference is rapidly evolving, presenting a unique blend of opportunities and challenges.
Our survey stands out from traditional literature reviews by not only summarizing the current state of research but also by introducing a framework based on the roofline model.
This framework identifies the bottlenecks when deploying LLMs on hardware devices and provides a clear understanding of practical problems (a toy roofline calculation is sketched below).
arXiv Detail & Related papers (2024-02-26T07:33:05Z)
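The roofline model referenced in the entry above bounds a kernel's attainable throughput by the minimum of peak compute and memory bandwidth times arithmetic intensity (FLOPs per byte moved). The snippet below evaluates that bound for two hypothetical workloads; the hardware numbers are illustrative placeholders, not figures from the survey.

```python
# Roofline bound: attainable FLOP/s is capped by either peak compute or memory
# bandwidth times arithmetic intensity. Hardware numbers are illustrative placeholders.

def roofline_bound(peak_flops, mem_bandwidth, flops, bytes_moved):
    intensity = flops / bytes_moved                  # FLOPs per byte
    return min(peak_flops, mem_bandwidth * intensity)

PEAK_FLOPS = 300e12        # 300 TFLOP/s peak compute (hypothetical accelerator)
MEM_BW = 2e12              # 2 TB/s memory bandwidth

# Decode-time attention is typically memory-bound: few FLOPs per byte of KV-cache read.
attn_flops, attn_bytes = 2e9, 1e9
print("attention bound:", roofline_bound(PEAK_FLOPS, MEM_BW, attn_flops, attn_bytes) / 1e12, "TFLOP/s")

# A large batched matmul is compute-bound: many FLOPs per byte of weights read.
mm_flops, mm_bytes = 4e12, 2e9
print("matmul bound:   ", roofline_bound(PEAK_FLOPS, MEM_BW, mm_flops, mm_bytes) / 1e12, "TFLOP/s")
```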
- Knowledge Fusion of Large Language Models [73.28202188100646]
This paper introduces the notion of knowledge fusion for large language models (LLMs).
We externalize their collective knowledge and unique strengths, thereby elevating the capabilities of the target model beyond those of any individual source LLM.
Our findings confirm that the fusion of LLMs can improve the performance of the target model across a range of capabilities such as reasoning, commonsense, and code generation.
arXiv Detail & Related papers (2024-01-19T05:02:46Z)
- ESP: Exploiting Symmetry Prior for Multi-Agent Reinforcement Learning [22.733348449818838]
Multi-agent reinforcement learning (MARL) has achieved promising results in recent years.
This paper proposes a framework for exploiting prior knowledge by integrating data augmentation and a well-designed consistency loss (a toy symmetry-consistency loss is sketched below).
arXiv Detail & Related papers (2023-07-30T09:49:05Z)
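One way a symmetry-consistency term of the kind described in the ESP entry can be written is sketched below, assuming a known observation transform paired with a matching action permutation. This is a generic illustration, not the paper's exact formulation; the reflection transform and the action permutation are hypothetical examples.

```python
# Generic symmetry-consistency loss in the spirit of the ESP entry (a sketch, not the
# paper's formulation): transform the observation with a known symmetry and require the
# policy on the transformed input to match the correspondingly permuted action
# distribution on the original input.
import torch
import torch.nn as nn
import torch.nn.functional as F

def reflect_observation(obs):
    """Hypothetical symmetry transform: mirror the feature axis, standing in for a
    domain-specific symmetry such as reflecting agent positions."""
    return torch.flip(obs, dims=[-1])

def symmetry_consistency_loss(policy, obs, action_perm):
    """KL divergence between the policy on the reflected observation and the permuted
    (detached) action distribution on the original observation."""
    target = F.softmax(policy(obs), dim=-1)[:, action_perm].detach()
    logits_aug = policy(reflect_observation(obs))
    return F.kl_div(F.log_softmax(logits_aug, dim=-1), target, reduction="batchmean")

# Toy usage: 4 discrete actions where the reflection swaps actions 0 ("left") and 1 ("right").
policy = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 4))
obs = torch.randn(16, 8)
action_perm = torch.tensor([1, 0, 2, 3])
print(symmetry_consistency_loss(policy, obs, action_perm))
```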
- MA2CL: Masked Attentive Contrastive Learning for Multi-Agent Reinforcement Learning [128.19212716007794]
We propose an effective framework called Multi-Agent Masked Attentive Contrastive Learning (MA2CL).
MA2CL encourages learning representation to be both temporal and agent-level predictive by reconstructing the masked agent observation in latent space.
Our method significantly improves the performance and sample efficiency of different MARL algorithms and outperforms other methods in various vision-based and state-based scenarios (a toy version of the masked latent reconstruction step is sketched below).
arXiv Detail & Related papers (2023-06-03T05:32:19Z)
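The sketch below illustrates the masked-agent reconstruction idea from the MA2CL entry: one agent's latent is replaced by a mask token, the agents attend to each other, and the masked slot is regressed onto its original latent. The module sizes and architecture are illustrative assumptions, not the paper's.

```python
# Rough sketch of masked-agent reconstruction (illustrative assumptions, not MA2CL's
# architecture): mask one agent's latent, let agents attend to each other, and regress
# the masked slot onto its original latent.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedAgentPredictor(nn.Module):
    def __init__(self, obs_dim=10, latent_dim=32, n_heads=4):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, latent_dim)            # shared observation encoder
        self.mask_token = nn.Parameter(torch.zeros(latent_dim))  # learned placeholder latent
        self.attn = nn.MultiheadAttention(latent_dim, n_heads, batch_first=True)

    def forward(self, obs, masked_agent):
        # obs: (batch, n_agents, obs_dim)
        latents = self.encoder(obs)
        target = latents[:, masked_agent].detach()               # stop-gradient target
        masked = latents.clone()
        masked[:, masked_agent] = self.mask_token                # hide the chosen agent
        recon, _ = self.attn(masked, masked, masked)             # agents attend to teammates
        return F.mse_loss(recon[:, masked_agent], target)        # reconstruct in latent space

# Toy usage: a batch of 16 observations from 3 agents, masking agent 1.
model = MaskedAgentPredictor()
obs = torch.randn(16, 3, 10)
print(model(obs, masked_agent=1))
```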
- A Survey on Large Language Models for Recommendation [77.91673633328148]
Large Language Models (LLMs) have emerged as powerful tools in the field of Natural Language Processing (NLP).
This survey presents a taxonomy that categorizes these models into two major paradigms: Discriminative LLMs for Recommendation (DLLM4Rec) and Generative LLMs for Recommendation (GLLM4Rec).
arXiv Detail & Related papers (2023-05-31T13:51:26Z)
- MABL: Bi-Level Latent-Variable World Model for Sample-Efficient Multi-Agent Reinforcement Learning [43.30657890400801]
We propose a novel model-based MARL algorithm, MABL, that learns a bi-level latent-variable world model from high-dimensional inputs.
For each agent, MABL learns a global latent state at the upper level, which is used to inform the learning of an agent latent state at the lower level.
MABL surpasses state-of-the-art multi-agent latent-variable world models in both sample efficiency and overall performance (a rough sketch of such a bi-level latent encoder follows this entry).
arXiv Detail & Related papers (2023-04-12T17:46:23Z)
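As a rough illustration of the bi-level latent structure described in the MABL entry, the sketch below computes a permutation-invariant global latent from all agents and uses it to condition each agent's own latent. The dimensions and module choices are placeholders, not MABL's model.

```python
# Minimal sketch of a bi-level latent encoder in the spirit of the MABL entry
# (placeholder sizes and modules, not the paper's model): a global latent pooled over
# all agents conditions each agent's own latent.
import torch
import torch.nn as nn

class BiLevelLatentEncoder(nn.Module):
    def __init__(self, obs_dim=10, global_dim=32, agent_dim=16):
        super().__init__()
        self.global_enc = nn.Sequential(nn.Linear(obs_dim, global_dim), nn.ReLU())
        self.agent_enc = nn.Linear(obs_dim + global_dim, agent_dim)

    def forward(self, obs):
        # obs: (batch, n_agents, obs_dim)
        z_global = self.global_enc(obs).mean(dim=1)                  # upper level: pool over agents
        z_rep = z_global.unsqueeze(1).expand(-1, obs.size(1), -1)
        z_agents = self.agent_enc(torch.cat([obs, z_rep], dim=-1))   # lower level: per-agent latent
        return z_global, z_agents

# Toy usage: 8 samples, 3 agents, 10-dimensional observations.
enc = BiLevelLatentEncoder()
z_global, z_agents = enc(torch.randn(8, 3, 10))
print(z_global.shape, z_agents.shape)  # (8, 32) and (8, 3, 16)
```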
- MARLlib: A Scalable and Efficient Multi-agent Reinforcement Learning Library [82.77446613763809]
We present MARLlib, a library designed to offer fast development for multi-agent tasks and algorithm combinations.
MARLlib can effectively disentangle the intertwined nature of the multi-agent task and the learning process of the algorithm.
The library's source code is publicly accessible on GitHub.
arXiv Detail & Related papers (2022-10-11T03:11:12Z)
- Model-based Multi-agent Reinforcement Learning: Recent Progress and Prospects [23.347535672670688]
Multi-Agent Reinforcement Learning (MARL) tackles sequential decision-making problems involving multiple participants.
MARL requires a tremendous number of samples for effective training.
Model-based methods have been shown to achieve provable advantages in sample efficiency.
arXiv Detail & Related papers (2022-03-20T17:24:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.