Multi-Agent Reinforcement Learning for Microprocessor Design Space
Exploration
- URL: http://arxiv.org/abs/2211.16385v1
- Date: Tue, 29 Nov 2022 17:10:24 GMT
- Title: Multi-Agent Reinforcement Learning for Microprocessor Design Space
Exploration
- Authors: Srivatsan Krishnan, Natasha Jaques, Shayegan Omidshafiei, Dan Zhang,
Izzeddin Gur, Vijay Janapa Reddi, Aleksandra Faust
- Abstract summary: Microprocessor architects are increasingly resorting to domain-specific customization in the quest for high performance and energy efficiency.
We propose an alternative formulation that leverages Multi-Agent RL (MARL) to tackle this problem.
Our evaluation shows that the MARL formulation consistently outperforms single-agent RL baselines.
- Score: 71.95914457415624
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Microprocessor architects are increasingly resorting to domain-specific
customization in the quest for high performance and energy efficiency. As the
systems grow in complexity, fine-tuning architectural parameters across
multiple sub-systems (e.g., datapath, memory blocks in different hierarchies,
interconnects, compiler optimization, etc.) quickly results in a combinatorial
explosion of design space. This makes domain-specific customization an
extremely challenging task. Prior work explores using reinforcement learning
(RL) and other optimization methods to automatically explore the large design
space. However, these methods have traditionally relied on single-agent RL/ML
formulations. It is unclear how scalable single-agent formulations are as we
increase the complexity of the design space (e.g., full stack System-on-Chip
design). Therefore, we propose an alternative formulation that leverages
Multi-Agent RL (MARL) to tackle this problem. The key idea behind using MARL is
the observation that parameters across different sub-systems are largely
independent, allowing a decentralized role to be assigned to each agent. We test
this hypothesis by designing a domain-specific DRAM memory controller for several
workload traces. Our evaluation shows that the MARL formulation consistently
outperforms single-agent RL baselines such as Proximal Policy Optimization and
Soft Actor-Critic over different target objectives such as low power and
latency. In doing so, this work opens a pathway for new and promising
research in MARL solutions for hardware architecture search.
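The decentralized-role idea can be illustrated with a toy sketch (not the paper's implementation): each sub-system gets its own agent that tunes one parameter independently, and every agent receives the same shared reward. The parameter names, value sets, and the separable reward function below are all invented for illustration; a real setup would query an architecture simulator instead.

```python
import random

# Hypothetical design space: one agent per sub-system parameter.
SUBSYSTEMS = {
    "datapath_width": [32, 64, 128],
    "cache_size_kb": [16, 32, 64, 128],
    "queue_depth": [4, 8, 16],
}

def reward(config):
    # Toy stand-in for a simulator score (e.g., negative latency);
    # the peaks at 64/64/8 are arbitrary choices for this sketch.
    return -(abs(config["datapath_width"] - 64)
             + abs(config["cache_size_kb"] - 64)
             + abs(config["queue_depth"] - 8))

class BanditAgent:
    """One agent per sub-system: a simple epsilon-greedy bandit."""
    def __init__(self, choices, eps=0.1):
        self.choices = choices
        self.eps = eps
        self.value = {c: 0.0 for c in choices}   # running mean reward per choice
        self.count = {c: 0 for c in choices}

    def act(self):
        if random.random() < self.eps:
            return random.choice(self.choices)    # explore
        return max(self.choices, key=self.value.get)  # exploit

    def update(self, choice, r):
        self.count[choice] += 1
        self.value[choice] += (r - self.value[choice]) / self.count[choice]

random.seed(0)
agents = {name: BanditAgent(vals) for name, vals in SUBSYSTEMS.items()}
for _ in range(2000):
    config = {name: agent.act() for name, agent in agents.items()}
    r = reward(config)                 # one shared reward, broadcast to all
    for name, agent in agents.items():
        agent.update(config[name], r)

best = {name: max(a.choices, key=a.value.get) for name, a in agents.items()}
print(best)
```

Because the toy reward decomposes across sub-systems, each agent can find its own best setting despite never observing the others' choices, which is the independence assumption the MARL formulation exploits.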
Related papers
- AgentSquare: Automatic LLM Agent Search in Modular Design Space [16.659969168343082]
Large Language Models (LLMs) have led to a rapid growth of agentic systems capable of handling a wide range of complex tasks.
We introduce a new research problem: Modularized LLM Agent Search (MoLAS)
arXiv Detail & Related papers (2024-10-08T15:52:42Z)
- Towards Human-Level Understanding of Complex Process Engineering Schematics: A Pedagogical, Introspective Multi-Agent Framework for Open-Domain Question Answering [0.0]
In the chemical and process industries, Process Flow Diagrams (PFDs) and Piping and Instrumentation Diagrams (P&IDs) are critical for design, construction, and maintenance.
Recent advancements in Generative AI have shown promise in understanding and interpreting process diagrams for Visual Question Answering (VQA)
We propose a secure, on-premises enterprise solution using a hierarchical, multi-agent Retrieval Augmented Generation (RAG) framework.
arXiv Detail & Related papers (2024-08-24T19:34:04Z)
- Real-Time Image Segmentation via Hybrid Convolutional-Transformer Architecture Search [49.81353382211113]
We address the challenge of integrating multi-head self-attention into high resolution representation CNNs efficiently.
We develop a multi-target multi-branch supernet method, which fully utilizes the advantages of high-resolution features.
We present a series of models via the Hybrid Convolutional-Transformer Architecture Search (HyCTAS) method, which searches for the best hybrid combination of light-weight convolution layers and memory-efficient self-attention layers.
arXiv Detail & Related papers (2024-03-15T15:47:54Z)
- ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL [80.10358123795946]
We develop a framework for building multi-turn RL algorithms for fine-tuning large language models.
Our framework adopts a hierarchical RL approach and runs two RL algorithms in parallel.
Empirically, we find that ArCHer significantly improves efficiency and performance on agent tasks.
arXiv Detail & Related papers (2024-02-29T18:45:56Z)
- ArchGym: An Open-Source Gymnasium for Machine Learning Assisted Architecture Design [52.57999109204569]
ArchGym is an open-source framework that connects diverse search algorithms to architecture simulators.
We evaluate ArchGym across multiple vanilla and domain-specific search algorithms in designing custom memory controllers, deep neural network accelerators, and custom SoCs for AR/VR workloads.
arXiv Detail & Related papers (2023-06-15T06:41:23Z)
- Maximize to Explore: One Objective Function Fusing Estimation, Planning, and Exploration [87.53543137162488]
We propose an easy-to-implement online reinforcement learning (online RL) framework called MEX.
MEX integrates the estimation and planning components while automatically balancing exploration and exploitation.
It can outperform baselines by a stable margin in various MuJoCo environments with sparse rewards.
arXiv Detail & Related papers (2023-05-29T17:25:26Z)
- A Unified and Efficient Coordinating Framework for Autonomous DBMS Tuning [34.85351481228439]
We propose a unified coordinating framework to efficiently utilize existing ML-based agents.
We show that it can effectively utilize different ML-based agents and find better configurations, with 1.4-14.1X speedups on workload execution time.
arXiv Detail & Related papers (2023-03-10T05:27:23Z)
- Learning From Good Trajectories in Offline Multi-Agent Reinforcement Learning [98.07495732562654]
Offline multi-agent reinforcement learning (MARL) aims to learn effective multi-agent policies from pre-collected datasets.
An agent learned by offline MARL often inherits such a random policy from the dataset, jeopardizing the performance of the entire team.
We propose a novel framework called Shared Individual Trajectories (SIT) to address this problem.
arXiv Detail & Related papers (2022-11-28T18:11:26Z)
- Learning Efficient Multi-Agent Cooperative Visual Exploration [18.42493808094464]
We consider the task of visual indoor exploration with multiple agents, where the agents need to cooperatively explore the entire indoor region using as few steps as possible.
We extend the state-of-the-art single-agent RL solution, Active Neural SLAM (ANS), to the multi-agent setting by introducing a novel RL-based global-goal planner, the Spatial Coordination Planner (SCP)
SCP leverages spatial information from each individual agent in an end-to-end manner and effectively guides the agents to navigate towards different spatial goals with high exploration efficiency.
arXiv Detail & Related papers (2021-10-12T04:48:10Z)
- Integrating Distributed Architectures in Highly Modular RL Libraries [4.297070083645049]
Most popular reinforcement learning libraries advocate for highly modular agent composability.
We propose a versatile approach that allows the definition of RL agents at different scales through independent reusable components.
arXiv Detail & Related papers (2020-07-06T10:22:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.