MaskPlace: Fast Chip Placement via Reinforced Visual Representation Learning
- URL: http://arxiv.org/abs/2211.13382v1
- Date: Thu, 24 Nov 2022 02:22:09 GMT
- Title: MaskPlace: Fast Chip Placement via Reinforced Visual Representation Learning
- Authors: Yao Lai, Yao Mu, Ping Luo
- Abstract summary: This work presents MaskPlace to automatically generate a valid chip layout design within a few hours.
It recasts placement as a problem of learning pixel-level visual representation to comprehensively describe millions of modules on a chip.
It outperforms recent methods that represent a chip as a hypergraph.
- Score: 18.75057105112443
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Placement is an essential task in modern chip design, aiming at placing
millions of circuit modules on a 2D chip canvas. Unlike the human-centric
solution, which requires months of intense effort by hardware engineers to
produce a layout to minimize delay and energy consumption, deep reinforcement
learning has become an emerging autonomous tool. However, the learning-centric
method is still in its early stage, impeded by a massive design space of size
ten to the order of a few thousand. This work presents MaskPlace to
automatically generate a valid chip layout design within a few hours, whose
performance can be superior or comparable to recent advanced approaches. It has
several appealing benefits that prior arts do not have. Firstly, MaskPlace
recasts placement as a problem of learning pixel-level visual representation to
comprehensively describe millions of modules on a chip, enabling placement in a
high-resolution canvas and a large action space. It outperforms recent methods
that represent a chip as a hypergraph. Secondly, it enables training the policy
network by an intuitive reward function with dense reward, rather than a
complicated reward function with sparse reward from previous methods. Thirdly,
extensive experiments on many public benchmarks show that MaskPlace outperforms
existing RL approaches in all key performance metrics, including wirelength,
congestion, and density. For example, it achieves 60%-90% wirelength reduction
and guarantees zero overlaps. We believe MaskPlace can improve AI-assisted chip
layout design. The deliverables are released at
https://laiyao1.github.io/maskplace.
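The abstract's two core ideas, a pixel-level occupancy representation of the canvas and a dense per-step reward, can be illustrated with a minimal sketch. This is not the authors' implementation: the grid size, the module and net formats, and the HPWL-difference form of the reward are assumptions for illustration only.

```python
# Hedged sketch (assumed structures, not MaskPlace's actual code):
# modules are (x, y, w, h) tuples on a coarse grid; nets are lists of
# pin (x, y) coordinates; the dense reward is the per-step reduction
# in total half-perimeter wirelength (HPWL).
import numpy as np

GRID = 32  # assumed canvas resolution; the paper uses a high-resolution canvas

def occupancy_mask(placed):
    """Rasterize placed modules into a 0/1 pixel-level occupancy mask."""
    mask = np.zeros((GRID, GRID), dtype=np.int8)
    for x, y, w, h in placed:
        mask[y:y + h, x:x + w] = 1
    return mask

def has_overlap(placed):
    """Zero-overlap check: no grid cell may be covered by two modules."""
    cover = np.zeros((GRID, GRID), dtype=np.int32)
    for x, y, w, h in placed:
        cover[y:y + h, x:x + w] += 1
    return bool((cover > 1).any())

def hpwl(pins):
    """Half-perimeter wirelength of one net from its pin coordinates."""
    xs, ys = zip(*pins)
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def dense_reward(nets_before, nets_after):
    """Dense per-step reward: reduction in total HPWL (assumed form)."""
    return sum(hpwl(n) for n in nets_before) - sum(hpwl(n) for n in nets_after)
```

A mask like this gives the policy a full view of occupied pixels at every step, and the HPWL difference supplies a non-sparse learning signal after each placement, in contrast to sparse end-of-episode rewards.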
Related papers
- Triple Point Masking [49.39218611030084]
Existing 3D mask learning methods encounter performance bottlenecks under limited data.
We introduce a triple point masking scheme, named TPM, which serves as a scalable framework for pre-training of masked autoencoders.
Extensive experiments show that the four baselines equipped with the proposed TPM achieve comprehensive performance improvements on various downstream tasks.
arXiv Detail & Related papers (2024-09-26T05:33:30Z) - MaDi: Learning to Mask Distractions for Generalization in Visual Deep
Reinforcement Learning [40.7452827298478]
We introduce MaDi, a novel algorithm that learns to mask distractions by the reward signal only.
In MaDi, the conventional actor-critic structure of deep reinforcement learning agents is complemented by a small third sibling, the Masker.
Our algorithm improves the agent's focus with useful masks, while its efficient Masker network only adds 0.2% more parameters to the original structure.
arXiv Detail & Related papers (2023-12-23T20:11:05Z) - Vision Transformer with Super Token Sampling [93.70963123497327]
Vision transformer has achieved impressive performance for many vision tasks.
However, it may suffer from high redundancy when capturing local features in shallow layers.
Super tokens attempt to provide a semantically meaningful tessellation of visual content.
arXiv Detail & Related papers (2022-11-21T03:48:13Z) - Training Your Sparse Neural Network Better with Any Mask [106.134361318518]
Pruning large neural networks to create high-quality, independently trainable sparse masks is desirable.
In this paper we demonstrate an alternative opportunity: one can customize the sparse training techniques to deviate from the default dense network training protocols.
Our new sparse training recipe is generally applicable to improving training from scratch with various sparse masks.
arXiv Detail & Related papers (2022-06-26T00:37:33Z) - Routing and Placement of Macros using Deep Reinforcement Learning [0.0]
We train a model to place the nodes of a chip netlist onto a chip canvas.
We aim to build a neural architecture that accurately rewards the agent across a wide variety of input netlists.
arXiv Detail & Related papers (2022-05-19T02:40:58Z) - Masked Autoencoders Are Scalable Vision Learners [60.97703494764904]
Masked autoencoders (MAE) are scalable self-supervised learners for computer vision.
Our MAE approach is simple: we mask random patches of the input image and reconstruct the missing pixels.
Coupling these two designs enables us to train large models efficiently and effectively.
arXiv Detail & Related papers (2021-11-11T18:46:40Z) - KSM: Fast Multiple Task Adaption via Kernel-wise Soft Mask Learning [49.77278179376902]
Deep Neural Networks (DNNs) can forget knowledge about earlier tasks when learning new tasks, a phenomenon known as catastrophic forgetting.
Recent continual learning methods can alleviate catastrophic forgetting on toy-sized datasets.
We propose a new training method called Kernel-wise Soft Mask (KSM), which learns a kernel-wise hybrid binary and real-value soft mask for each task.
arXiv Detail & Related papers (2020-09-11T21:48:39Z) - Chip Placement with Deep Reinforcement Learning [40.952111701288125]
We present a learning-based approach to chip placement.
Unlike prior methods, our approach has the ability to learn from past experience and improve over time.
In under 6 hours, our method can generate placements that are superhuman or comparable on modern accelerator netlists.
arXiv Detail & Related papers (2020-04-22T17:56:07Z) - BlendMask: Top-Down Meets Bottom-Up for Instance Segmentation [103.74690082121079]
In this work, we achieve improved mask prediction by effectively combining instance-level information with semantic information with lower-level fine-granularity.
Our main contribution is a blender module which draws inspiration from both top-down and bottom-up instance segmentation approaches.
BlendMask can effectively predict dense per-pixel position-sensitive instance features with very few channels, and learn attention maps for each instance with merely one convolution layer.
arXiv Detail & Related papers (2020-01-02T03:30:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.