Track-Assignment Detailed Routing Using Attention-based Policy Model With Supervision
- URL: http://arxiv.org/abs/2010.13702v1
- Date: Mon, 26 Oct 2020 16:40:11 GMT
- Title: Track-Assignment Detailed Routing Using Attention-based Policy Model With Supervision
- Authors: Haiguang Liao, Qingyi Dong, Weiyi Qi, Elias Fallon, Levent Burak Kara
- Abstract summary: We propose a machine learning driven method for solving the track-assignment detailed routing problem.
Our approach adopts an attention-based reinforcement learning (RL) policy model.
We show that especially for complex problems, our supervised RL method provides good quality solutions.
- Score: 0.27998963147546135
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Detailed routing is one of the most critical steps in analog circuit design.
Complete routing has become increasingly challenging in advanced node
analog circuits, making advances in efficient automatic routers ever more
necessary. In this work, we propose a machine learning driven method for
solving the track-assignment detailed routing problem for advanced node analog
circuits. Our approach adopts an attention-based reinforcement learning (RL)
policy model. Our main insight and advancement over this RL model is the use of
supervision as a way to leverage solutions generated by a conventional genetic
algorithm (GA). For this, our approach minimizes the Kullback-Leibler
divergence loss between the output from the RL policy model and a solution
distribution obtained from the genetic solver. The key advantage of this
approach is that the router can learn a policy in an offline setting with
supervision, while improving the run-time performance nearly 100x over the
genetic solver. Moreover, the quality of the solutions our approach produces
matches well with those generated by GA. We show that especially for complex
problems, our supervised RL method provides good quality solutions similar to
conventional attention-based RL without compromising run-time performance. The
ability to learn from example designs and train the router to get similar
solutions with orders of magnitude run-time improvement can impact the design
flow dramatically, potentially enabling increased design exploration and
routability-driven placement.
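The supervision mechanism described in the abstract (a KL-divergence loss that pulls an attention-based policy toward a solution distribution derived from the genetic solver) can be sketched roughly as follows. This is a minimal, illustrative PyTorch sketch: the network, tensor shapes, and the construction of the GA target distribution are assumptions, not the paper's actual implementation.

```python
# Minimal sketch (assumed PyTorch implementation) of KL-supervised training of
# an attention-based routing policy. All names, shapes, and the GA target
# construction are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionPolicy(nn.Module):
    """Toy attention scorer over candidate track assignments."""
    def __init__(self, d_model: int = 64):
        super().__init__()
        self.query = nn.Linear(d_model, d_model)
        self.key = nn.Linear(d_model, d_model)

    def forward(self, state_emb, cand_emb):
        # state_emb: (batch, d_model) embedding of the current routing state
        # cand_emb:  (batch, n_candidates, d_model) candidate track assignments
        q = self.query(state_emb).unsqueeze(1)            # (batch, 1, d)
        k = self.key(cand_emb)                            # (batch, n, d)
        scores = (q * k).sum(-1) / k.size(-1) ** 0.5      # scaled dot-product attention
        return F.log_softmax(scores, dim=-1)              # log pi(action | state)

def supervised_step(policy, optimizer, state_emb, cand_emb, ga_target):
    """One offline update that minimizes the KL divergence to the GA target.

    ga_target is a probability distribution over candidate assignments built
    from genetic-algorithm solutions (e.g., normalized selection frequencies);
    the exact construction is an assumption here.
    """
    log_probs = policy(state_emb, cand_emb)
    # F.kl_div expects log-probabilities as input and probabilities as target.
    loss = F.kl_div(log_probs, ga_target, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with random tensors standing in for real routing features.
policy = AttentionPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
state = torch.randn(8, 64)
cands = torch.randn(8, 10, 64)
target = F.softmax(torch.randn(8, 10), dim=-1)            # stand-in GA distribution
supervised_step(policy, opt, state, cands, target)
```

Trained offline this way, the policy can then be deployed without invoking the genetic solver at inference time, which is the source of the reported run-time improvement.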
Related papers
- Intelligent Routing Algorithm over SDN: Reusable Reinforcement Learning Approach [1.799933345199395]
We develop a reusable, RL-aware routing algorithm, RLSR-Routing, over SDN.
Our algorithm shows better performance in terms of load balancing than traditional approaches.
It also has faster convergence than the non-reusable RL approach when finding paths for multiple traffic demands.
arXiv Detail & Related papers (2024-09-23T17:15:24Z) - Multiobjective Vehicle Routing Optimization with Time Windows: A Hybrid Approach Using Deep Reinforcement Learning and NSGA-II [52.083337333478674]
This paper proposes a weight-aware deep reinforcement learning (WADRL) approach designed to address the multiobjective vehicle routing problem with time windows (MOVRPTW).
The Non-dominated sorting genetic algorithm-II (NSGA-II) method is then employed to optimize the outcomes produced by the WADRL.
arXiv Detail & Related papers (2024-07-18T02:46:06Z) - An Efficient Learning-based Solver Comparable to Metaheuristics for the
Capacitated Arc Routing Problem [67.92544792239086]
We introduce an NN-based solver to significantly narrow the gap with advanced metaheuristics.
First, we propose a direction-aware facilitating attention model (DaAM) to incorporate directionality into the embedding process.
Second, we design a supervised reinforcement learning scheme that involves supervised pre-training to establish a robust initial policy.
arXiv Detail & Related papers (2024-03-11T02:17:42Z) - Partial End-to-end Reinforcement Learning for Robustness Against Modelling Error in Autonomous Racing [0.0]
This paper addresses the issue of increasing the performance of reinforcement learning (RL) solutions for autonomous racing cars.
We propose a partial end-to-end algorithm that decouples the planning and control tasks.
By leveraging the robustness of a classical controller, our partial end-to-end driving algorithm exhibits better robustness towards model mismatches than standard end-to-end algorithms.
arXiv Detail & Related papers (2023-12-11T14:27:10Z) - MARLIN: Soft Actor-Critic based Reinforcement Learning for Congestion
Control in Real Networks [63.24965775030673]
We propose a novel Reinforcement Learning (RL) approach to design generic Congestion Control (CC) algorithms.
Our solution, MARLIN, uses the Soft Actor-Critic algorithm to maximize both entropy and return.
We trained MARLIN on a real network with varying background traffic patterns to overcome the sim-to-real mismatch.
arXiv Detail & Related papers (2023-02-02T18:27:20Z) - Fidelity-Guarantee Entanglement Routing in Quantum Networks [64.49733801962198]
Entanglement routing establishes a remote entanglement connection between two arbitrary nodes.
We propose purification-enabled entanglement routing designs to provide fidelity guarantee for multiple Source-Destination (SD) pairs in quantum networks.
arXiv Detail & Related papers (2021-11-15T14:07:22Z) - Ranking Cost: Building An Efficient and Scalable Circuit Routing Planner
with Evolution-Based Optimization [49.207538634692916]
We propose a new algorithm for circuit routing, named Ranking Cost, to form an efficient and trainable router.
In our method, we introduce a new set of variables called cost maps, which help the A* router find proper paths.
Our algorithm is trained in an end-to-end manner and does not use any artificial data or human demonstration.
arXiv Detail & Related papers (2021-10-08T07:22:45Z) - A Heuristically Assisted Deep Reinforcement Learning Approach for
Network Slice Placement [0.7885276250519428]
We introduce a hybrid placement solution based on Deep Reinforcement Learning (DRL) and a dedicated optimization based on the Power of Two Choices principle.
The proposed Heuristically-Assisted DRL (HA-DRL) accelerates the learning process and improves resource usage compared with other state-of-the-art approaches.
arXiv Detail & Related papers (2021-05-14T10:04:17Z) - Optimization-driven Deep Reinforcement Learning for Robust Beamforming
in IRS-assisted Wireless Communications [54.610318402371185]
Intelligent reflecting surface (IRS) is a promising technology to assist downlink information transmissions from a multi-antenna access point (AP) to a receiver.
We minimize the AP's transmit power by a joint optimization of the AP's active beamforming and the IRS's passive beamforming.
We propose a deep reinforcement learning (DRL) approach that can adapt the beamforming strategies from past experiences.
arXiv Detail & Related papers (2020-05-25T01:42:55Z) - Attention Routing: track-assignment detailed routing using
attention-based reinforcement learning [0.23453441553817037]
We propose a new router, the attention router, which is the first attempt to solve the track-assignment detailed routing problem using reinforcement learning.
The attention router and its baseline genetic router are applied to problem sets from different commercial advanced-technology analog circuits.
arXiv Detail & Related papers (2020-04-20T17:50:13Z) - Towards Cognitive Routing based on Deep Reinforcement Learning [17.637357380527583]
We propose a definition of cognitive routing and an implementation approach based on Deep Reinforcement Learning (DRL).
To facilitate research on DRL-based cognitive routing, we introduce a simulator named RL4Net for DRL-based routing algorithm development and simulation.
The simulation results on an example network topology show that the DDPG-based routing algorithm achieves better performance than OSPF and random weight algorithms.
arXiv Detail & Related papers (2020-03-19T03:32:43Z)