Tensor Program Optimization with Probabilistic Programs
- URL: http://arxiv.org/abs/2205.13603v1
- Date: Thu, 26 May 2022 20:20:02 GMT
- Title: Tensor Program Optimization with Probabilistic Programs
- Authors: Junru Shao, Xiyou Zhou, Siyuan Feng, Bohan Hou, Ruihang Lai, Hongyi
Jin, Wuwei Lin, Masahiro Masuda, Cody Hao Yu, Tianqi Chen
- Abstract summary: This paper introduces MetaSchedule, a domain-specific probabilistic programming language abstraction to construct a rich search space of tensor programs.
Our abstraction allows domain experts to analyze the program and easily propose stochastic choices in a modular way to compose program transformations.
We also build an end-to-end learning-driven framework to find an optimized program for a given search space.
- Score: 16.303739951257814
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic optimization for tensor programs becomes increasingly
important as we deploy deep learning in various environments, and efficient
optimization relies on a rich search space and effective search. Most existing
efforts adopt a search space that does not let domain experts efficiently grow
it. This paper introduces MetaSchedule, a domain-specific probabilistic
programming language abstraction for constructing a rich search space of
tensor programs. Our abstraction allows domain experts to analyze the program
and easily propose stochastic choices in a modular way to compose program
transformations accordingly. We also build an end-to-end learning-driven
framework to find an optimized program within a given search space.
Experimental results show that MetaSchedule can cover, in a modular way, the
search spaces used in state-of-the-art tensor program optimization frameworks.
Additionally, it empowers domain experts to conveniently grow the search space
and modularly enhance the system, which brings a 48% speedup on end-to-end
deep learning workloads.
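To make the abstraction concrete, below is a minimal sketch of a stochastic schedule, assuming Apache TVM's TensorIR schedule API (tvm.tir.Schedule with sample_perfect_tile), the codebase into which MetaSchedule was upstreamed; the 128x128 matmul workload and two-level tiling are illustrative assumptions, not details taken from the paper. Tile sizes are recorded as random variables in the schedule trace, so a learning-driven framework can sample, measure, and replay many concrete programs from one modular specification.

```python
# Minimal sketch: a stochastic schedule in TVM's TensorIR (illustrative
# workload and tiling; assumes a recent Apache TVM build).
import tvm
from tvm import tir
from tvm.script import tir as T


@tvm.script.ir_module
class MatmulModule:
    @T.prim_func
    def main(A: T.Buffer((128, 128), "float32"),
             B: T.Buffer((128, 128), "float32"),
             C: T.Buffer((128, 128), "float32")):
        # Plain 128x128x128 matmul as the workload to be optimized.
        for i, j, k in T.grid(128, 128, 128):
            with T.block("C"):
                vi, vj, vk = T.axis.remap("SSR", [i, j, k])
                with T.init():
                    C[vi, vj] = T.float32(0)
                C[vi, vj] = C[vi, vj] + A[vi, vk] * B[vk, vj]


sch = tir.Schedule(MatmulModule)
block = sch.get_block("C")
i, j, k = sch.get_loops(block)

# Stochastic choices: sample tile sizes whose product equals each loop
# extent; the samples are random variables recorded in the trace rather
# than hard-coded constants.
i_factors = sch.sample_perfect_tile(i, n=2)
j_factors = sch.sample_perfect_tile(j, n=2)

# Deterministic transformations composed on top of the stochastic choices.
i0, i1 = sch.split(i, factors=i_factors)
j0, j1 = sch.split(j, factors=j_factors)
sch.reorder(i0, j0, i1, j1, k)

# Each replay of this trace draws fresh tile sizes, yielding a different
# concrete tensor program in the search space.
print(sch.trace)
```

A search framework can then explore this space automatically, for example with evolutionary search guided by a learned cost model, and replay the best-scoring trace to obtain the final program.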
Related papers
- Dynamic operator management in meta-heuristics using reinforcement learning: an application to permutation flowshop scheduling problems [0.3495246564946556]
This study develops a framework based on reinforcement learning to dynamically manage a large portfolio of search operators within meta-heuristics.
A Q-learning-based adaptive operator selection mechanism is used to select the most suitable operator from the dynamically updated portfolio.
The performance of the proposed framework is analyzed through an application to the permutation flowshop scheduling problem.
arXiv Detail & Related papers (2024-08-27T08:38:17Z)
- Can LLMs Configure Software Tools [0.76146285961466]
In software engineering, the meticulous configuration of software tools is crucial in ensuring optimal performance within intricate systems.
In this study, we explore leveraging Large Language Models (LLMs) to streamline the software configuration process.
Our work presents a novel approach that employs LLMs, such as ChatGPT, to identify starting conditions and narrow down the search space, improving configuration efficiency.
arXiv Detail & Related papers (2023-12-11T05:03:02Z)
- Bayesian Program Learning by Decompiling Amortized Knowledge [50.960612835957875]
We present a novel approach for library learning that directly leverages the neural search policy, effectively "decompiling" its amortized knowledge to extract relevant program components.
This provides stronger amortized inference: the amortized knowledge learnt to reduce search breadth is now also used to reduce search depth.
arXiv Detail & Related papers (2023-06-13T15:35:01Z)
- Efficient Non-Parametric Optimizer Search for Diverse Tasks [93.64739408827604]
We present the first efficient, scalable, and general framework that can directly search on the tasks of interest.
Inspired by the innate tree structure of the underlying math expressions, we re-arrange the spaces into a super-tree.
We adopt an adaptation of the Monte Carlo method to tree search, equipped with rejection sampling and equivalent-form detection.
arXiv Detail & Related papers (2022-09-27T17:51:31Z)
- CrossBeam: Learning to Search in Bottom-Up Program Synthesis [51.37514793318815]
We propose training a neural model to learn a hands-on search policy for bottom-up synthesis.
Our approach, called CrossBeam, uses the neural model to choose how to combine previously-explored programs into new programs.
We observe that CrossBeam learns to search efficiently, exploring much smaller portions of the program space compared to the state-of-the-art.
arXiv Detail & Related papers (2022-03-20T04:41:05Z)
- Learning to Superoptimize Real-world Programs [79.4140991035247]
We propose a framework to learn to superoptimize real-world programs by using neural sequence-to-sequence models.
We introduce the Big Assembly benchmark, a dataset consisting of over 25K real-world functions mined from open-source projects in x86-64 assembly.
arXiv Detail & Related papers (2021-09-28T05:33:21Z)
- Learning Space Partitions for Path Planning [54.475949279050596]
PlaLaM outperforms existing path planning methods in 2D navigation tasks, especially in the presence of difficult-to-escape local optima.
These gains transfer to highly multimodal real-world tasks, where we outperform strong baselines in compiler phase ordering by up to 245% and in molecular design by up to 0.4 on properties measured on a 0-1 scale.
arXiv Detail & Related papers (2021-06-19T18:06:11Z)
- Leveraging Language to Learn Program Abstractions and Search Heuristics [66.28391181268645]
We introduce LAPS (Language for Abstraction and Program Search), a technique for using natural language annotations to guide joint learning of libraries and neurally-guided search models for synthesis.
When integrated into a state-of-the-art library learning system (DreamCoder), LAPS produces higher-quality libraries and improves search efficiency and generalization.
arXiv Detail & Related papers (2021-06-18T15:08:47Z)
- Learning Differentiable Programs with Admissible Neural Heuristics [43.54820901841979]
We study the problem of learning differentiable functions expressed as programs in a domain-specific language.
We frame this optimization problem as a search in a weighted graph whose paths encode top-down derivations of program syntax.
Our key innovation is to view various classes of neural networks as continuous relaxations over the space of programs.
arXiv Detail & Related papers (2020-07-23T16:07:39Z)
- Learning Heuristic Selection with Dynamic Algorithm Configuration [44.91083687014879]
We show that dynamic algorithm configuration can be used for dynamic heuristic selection that takes into account the internal search dynamics of a planning system.
We show that this approach generalizes over existing approaches and can exponentially improve the performance of the heuristic search.
arXiv Detail & Related papers (2020-06-15T09:35:07Z)