Safe Automated Refactoring for Efficient Migration of Imperative Deep Learning Programs to Graph Execution
- URL: http://arxiv.org/abs/2504.05424v1
- Date: Mon, 07 Apr 2025 18:48:43 GMT
- Title: Safe Automated Refactoring for Efficient Migration of Imperative Deep Learning Programs to Graph Execution
- Authors: Raffi Khatchadourian, Tatiana Castro Vélez, Mehdi Bagherzadeh, Nan Jia, Anita Raja
- Abstract summary: We present an automated approach to determine when it is safe and potentially advantageous to migrate imperative DL code to graph execution. The approach is implemented as a PyDev Eclipse plug-in that integrates the WALA Ariadne analysis framework.
- Score: 4.461099699060121
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Efficiency is essential to support responsiveness w.r.t. ever-growing datasets, especially for Deep Learning (DL) systems. DL frameworks have traditionally embraced deferred execution-style DL code -- supporting symbolic, graph-based Deep Neural Network (DNN) computation. While scalable, such development is error-prone, non-intuitive, and difficult to debug. Consequently, more natural, imperative DL frameworks encouraging eager execution have emerged at the expense of run-time performance. Though hybrid approaches aim for the "best of both worlds," using them effectively requires subtle considerations to make code amenable to safe, accurate, and efficient graph execution. We present an automated refactoring approach that assists developers in specifying whether their otherwise eagerly-executed imperative DL code could be reliably and efficiently executed as graphs while preserving semantics. The approach, based on a novel imperative tensor analysis, automatically determines when it is safe and potentially advantageous to migrate imperative DL code to graph execution. The approach is implemented as a PyDev Eclipse IDE plug-in that integrates the WALA Ariadne analysis framework and evaluated on 19 Python projects consisting of 132.05 KLOC. We found that 326 of 766 candidate functions (42.56%) were refactorable, and an average speedup of 2.16 on performance tests was observed. The results indicate that the approach is useful in optimizing imperative DL code to its full potential.
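To make the migration concrete, below is a minimal, illustrative TensorFlow sketch of hybridizing an eagerly executed function with tf.function, a typical graph-execution mechanism for such refactorings. The function, tensor names, and shapes are assumptions for illustration, not drawn from the paper's subject projects.

```python
import tensorflow as tf

# Eager (imperative) version: easy to write and debug, but each call
# is dispatched op-by-op from Python.
def dense_relu_eager(x, w, b):
    return tf.nn.relu(tf.matmul(x, w) + b)

# Hybridized version: tf.function traces the body into a graph that
# TensorFlow can optimize and reuse. The refactoring is only
# semantics-preserving when, e.g., the body has no Python side effects
# that must run on every call -- the kind of precondition the paper's
# imperative tensor analysis is designed to check automatically.
@tf.function
def dense_relu_graph(x, w, b):
    return tf.nn.relu(tf.matmul(x, w) + b)

x = tf.random.normal([64, 128])
w = tf.random.normal([128, 256])
b = tf.zeros([256])

# Both versions compute the same result; the graph version avoids
# per-op Python overhead on repeated calls.
print(tf.reduce_max(tf.abs(dense_relu_eager(x, w, b) - dense_relu_graph(x, w, b))))
```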
Related papers
- DARS: Dynamic Action Re-Sampling to Enhance Coding Agent Performance by Adaptive Tree Traversal [55.13854171147104]
Large Language Models (LLMs) have revolutionized various domains, including natural language processing, data analysis, and software development. We present Dynamic Action Re-Sampling (DARS), a novel inference-time compute scaling approach for coding agents. We evaluate our approach on the SWE-Bench Lite benchmark, demonstrating that this scaling strategy achieves a pass@k score of 55% with Claude 3.5 Sonnet V2.
arXiv Detail & Related papers (2025-03-18T14:02:59Z) - Efficient Deep Learning Board: Training Feedback Is Not All You Need [28.910266386748525]
We propose EfficientDL, an innovative deep learning board for automatic performance prediction and component recommendation.
The magic of no training feedback comes from our proposed comprehensive, multi-dimensional, fine-grained system component dataset.
For example, EfficientDL operates seamlessly with mainstream models such as ResNet50, MobileNetV3, EfficientNet-B0, MaxViT-T, Swin-B, and DaViT-T.
arXiv Detail & Related papers (2024-10-17T14:43:34Z) - Can Graph Learning Improve Planning in LLM-based Agents? [61.47027387839096]
Task planning in language agents is emerging as an important research topic alongside the development of large language models (LLMs)
In this paper, we explore graph learning-based methods for task planning, a direction that is orthogonal to the prevalent focus on prompt design.
Our interest in graph learning stems from a theoretical discovery: the biases of attention and auto-regressive loss impede LLMs' ability to effectively navigate decision-making on graphs.
arXiv Detail & Related papers (2024-05-29T14:26:24Z) - Towards Safe Automated Refactoring of Imperative Deep Learning Programs to Graph Execution [4.786072763033669]
More natural, less error-prone imperative DL frameworks encouraging eager execution have emerged at the expense of run-time performance.
We present our ongoing work on an automated approach that assists developers in specifying whether and how their otherwise imperative DL code could be reliably and efficiently executed as graphs.
The approach is being implemented as a PyDev Eclipse plug-in and uses the WALA Ariadne analysis framework.
arXiv Detail & Related papers (2023-08-22T20:50:19Z) - Improving Dual-Encoder Training through Dynamic Indexes for Negative Mining [61.09807522366773]
We introduce an algorithm that approximates the softmax with provable bounds and that dynamically maintains the tree.
In our study on datasets with over twenty million targets, our approach cuts error by half in relation to oracle brute-force negative mining.
arXiv Detail & Related papers (2023-03-27T15:18:32Z) - Slapo: A Schedule Language for Progressive Optimization of Large Deep Learning Model Training [17.556432199389615]
Slapo is a schedule language that decouples the execution of a tensor-level operator from its arithmetic definition.
We show that Slapo can improve training throughput by up to 2.92x on a single machine with 8 NVIDIA V100 GPUs.
arXiv Detail & Related papers (2023-02-16T00:34:53Z) - Challenges in Migrating Imperative Deep Learning Programs to Graph Execution: An Empirical Study [4.415977307120617]
We conduct a data-driven analysis of challenges -- and resultant bugs -- involved in writing reliable yet performant imperative DL code.
We put forth several recommendations, best practices, and anti-patterns for effectively hybridizing imperative DL code.
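One widely documented pitfall of this kind is relying on Python side effects inside a traced function; the sketch below is a generic tf.function example of that anti-pattern, not necessarily one of the study's reported bugs.

```python
import tensorflow as tf

calls = []

@tf.function
def step(x):
    # Anti-pattern: a Python side effect inside a traced function.
    # It executes only while TensorFlow traces the graph, not on
    # every subsequent call with the same input signature.
    calls.append(1)
    return x * 2

step(tf.constant(1))
step(tf.constant(2))
print(len(calls))  # 1, not 2 -- the append ran only at trace time
```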
arXiv Detail & Related papers (2022-01-24T21:12:38Z) - Terra: Imperative-Symbolic Co-Execution of Imperative Deep Learning Programs [7.656446581986389]
Imperative programming allows users to implement their deep neural networks (DNNs) easily.
Several systems have been proposed to combine the usability of imperative programming with the optimized performance of symbolic graph execution.
We propose Terra, an imperative-symbolic co-execution system that can handle any imperative DL programs while achieving the optimized performance of symbolic graph execution.
arXiv Detail & Related papers (2022-01-23T09:04:48Z) - Highly Parallel Autoregressive Entity Linking with Discriminative Correction [51.947280241185]
We propose a very efficient approach that parallelizes autoregressive linking across all potential mentions.
Our model is >70 times faster and more accurate than the previous generative method.
arXiv Detail & Related papers (2021-09-08T17:28:26Z) - Software Vulnerability Detection via Deep Learning over Disaggregated Code Graph Representation [57.92972327649165]
This work explores a deep learning approach to automatically learn the insecure patterns from code corpora.
Because code naturally admits graph structures with parsing, we develop a novel graph neural network (GNN) to exploit both the semantic context and structural regularity of a program.
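As a small, generic sketch of the "code as graph" idea (hypothetical preprocessing, not the paper's actual graph construction), Python's ast module already exposes the parent-child structure a code-graph GNN would consume:

```python
import ast

# Hypothetical preprocessing: turn a snippet into parent->child AST
# edges, the basic structural relation a code-graph GNN consumes.
src = "def add(a, b):\n    return a + b\n"
tree = ast.parse(src)

edges = [
    (type(parent).__name__, type(child).__name__)
    for parent in ast.walk(tree)
    for child in ast.iter_child_nodes(parent)
]
print(edges)  # e.g. [('Module', 'FunctionDef'), ('FunctionDef', 'arguments'), ...]
```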
arXiv Detail & Related papers (2021-09-07T21:24:36Z) - PolyDL: Polyhedral Optimizations for Creation of High Performance DL primitives [55.79741270235602]
We present compiler algorithms to automatically generate high performance implementations of Deep Learning primitives.
We develop novel data reuse analysis algorithms using the polyhedral model.
We also show that such a hybrid compiler plus a minimal library-use approach results in state-of-the-art performance.
arXiv Detail & Related papers (2020-06-02T06:44:09Z)