Moses: Efficient Exploitation of Cross-device Transferable Features for Tensor Program Optimization
- URL: http://arxiv.org/abs/2201.05752v1
- Date: Sat, 15 Jan 2022 03:55:52 GMT
- Title: Moses: Efficient Exploitation of Cross-device Transferable Features for Tensor Program Optimization
- Authors: Zhihe Zhao, Xian Shuai, Yang Bai, Neiwen Ling, Nan Guan, Zhenyu Yan,
Guoliang Xing
- Abstract summary: We propose Moses, a simple and efficient design based on the lottery ticket hypothesis.
Compared with state-of-the-art approaches, Moses achieves up to 1.53X efficiency gain in the search stage.
- Score: 10.115260534967645
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Achieving efficient execution of machine learning models has attracted
significant attention recently. To generate tensor programs efficiently, a key
component of DNN compilers is the cost model that can predict the performance
of each configuration on specific devices. However, due to the rapid emergence
of hardware platforms, it is increasingly labor-intensive to train
domain-specific predictors for every new platform. Moreover, current cost model designs cannot provide features that transfer between different hardware accelerators efficiently and effectively. In this paper, we propose Moses, a
simple and efficient design based on the lottery ticket hypothesis, which fully
takes advantage of the features transferable to the target device via domain
adaptation. Compared with state-of-the-art approaches, Moses achieves up to
1.53X efficiency gain in the search stage and 1.41X inference speedup on
challenging DNN benchmarks.
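The abstract pairs the lottery ticket hypothesis with domain adaptation but does not spell out the mechanics. Below is a minimal, hypothetical sketch of that recipe, not Moses' released implementation: magnitude-prune a cost model trained on the source device to expose a sparse "winning ticket", then fine-tune only the surviving weights on a few target-device measurements. The MLP architecture, feature width, and training loop are all assumptions.

```python
# Hypothetical sketch (NOT the authors' code): prune a source-device cost
# model, then adapt the sparse subnetwork to the target device.
import torch
import torch.nn as nn

class CostModel(nn.Module):
    """Toy MLP mapping schedule features to a predicted latency."""
    def __init__(self, in_dim=164, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def winning_ticket_masks(model, sparsity=0.8):
    """Keep the largest-magnitude weights of each matrix; drop the rest."""
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() < 2:          # skip biases
            continue
        k = int(p.numel() * sparsity)
        thresh = p.abs().flatten().kthvalue(k).values
        masks[name] = (p.abs() > thresh).float()
    return masks

def adapt_to_target(model, masks, x_tgt, y_tgt, steps=200):
    """Domain-adaptation step: fine-tune the sparse ticket on a small set
    of (features, measured latency) pairs from the target device."""
    with torch.no_grad():        # apply the ticket before adapting
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name])
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.mse_loss(model(x_tgt), y_tgt).backward()
        opt.step()
        with torch.no_grad():    # keep pruned weights at zero
            for name, p in model.named_parameters():
                if name in masks:
                    p.mul_(masks[name])
    return model
```

In this reading, the sparse subnetwork carries the cross-device transferable features while fine-tuning supplies the device-specific residual; the paper's actual pruning criterion and adaptation loss are not given in this summary.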
Related papers
- COGNATE: Acceleration of Sparse Tensor Programs on Emerging Hardware using Transfer Learning [6.884173899890476]
COGNATE is a novel framework that leverages inexpensive data samples from general-purpose hardware to train cost models. We demonstrate that COGNATE outperforms existing techniques, achieving average speedups of 1.47x (up to 5.46x) for SpMM and 1.39x (up to 4.22x) for SDDMM. (A transfer-learning sketch follows the entry.)
arXiv Detail & Related papers (2025-05-31T06:59:55Z)
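The entry names the technique, training cost models on cheap general-purpose-hardware samples before transferring to emerging hardware, without detailing it. Here is a deliberately small sketch of that pretrain-then-fine-tune pattern; the model, feature sizes, and data are stand-ins, not COGNATE's design.

```python
# Illustrative only: pretrain on abundant CPU-collected samples, then
# fine-tune on scarce samples from the emerging accelerator.
import torch
import torch.nn as nn

def fit(model, x, y, steps, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.mse_loss(model(x), y).backward()
        opt.step()

cost_model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))

# Stand-in data: sparse-kernel configuration features and latencies.
cpu_x, cpu_y = torch.randn(10_000, 64), torch.randn(10_000, 1)  # cheap, abundant
acc_x, acc_y = torch.randn(64, 64), torch.randn(64, 1)          # costly, scarce

fit(cost_model, cpu_x, cpu_y, steps=500, lr=1e-3)  # source pretraining
fit(cost_model, acc_x, acc_y, steps=100, lr=1e-4)  # target fine-tuning
```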
- MobileMamba: Lightweight Multi-Receptive Visual Mamba Network [51.33486891724516]
Previous research on lightweight models has primarily focused on CNN- and Transformer-based designs.
We propose the MobileMamba framework, which balances efficiency and performance.
MobileMamba achieves up to 83.6% Top-1 accuracy, surpassing existing state-of-the-art methods.
arXiv Detail & Related papers (2024-11-24T18:01:05Z)
- EPS-MoE: Expert Pipeline Scheduler for Cost-Efficient MoE Inference [49.94169109038806]
This paper introduces EPS-MoE, a novel expert pipeline scheduler for MoE.
Our results demonstrate an average 21% improvement in prefill throughput over existing parallel inference methods.
arXiv Detail & Related papers (2024-10-16T05:17:49Z)
- An Efficient Real-Time Object Detection Framework on Resource-Constricted Hardware Devices via Software and Hardware Co-design [11.857890662690448]
This paper proposes an efficient real-time object detection framework on resource-constrained hardware devices through hardware and software co-design.
Results show that the proposed method significantly reduces both model size and execution time.
arXiv Detail & Related papers (2024-08-02T18:47:11Z)
- A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both inference accuracy and mean square error without requiring additional training data. (A sketch of this architecture follows the entry.)
arXiv Detail & Related papers (2023-09-02T11:01:16Z)
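The MEMTL summary above does describe the architecture: one shared backbone feeding several prediction heads whose outputs are ensembled. A minimal sketch of that structure follows; the dimensions, head count, and averaging rule are assumptions.

```python
# Hypothetical shared-backbone, multi-head ensemble in the spirit of MEMTL.
import torch
import torch.nn as nn

class MultiHeadEnsemble(nn.Module):
    def __init__(self, in_dim=32, hidden=64, out_dim=8, num_heads=4):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Several lightweight prediction heads share one feature extractor.
        self.heads = nn.ModuleList(
            nn.Linear(hidden, out_dim) for _ in range(num_heads)
        )

    def forward(self, x):
        z = self.backbone(x)
        # Ensemble by averaging the heads' predictions.
        return torch.stack([h(z) for h in self.heads]).mean(dim=0)

model = MultiHeadEnsemble()
scores = model(torch.randn(1, 32))  # e.g. scores per offloading action
```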
- FlexMoE: Scaling Large-scale Sparse Pre-trained Model Training via Dynamic Device Placement [19.639936387834677]
Mixture-of-Experts (MoEs) are becoming more popular and have demonstrated impressive pretraining scalability in various downstream tasks.
MoEs are becoming a new data analytics paradigm in the data life cycle and face unique challenges at previously impossible scales, complexities, and granularities.
In this paper, we propose a novel DNN training framework, FlexMoE, which systematically and transparently addresses the inefficiency caused by dynamic dataflow. (A toy illustration follows the entry.)
arXiv Detail & Related papers (2023-04-08T07:34:26Z)
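The summary names dynamic device placement as the mechanism but not the policy. The toy below illustrates the general idea only, periodically remapping experts to devices in proportion to their observed token load, and is emphatically not FlexMoE's actual algorithm.

```python
# Toy illustration: greedy rebalancing of experts across devices so a
# skewed, dynamic routing distribution does not overload one device.
from collections import Counter

def rebalance(expert_load: Counter, num_devices: int) -> dict:
    """Assign the hottest experts first, always to the currently
    least-loaded device."""
    device_load = [0] * num_devices
    placement = {}
    for expert, load in expert_load.most_common():
        dev = min(range(num_devices), key=device_load.__getitem__)
        placement[expert] = dev
        device_load[dev] += load
    return placement

# Example: token counts routed to 8 experts during the last interval.
load = Counter({0: 900, 1: 40, 2: 35, 3: 500, 4: 30, 5: 20, 6: 450, 7: 25})
print(rebalance(load, num_devices=4))
```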
- Energy-efficient Task Adaptation for NLP Edge Inference Leveraging Heterogeneous Memory Architectures [68.91874045918112]
adapter-ALBERT is an efficient model optimization that maximizes data reuse across different tasks. (A generic adapter sketch follows the entry.)
We demonstrate the advantage of mapping the model to a heterogeneous on-chip memory architecture by performing simulations on a validated NLP edge accelerator.
arXiv Detail & Related papers (2023-03-25T14:40:59Z)
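The summary does not detail adapter-ALBERT's adapters, so the following is the generic bottleneck-adapter pattern as an assumption: small task-specific modules inserted into a frozen backbone, so most weights (and their data) are reused across tasks.

```python
# Generic residual bottleneck adapter; an assumed stand-in, not the
# paper's exact module.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, dim=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)  # project down...
        self.up = nn.Linear(bottleneck, dim)    # ...and back up

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))  # residual connection

x = torch.randn(2, 10, 768)   # (batch, seq, hidden)
out = Adapter()(x)
```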
- Joint Spatial-Temporal and Appearance Modeling with Transformer for Multiple Object Tracking [59.79252390626194]
We propose a novel solution named TransSTAM, which leverages Transformer to model both the appearance features of each object and the spatial-temporal relationships among objects.
The proposed method is evaluated on multiple public benchmarks including MOT16, MOT17, and MOT20, and it achieves a clear performance improvement in both IDF1 and HOTA.
arXiv Detail & Related papers (2022-05-31T01:19:18Z)
- HAPI: Hardware-Aware Progressive Inference [18.214367595727037]
Convolutional neural networks (CNNs) have recently become the state-of-the-art in a wide variety of AI tasks.
Despite their popularity, CNN inference still comes at a high computational cost.
This work presents HAPI, a novel methodology for generating high-performance early-exit networks. (An early-exit sketch follows the entry.)
arXiv Detail & Related papers (2020-08-10T09:55:18Z)
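A minimal early-exit sketch of the mechanism HAPI builds on (generic, not one of HAPI's generated designs): attach an intermediate classifier and stop as soon as it is confident enough, so easy inputs skip the rest of the network. Stage sizes and threshold are assumptions.

```python
# Generic early-exit CNN: exit at the cheap head when confident.
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    def __init__(self, num_classes=10, threshold=0.9):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.exit1 = nn.Linear(16, num_classes)   # cheap early classifier
        self.exit2 = nn.Linear(32, num_classes)   # final classifier
        self.threshold = threshold

    def forward(self, x):
        h1 = self.stage1(x)
        logits1 = self.exit1(h1.mean(dim=(2, 3)))  # global-average-pool head
        # Batch-level decision for simplicity; real systems decide per sample.
        if logits1.softmax(-1).max() >= self.threshold:
            return logits1                          # confident: exit early
        h2 = self.stage2(h1)
        return self.exit2(h2.mean(dim=(2, 3)))

net = EarlyExitNet()
out = net(torch.randn(1, 3, 32, 32))
```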
- Towards High Performance, Portability, and Productivity: Lightweight Augmented Neural Networks for Performance Prediction [0.0]
We propose lightweight augmented neural networks for arbitrary combinations of kernel-variant-hardware.
We are able to obtain a low MAPE of 3%, significantly outperforming traditional feed-forward neural networks.
Our variant-selection approach can be used in Halide implementations to obtain up to 1.7x speedup over Halide's auto-scheduler. (A variant-selection sketch follows the entry.)
arXiv Detail & Related papers (2020-03-17T02:19:54Z)
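A hedged sketch of variant selection via a small performance predictor, in the spirit of the summary above: score every variant of a kernel on the given hardware and keep the one with the lowest predicted runtime. The feature encodings and network size are assumptions.

```python
# Illustrative variant selection with a lightweight runtime predictor.
import torch
import torch.nn as nn

predictor = nn.Sequential(nn.Linear(24, 32), nn.ReLU(), nn.Linear(32, 1))

def pick_variant(kernel_feat: torch.Tensor, variant_feats: torch.Tensor,
                 hw_feat: torch.Tensor) -> int:
    """Return the index of the variant with the lowest predicted runtime
    for this (kernel, hardware) combination."""
    n = variant_feats.shape[0]
    x = torch.cat([kernel_feat.expand(n, -1),   # same kernel features per row
                   variant_feats,               # one row per candidate variant
                   hw_feat.expand(n, -1)], dim=1)
    with torch.no_grad():
        return int(predictor(x).argmin())

best = pick_variant(torch.randn(1, 8), torch.randn(5, 8), torch.randn(1, 8))
```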
- MNN: A Universal and Efficient Inference Engine [6.830174586230231]
Mobile Neural Network (MNN) is a universal and efficient inference engine tailored to mobile applications.
The contributions of MNN include: (1) presenting a mechanism called pre-inference that manages to conduct runtime optimization; (2) delivering thorough kernel optimization on operators to achieve optimal performance; and (3) introducing a backend abstraction module that enables hybrid scheduling and keeps the engine lightweight. (A backend-abstraction sketch follows the entry.)
arXiv Detail & Related papers (2020-02-27T20:03:16Z)
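Contribution (3) above, a backend abstraction enabling hybrid scheduling, is a well-known pattern; the sketch below shows the generic shape of it, not MNN's actual C++ API: each device implements one uniform interface, and the engine dispatches ops per their preferred backend with a CPU fallback.

```python
# Generic backend-abstraction sketch; names and classes are illustrative.
from abc import ABC, abstractmethod

class Backend(ABC):
    """Uniform interface so the engine can schedule ops on any device."""
    @abstractmethod
    def alloc(self, nbytes: int): ...
    @abstractmethod
    def run_op(self, op: str, inputs: list): ...

class CPUBackend(Backend):
    def alloc(self, nbytes):
        return bytearray(nbytes)        # plain host buffer
    def run_op(self, op, inputs):
        return f"cpu:{op}"

class GPUBackend(Backend):
    def alloc(self, nbytes):
        return bytearray(nbytes)        # stand-in for a device allocation
    def run_op(self, op, inputs):
        return f"gpu:{op}"

def hybrid_schedule(ops, backends):
    """Dispatch each op to its preferred backend, falling back to CPU."""
    cpu = backends["cpu"]
    return [backends.get(dev, cpu).run_op(op, []) for op, dev in ops]

print(hybrid_schedule([("conv", "gpu"), ("argmax", "cpu")],
                      {"cpu": CPUBackend(), "gpu": GPUBackend()}))
```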
- PatDNN: Achieving Real-Time DNN Execution on Mobile Devices with Pattern-based Weight Pruning [57.20262984116752]
We introduce a new dimension, fine-grained pruning patterns inside the coarse-grained structures, revealing a previously unknown point in the design space.
With the higher accuracy enabled by fine-grained pruning patterns, the unique insight is to use the compiler to regain and guarantee high hardware efficiency. (A pattern-pruning sketch follows the entry.)
arXiv Detail & Related papers (2020-01-01T04:52:07Z)
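To make the "fine-grained patterns inside coarse-grained structures" idea concrete, here is a hypothetical sketch of pattern-based pruning: every 3x3 convolution kernel keeps only the entries of its best-matching pattern from a small, fixed pattern set, which is the regularity a compiler can then exploit. The pattern set itself is illustrative, not PatDNN's.

```python
# Hypothetical pattern-based pruning of 3x3 convolution kernels.
import torch

PATTERNS = torch.tensor([  # tiny illustrative pattern set; 1 = keep entry
    [[0, 1, 0], [1, 1, 1], [0, 1, 0]],
    [[1, 1, 0], [1, 1, 0], [0, 0, 1]],
    [[0, 0, 1], [0, 1, 1], [1, 1, 0]],
], dtype=torch.float32)

def pattern_prune(weight: torch.Tensor) -> torch.Tensor:
    """weight: (out_ch, in_ch, 3, 3). Pick, per kernel, the pattern that
    preserves the most weight magnitude, then apply it as a mask."""
    o, i = weight.shape[:2]
    kernels = weight.reshape(o * i, 3, 3)
    # Score each (kernel, pattern) pair by the retained absolute weight.
    scores = torch.einsum('kxy,pxy->kp', kernels.abs(), PATTERNS)
    best = scores.argmax(dim=1)          # best pattern index per kernel
    masks = PATTERNS[best]               # (o*i, 3, 3)
    return (kernels * masks).reshape(o, i, 3, 3)

pruned = pattern_prune(torch.randn(16, 8, 3, 3))
```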
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.