A Data-Centric Optimization Framework for Machine Learning
- URL: http://arxiv.org/abs/2110.10802v1
- Date: Wed, 20 Oct 2021 22:07:40 GMT
- Title: A Data-Centric Optimization Framework for Machine Learning
- Authors: Oliver Rausch, Tal Ben-Nun, Nikoli Dryden, Andrei Ivanov, Shigang Li,
Torsten Hoefler
- Abstract summary: We empower deep learning researchers by defining a flexible and user-customizable pipeline for training arbitrary deep neural networks.
The pipeline begins with standard networks in PyTorch or ONNX and transforms them through progressive lowering.
We demonstrate competitive performance or speedups on ten different networks, with interactive optimizations discovering new opportunities in EfficientNet.
- Score: 9.57755812904772
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rapid progress in deep learning is leading to a diverse set of quickly
changing models, with a dramatically growing demand for compute. However, as
frameworks specialize optimization to patterns in popular networks, they
implicitly constrain novel and diverse models that drive progress in research.
We empower deep learning researchers by defining a flexible and
user-customizable pipeline for optimizing training of arbitrary deep neural
networks, based on data movement minimization. The pipeline begins with
standard networks in PyTorch or ONNX and transforms computation through
progressive lowering. We define four levels of general-purpose transformations,
from local intra-operator optimizations to global data movement reduction.
These operate on a data-centric graph intermediate representation that
expresses computation and data movement at all levels of abstraction, including
expanding basic operators such as convolutions to their underlying
computations. Central to the design is the interactive and introspectable
nature of the pipeline. Every part is extensible through a Python API, and can
be tuned interactively using a GUI. We demonstrate competitive performance or
speedups on ten different networks, with interactive optimizations discovering
new opportunities in EfficientNet.
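To make the entry point of such a pipeline concrete, the sketch below exports a small, unmodified PyTorch model to ONNX (one of the two standard input formats named in the abstract) and then outlines the progressive-lowering steps as commented pseudocode. The `framework.*` calls are hypothetical placeholders for the paper's transformation pipeline, not its actual API; only the `torch` and `torch.onnx` calls are real.

```python
import torch
import torch.nn as nn

# A small standard network; the pipeline accepts unmodified PyTorch models.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 32 * 32, 10),
)

# Export to ONNX, one of the two standard entry formats named in the abstract.
dummy_input = torch.randn(1, 3, 32, 32)
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=13)

# Hypothetical sketch of the progressive lowering described in the abstract
# (four levels of transformations); the names below are illustrative only.
# sdfg = framework.load_onnx("model.onnx")          # build the data-centric IR
# for level in ["intra-operator", "local fusion",
#               "inter-operator", "global data movement"]:
#     sdfg = framework.apply_transformations(sdfg, level=level)
# compiled = sdfg.compile()                          # generate optimized training code
```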
Related papers
- PDSketch: Integrated Planning Domain Programming and Learning [86.07442931141637]
We present a new domain definition language, named PDSketch.
It allows users to flexibly define high-level structures in the transition models.
Details of the transition model will be filled in by trainable neural networks.
arXiv Detail & Related papers (2023-03-09T18:54:12Z)
- Geometric Deep Learning for Autonomous Driving: Unlocking the Power of Graph Neural Networks With CommonRoad-Geometric [6.638385593789309]
Heterogeneous graphs offer powerful data representations for traffic, given their ability to model complex interaction effects among traffic participants.
With the advent of graph neural networks (GNNs) as the accompanying deep learning framework, the graph structure can be efficiently leveraged for various machine learning applications.
Our proposed Python framework offers an easy-to-use and fully customizable data processing pipeline to extract standardized graph datasets from traffic scenarios.
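As an illustration of the kind of heterogeneous traffic graph such a pipeline produces, the sketch below builds a toy vehicle/lanelet graph with PyTorch Geometric's HeteroData container. The node and edge type names are invented for illustration and are not the CommonRoad-Geometric schema.

```python
import torch
from torch_geometric.data import HeteroData  # heterogeneous graph container from PyTorch Geometric

# Toy heterogeneous traffic graph; node/edge types are illustrative only.
data = HeteroData()

# Two vehicles with (x, y, speed) features, three lanelets with (length, curvature).
data["vehicle"].x = torch.tensor([[0.0, 0.0, 12.5],
                                  [5.0, 1.2, 10.0]])
data["lanelet"].x = torch.tensor([[30.0, 0.00],
                                  [25.0, 0.02],
                                  [40.0, 0.00]])

# Interaction edges between vehicles, and "on" edges from vehicles to lanelets.
data["vehicle", "interacts", "vehicle"].edge_index = torch.tensor([[0, 1], [1, 0]])
data["vehicle", "on", "lanelet"].edge_index = torch.tensor([[0, 1], [0, 1]])

print(data)
```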
arXiv Detail & Related papers (2023-02-02T17:45:02Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
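A minimal example of the kind of model snnTorch expresses is sketched below, using its leaky integrate-and-fire neuron; the layer sizes and the decay rate are arbitrary, and the IPU-specific release is not assumed here.

```python
import torch
import torch.nn as nn
import snntorch as snn

# Tiny spiking stack: a linear synapse followed by a leaky integrate-and-fire neuron.
fc = nn.Linear(784, 100)
lif = snn.Leaky(beta=0.9)   # beta is the membrane decay rate (arbitrary here)

mem = lif.init_leaky()      # initialize the membrane potential
x = torch.rand(1, 784)      # one input sample

# Simulate a few time steps; spikes are emitted when the membrane crosses threshold.
for step in range(5):
    cur = fc(x)
    spk, mem = lif(cur, mem)
    print(f"step {step}: spikes emitted = {int(spk.sum())}")
```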
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- EvoPruneDeepTL: An Evolutionary Pruning Model for Transfer Learning based Deep Neural Networks [15.29595828816055]
We propose an evolutionary pruning model for Transfer Learning based Deep Neural Networks.
EvoPruneDeepTL replaces the last fully-connected layers with sparse layers optimized by a genetic algorithm.
Results show the contribution of EvoPruneDeepTL and feature selection to the overall computational efficiency of the network.
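A minimal sketch of the underlying idea follows, assuming a binary mask over extracted features that a simple genetic algorithm evolves; it illustrates the mechanism of evolving sparse connections, not the actual EvoPruneDeepTL implementation, and the fitness function here is a toy stand-in for validation accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "extracted features", standing in for a frozen backbone's output.
n_samples, n_features = 200, 64
X = rng.normal(size=(n_samples, n_features))
y = (X[:, :4].sum(axis=1) > 0).astype(int)   # only the first 4 features are informative

def fitness(mask):
    """Score a binary mask: accuracy of a nearest-centroid classifier on the
    selected features, minus a small penalty for keeping many features."""
    if mask.sum() == 0:
        return 0.0
    Xm = X[:, mask.astype(bool)]
    c0, c1 = Xm[y == 0].mean(axis=0), Xm[y == 1].mean(axis=0)
    pred = (np.linalg.norm(Xm - c1, axis=1) < np.linalg.norm(Xm - c0, axis=1)).astype(int)
    return (pred == y).mean() - 0.001 * mask.sum()

# Simple genetic algorithm over binary masks (truncation selection).
pop = rng.integers(0, 2, size=(30, n_features))
for gen in range(40):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]                 # keep the 10 best individuals
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, n_features)                   # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_features) < 0.02                # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.array(children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("kept features:", int(best.sum()), "of", n_features)
```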
arXiv Detail & Related papers (2022-02-08T13:07:55Z)
- Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
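A common ingredient in such binarization schemes is a sign-based weight quantizer with a straight-through estimator for the backward pass. The sketch below shows that ingredient in plain PyTorch; it is a generic illustration, not the specific binarization strategy evaluated in the paper.

```python
import torch
import torch.nn as nn

class BinarizeSTE(torch.autograd.Function):
    """Sign-binarize in the forward pass; pass gradients straight through
    (zeroed outside the [-1, 1] range) in the backward pass."""
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_output):
        (w,) = ctx.saved_tensors
        return grad_output * (w.abs() <= 1).float()

class BinaryLinear(nn.Linear):
    """Linear layer whose weights are binarized to {-1, +1} at every forward pass."""
    def forward(self, x):
        return nn.functional.linear(x, BinarizeSTE.apply(self.weight), self.bias)

# In a GNN, layers like this would replace the dense message/update transforms.
layer = BinaryLinear(16, 8)
out = layer(torch.randn(4, 16))
out.sum().backward()          # gradients flow to the real-valued latent weights
print(layer.weight.grad.shape)
```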
arXiv Detail & Related papers (2020-12-31T18:48:58Z)
- Woodpecker-DL: Accelerating Deep Neural Networks via Hardware-Aware Multifaceted Optimizations [15.659251804042748]
Woodpecker-DL (WPK) is a hardware-aware deep learning framework.
WPK uses graph optimization, automated search, a domain-specific language (DSL), and system-level exploration to accelerate inference.
We show that on a P100 GPU, WPK achieves speedups of 5.40x over cuDNN and 1.63x over TVM on individual operators, and runs up to 1.18x faster than TensorRT for end-to-end model inference.
arXiv Detail & Related papers (2020-08-11T07:50:34Z)
- Optimizing Memory Placement using Evolutionary Graph Reinforcement Learning [56.83172249278467]
We introduce Evolutionary Graph Reinforcement Learning (EGRL), a method designed for large search spaces.
We train and validate our approach directly on the Intel NNP-I chip for inference.
We additionally achieve 28-78% speed-up compared to the native NNP-I compiler on all three workloads.
arXiv Detail & Related papers (2020-07-14T18:50:12Z)
- Dynamic Hierarchical Mimicking Towards Consistent Optimization Objectives [73.15276998621582]
We propose a generic feature learning mechanism to advance CNN training with enhanced generalization ability.
Partially inspired by deeply-supervised networks (DSN), we fork delicately designed side branches from the intermediate layers of a given neural network.
Experiments on both category and instance recognition tasks demonstrate the substantial improvements of our proposed method.
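The basic mechanism (auxiliary classifier branches forked from intermediate layers, each contributing its own loss) can be sketched as below; the architecture and loss weighting are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class NetWithSideBranch(nn.Module):
    """Backbone CNN with one auxiliary classifier forked from an intermediate layer."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                    nn.MaxPool2d(2))
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                                    nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(32, num_classes)
        # Side branch supervises the intermediate representation directly.
        self.side_branch = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                         nn.Linear(16, num_classes))

    def forward(self, x):
        feat = self.stage1(x)
        side_logits = self.side_branch(feat)
        main_logits = self.head(self.stage2(feat).flatten(1))
        return main_logits, side_logits

model = NetWithSideBranch()
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
main_logits, side_logits = model(x)
criterion = nn.CrossEntropyLoss()
# Main loss plus a down-weighted auxiliary loss from the side branch.
loss = criterion(main_logits, y) + 0.3 * criterion(side_logits, y)
loss.backward()
```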
arXiv Detail & Related papers (2020-03-24T09:56:13Z)
- Large-Scale Gradient-Free Deep Learning with Recursive Local Representation Alignment [84.57874289554839]
Training deep neural networks on large-scale datasets requires significant hardware resources.
Backpropagation, the workhorse for training these networks, is an inherently sequential process that is difficult to parallelize.
We propose a neuro-biologically-plausible alternative to backprop that can be used to train deep networks.
arXiv Detail & Related papers (2020-02-10T16:20:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.