A Framework to Enable Algorithmic Design Choice Exploration in DNNs
- URL: http://arxiv.org/abs/2410.08300v1
- Date: Thu, 10 Oct 2024 18:41:56 GMT
- Title: A Framework to Enable Algorithmic Design Choice Exploration in DNNs
- Authors: Timothy L. Cronin IV, Sanmukh Kuppannagari
- Abstract summary: We introduce an open source framework which provides easy-to-use, fine-grained algorithmic control for deep neural networks (DNNs).
The framework enables users to implement and select their own algorithms to be utilized by the DNN.
The framework incurs no additional performance overhead, meaning that performance depends solely on the algorithms chosen by the user.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning technologies, particularly deep neural networks (DNNs), have demonstrated significant success across many domains. This success has been accompanied by substantial advancements and innovations in the algorithms behind the operations required by DNNs. These enhanced algorithms hold the potential to greatly increase the performance of DNNs. However, discovering the best-performing algorithm for a DNN and altering the DNN to use such an algorithm is a difficult and time-consuming task. To address this, we introduce an open source framework which provides easy-to-use, fine-grained algorithmic control for DNNs, enabling algorithmic exploration and selection. Along with built-in high performance implementations of common deep learning operations, the framework enables users to implement and select their own algorithms to be utilized by the DNN. The framework's built-in accelerated implementations are shown to yield outputs equivalent to, and exhibit performance similar to, implementations in PyTorch, a popular DNN framework. Moreover, the framework incurs no additional performance overhead, meaning that performance depends solely on the algorithms chosen by the user.
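The abstract does not expose the framework's interface, but the idea of fine-grained, per-operation algorithm selection can be illustrated with a small, hypothetical registry. Every name below (`ALGORITHMS`, `register`, the `algo` argument) is an illustrative assumption, not the framework's actual API:

```python
import torch
import torch.nn.functional as F

# Hypothetical registry mapping (operation, algorithm name) -> implementation.
# This sketches the idea of user-selectable algorithms; it is NOT the paper's API.
ALGORITHMS = {}

def register(op, name):
    def wrap(fn):
        ALGORITHMS[(op, name)] = fn
        return fn
    return wrap

@register("conv2d", "direct")
def conv2d_direct(x, w):
    # Built-in baseline: defer to PyTorch's convolution.
    return F.conv2d(x, w)

@register("conv2d", "custom")
def conv2d_custom(x, w):
    # A user-supplied alternative (e.g., an FFT- or Winograd-based variant)
    # would go here; this sketch just reuses the direct method.
    return F.conv2d(x, w)

def conv2d(x, w, algo="direct"):
    # Per-call algorithm selection: the user decides which implementation runs.
    return ALGORITHMS[("conv2d", algo)](x, w)

x = torch.randn(1, 3, 32, 32)
w = torch.randn(8, 3, 3, 3)
assert torch.allclose(conv2d(x, w, "direct"), conv2d(x, w, "custom"))
```

In this style, trying a different algorithm for one operation is a one-argument change, which is the kind of low-friction exploration the abstract describes.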
Related papers
- Training-free Conversion of Pretrained ANNs to SNNs for Low-Power and High-Performance Applications [23.502136316777058]
Spiking Neural Networks (SNNs) have emerged as a promising substitute for Artificial Neural Networks (ANNs).
Existing supervised learning algorithms for SNNs require significantly more memory and time than their ANN counterparts.
Our approach directly converts pre-trained ANN models into high-performance SNNs without additional training.
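The summary does not detail the conversion procedure; as background, the classic rate-coding view of ANN-to-SNN conversion, in which an integrate-and-fire neuron's firing rate approximates a ReLU activation, can be sketched as follows (a generic illustration, not this paper's specific training-free method):

```python
def if_neuron_rate(z, threshold=1.0, steps=100):
    """Simulate an integrate-and-fire neuron driven by a constant input z
    and return its firing rate, which approximates ReLU(z)."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += z                 # integrate the input current
        if v >= threshold:     # fire, then reset by subtraction
            spikes += 1
            v -= threshold
    return spikes / steps

# Rate coding recovers the source ANN's ReLU activation: rate ≈ max(z, 0)
for z in [-0.5, 0.0, 0.3, 0.7]:
    print(f"z={z:+.1f}  rate={if_neuron_rate(z):.2f}  relu={max(z, 0.0):.2f}")
```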
arXiv Detail & Related papers (2024-09-05T09:14:44Z)
- DNN Partitioning, Task Offloading, and Resource Allocation in Dynamic Vehicular Networks: A Lyapunov-Guided Diffusion-Based Reinforcement Learning Approach [49.56404236394601]
We formulate the problem of joint DNN partitioning, task offloading, and resource allocation in Vehicular Edge Computing.
Our objective is to minimize the DNN-based task completion time while guaranteeing the system stability over time.
We propose a Multi-Agent Diffusion-based Deep Reinforcement Learning (MAD2RL) algorithm, incorporating the innovative use of diffusion models.
arXiv Detail & Related papers (2024-06-11T06:31:03Z)
- Open the box of digital neuromorphic processor: Towards effective algorithm-hardware co-design [0.08431877864777441]
We present a practical approach to enable algorithm designers to accurately benchmark SNN algorithms.
We show the energy efficiency of SNN algorithms for video processing and online learning.
arXiv Detail & Related papers (2023-03-27T14:03:11Z)
- Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together! [100.19080749267316]
"Sparsity May Cry" Benchmark (SMC-Bench) is a collection of carefully-curated 4 diverse tasks with 10 datasets.
SMC-Bench is designed to favor and encourage the development of more scalable and generalizable sparse algorithms.
arXiv Detail & Related papers (2023-03-03T18:47:21Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, defined by minimizing the population loss, that are more suitable for active learning than the metric used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- Recurrent Bilinear Optimization for Binary Neural Networks [58.972212365275595]
Existing BNNs neglect the intrinsic bilinear relationship between real-valued weights and scale factors.
Our work is the first attempt to optimize BNNs from the bilinear perspective.
We obtain robust RBONNs, which show impressive performance over state-of-the-art BNNs on various models and datasets.
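As background on that bilinear relationship: in standard BNN formulations (XNOR-Net style), a real-valued weight tensor `w` is approximated as `alpha * sign(w)`, so the binarized weights and the scale factor `alpha` are coupled bilinearly. A minimal sketch of this baseline, not of RBONN's recurrent optimization itself:

```python
import torch

def binarize_with_scale(w):
    """XNOR-Net-style binarization: approximate w ≈ alpha * sign(w), where
    alpha = mean(|w|) per output channel is the closed-form minimizer of
    ||w - alpha * b||^2 for b = sign(w)."""
    b = torch.sign(w)
    alpha = w.abs().mean(dim=(1, 2, 3), keepdim=True)  # per-output-channel scale
    return alpha * b

w = torch.randn(8, 3, 3, 3)                        # conv weight: out, in, kH, kW
print((w - binarize_with_scale(w)).pow(2).mean())  # approximation error
```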
arXiv Detail & Related papers (2022-09-04T06:45:33Z)
- CoSA: Scheduling by Constrained Optimization for Spatial Accelerators [1.9149970150912705]
We present CoSA, a constrained-optimization-based approach to scheduling Deep Neural Network (DNN) accelerators.
As opposed to existing approaches that rely on designers' heuristics or iterative methods to navigate the search space, CoSA expresses scheduling decisions as a constrained-optimization problem.
We demonstrate that CoSA-generated schedules significantly outperform state-of-the-art approaches by a geometric mean of up to 2.5x.
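As a toy illustration of casting a scheduling decision as constrained optimization, the sketch below picks a tile size under a buffer-capacity constraint using the off-the-shelf `pulp` MIP solver; the candidate tiles, latency costs, and capacity are invented numbers, and CoSA's real formulation over spatial-accelerator schedules is far richer:

```python
import pulp

# Choose one tile size, minimizing a precomputed latency proxy, subject to
# the tile fitting in the on-chip buffer (all numbers are hypothetical).
tiles = [4, 8, 16, 32]
cost = {4: 10.0, 8: 6.0, 16: 4.0, 32: 3.0}    # latency proxy per tile size
footprint = {t: t * t * 4 for t in tiles}      # tile bytes (fp32 elements)
BUFFER_BYTES = 1024

prob = pulp.LpProblem("tile_choice", pulp.LpMinimize)
pick = pulp.LpVariable.dicts("pick", tiles, cat="Binary")

prob += pulp.lpSum(cost[t] * pick[t] for t in tiles)              # objective
prob += pulp.lpSum(pick[t] for t in tiles) == 1                   # pick exactly one
prob += pulp.lpSum(footprint[t] * pick[t] for t in tiles) <= BUFFER_BYTES

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print("chosen tile:", [t for t in tiles if pick[t].value() == 1])  # -> [16]
```

The solver returns the cheapest tile that satisfies the capacity constraint, with no hand-tuned heuristic or iterative search.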
arXiv Detail & Related papers (2021-05-05T07:17:25Z)
- Designing Interpretable Approximations to Deep Reinforcement Learning [14.007731268271902]
Deep neural networks (DNNs) set the bar for algorithm performance.
It may not be feasible to actually use such high-performing DNNs in practice.
This work seeks to identify reduced models that not only preserve a desired performance level, but also, for example, succinctly explain the latent knowledge represented by a DNN.
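One generic route to such reduced models is to distill the DNN's input-output behavior into an interpretable surrogate, for example a shallow decision tree fit to the network's decisions. The sketch below shows that general idea; the stand-in policy and the sklearn-based distillation are illustrative assumptions, not this paper's procedure:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def dnn_policy(X):
    # Stand-in for a trained DNN policy: a simple rule on 2-D inputs.
    return (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Distill the black-box policy into a shallow, human-readable tree.
X = np.random.default_rng(0).uniform(-1, 1, size=(5000, 2))
y = dnn_policy(X)                      # labels come from the DNN, not from data
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print("fidelity:", tree.score(X, y))   # fraction of inputs where tree == DNN
print(export_text(tree, feature_names=["x0", "x1"]))
```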
arXiv Detail & Related papers (2020-10-28T06:33:09Z)
- Learning to Execute Programs with Instruction Pointer Attention Graph Neural Networks [55.98291376393561]
Graph neural networks (GNNs) have emerged as a powerful tool for learning software engineering tasks.
Recurrent neural networks (RNNs) are well-suited to long sequential chains of reasoning, but they do not naturally incorporate program structure.
We introduce a novel GNN architecture, the Instruction Pointer Attention Graph Neural Networks (IPA-GNN), which improves systematic generalization on the task of learning to execute programs.
arXiv Detail & Related papers (2020-10-23T19:12:30Z)
- Towards an Efficient and General Framework of Robust Training for Graph Neural Networks [96.93500886136532]
Graph Neural Networks (GNNs) have made significant advances on several fundamental inference tasks.
Despite GNNs' impressive performance, it has been observed that carefully crafted perturbations on graph structures lead them to make wrong predictions.
We propose a general framework that leverages greedy search algorithms and zeroth-order methods to obtain robust GNNs.
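The core trick behind zeroth-order methods, estimating a gradient from loss evaluations alone when the objective is not differentiable (for example over discrete graph structures), can be sketched with a standard two-point estimator; this is a generic illustration, not the paper's exact estimator:

```python
import numpy as np

def zeroth_order_grad(f, x, mu=1e-3, n_samples=100, seed=0):
    """Two-point zeroth-order gradient estimate of f at x:
    average of (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u over random u."""
    rng = np.random.default_rng(seed)
    g = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / n_samples

f = lambda x: np.sum(x ** 2)                        # toy loss, true gradient 2x
print(zeroth_order_grad(f, np.array([1.0, -2.0])))  # ≈ [2, -4]
```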
arXiv Detail & Related papers (2020-02-25T15:17:58Z)
- A Supervised Learning Algorithm for Multilayer Spiking Neural Networks Based on Temporal Coding Toward Energy-Efficient VLSI Processor Design [2.6872737601772956]
Spiking neural networks (SNNs) are brain-inspired mathematical models with the ability to process information in the form of spikes.
We propose a novel supervised learning algorithm for SNNs based on temporal coding.
arXiv Detail & Related papers (2020-01-08T03:37:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.