AxOMaP: Designing FPGA-based Approximate Arithmetic Operators using
Mathematical Programming
- URL: http://arxiv.org/abs/2309.13445v1
- Date: Sat, 23 Sep 2023 18:23:54 GMT
- Title: AxOMaP: Designing FPGA-based Approximate Arithmetic Operators using
Mathematical Programming
- Authors: Siva Satyendra Sahoo and Salim Ullah and Akash Kumar
- Abstract summary: We propose a data analysis-driven mathematical programming-based approach to synthesizing approximate operators for FPGAs.
Specifically, we formulate mixed integer quadratically constrained programs based on the results of correlation analysis of the characterization data.
Compared to traditional evolutionary algorithm-based optimization, we report up to 21% improvement in hypervolume for the joint optimization of PPA and BEHAV.
- Score: 2.898055875927704
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: With the increasing application of machine learning (ML) algorithms in
embedded systems, there is a rising necessity to design low-cost computer
arithmetic for these resource-constrained systems. As a result, emerging models
of computation, such as approximate and stochastic computing, that leverage the
inherent error-resilience of such algorithms are being actively explored for
implementing ML inference on resource-constrained systems. Approximate
computing (AxC) aims to provide disproportionate gains in the power,
performance, and area (PPA) of an application by allowing some level of
reduction in its behavioral accuracy (BEHAV). Using approximate operators
(AxOs) for computer arithmetic forms one of the more prevalent methods of
implementing AxC. AxOs provide the additional scope for finer granularity of
optimization, compared to only precision scaling of computer arithmetic. To
this end, designing platform-specific and cost-efficient approximate operators
forms an important research goal. Recently, multiple works have reported using
AI/ML-based approaches for synthesizing novel FPGA-based AxOs. However, most of
such works limit the use of AI/ML to designing ML-based surrogate functions used
during iterative optimization processes. In contrast, we propose a novel data
analysis-driven mathematical programming-based approach to synthesizing
approximate operators for FPGAs. Specifically, we formulate mixed integer
quadratically constrained programs based on the results of correlation analysis
of the characterization data and use the solutions to enable a more directed
search approach for evolutionary optimization algorithms. Compared to
traditional evolutionary algorithms-based optimization, we report up to 21%
improvement in the hypervolume, for joint optimization of PPA and BEHAV, in the
design of signed 8-bit multipliers.
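The reported gains are measured by the hypervolume indicator, which quantifies how much of the objective space a Pareto front of PPA and BEHAV trade-offs dominates relative to a reference point. As a minimal sketch (the function name and reference-point convention are assumptions for illustration, not taken from the paper), the two-objective minimization case can be computed as:

```python
def hypervolume_2d(points, ref):
    """Hypervolume (dominated area) of a 2-objective minimization front.

    points: iterable of (f1, f2) objective pairs (e.g., PPA cost, BEHAV error).
    ref:    reference point that every counted solution must dominate.
    """
    # Keep only points that strictly dominate the reference point,
    # sorted by the first objective.
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:  # points with f2 >= prev_f2 are dominated; skip them
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```

A front that dominates more of the objective space relative to `ref` yields a larger hypervolume; the 21% improvement above refers to gains in this indicator.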
Related papers
- Machine Learning Insides OptVerse AI Solver: Design Principles and
Applications [74.67495900436728]
We present a comprehensive study on the integration of machine learning (ML) techniques into Huawei Cloud's OptVerse AI solver.
We showcase our methods for generating complex SAT and MILP instances using generative models that mirror the multifaceted structures of real-world problems.
We detail the incorporation of state-of-the-art parameter tuning algorithms which markedly elevate solver performance.
arXiv Detail & Related papers (2024-01-11T15:02:15Z)
- Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes algorithms for distributed federated learning.
arXiv Detail & Related papers (2023-10-04T01:47:37Z)
- AxOCS: Scaling FPGA-based Approximate Operators using Configuration
Supersampling [2.578571429830403]
We propose AxOCS, a methodology for designing approximate arithmetic operators through ML-based supersampling.
We present a method to leverage the correlation of PPA and BEHAV metrics across operators of varying bit-widths for generating larger bit-width operators.
The experimental evaluation of AxOCS for FPGA-optimized approximate operators shows that the proposed approach significantly improves the hypervolume of the resulting PPA and BEHAV trade-off.
arXiv Detail & Related papers (2023-09-22T12:36:40Z)
- Efficient Model-Free Exploration in Low-Rank MDPs [76.87340323826945]
Low-Rank Markov Decision Processes offer a simple, yet expressive framework for RL with function approximation.
Existing algorithms are either (1) computationally intractable, or (2) reliant upon restrictive statistical assumptions.
We propose the first provably sample-efficient algorithm for exploration in Low-Rank MDPs.
arXiv Detail & Related papers (2023-07-08T15:41:48Z)
- Algorithmic Foundations of Empirical X-risk Minimization [51.58884973792057]
This manuscript introduces a new optimization framework for machine learning and AI, named empirical X-risk minimization (EXM).
X-risk is a term introduced to represent a family of compositional measures or objectives.
arXiv Detail & Related papers (2022-06-01T12:22:56Z)
- Hyperparameter optimization of data-driven AI models on HPC systems [0.0]
This work is part of RAISE's work on data-driven use cases, which leverages cross-methods from AI and HPC.
It is shown that in the case of Machine-Learned Particle reconstruction in High Energy Physics, the ASHA algorithm in combination with Bayesian optimization gives the largest performance increase per compute resources spent out of the investigated algorithms.
arXiv Detail & Related papers (2022-03-02T14:02:59Z)
- Evolutionary Algorithms in Approximate Computing: A Survey [0.0]
This paper deals with evolutionary approximation as one of the popular approximation methods.
The paper provides the first survey of evolutionary approximation (EA)-based approaches applied in the context of approximate computing.
arXiv Detail & Related papers (2021-08-16T10:17:26Z)
- Dual Optimization for Kolmogorov Model Learning Using Enhanced Gradient
Descent [8.714458129632158]
Kolmogorov model (KM) is an interpretable and predictable representation approach to learning the underlying probabilistic structure of a set of random variables.
We propose a computationally scalable KM learning algorithm, based on the regularized dual optimization combined with enhanced gradient descent (GD) method.
It is shown that the accuracy of logical relation mining for interpretability using the proposed KM learning algorithm exceeds 80%.
arXiv Detail & Related papers (2021-07-11T10:33:02Z)
- Parallel Scheduling Self-attention Mechanism: Generalization and
Optimization [0.76146285961466]
We propose a general scheduling algorithm, which is derived from the optimal scheduling for small instances solved by a satisfiability-checking (SAT) solver.
Strategies for further optimization on skipping redundant computations are put forward as well, with which reductions of almost 25% and 50% of the original computations are respectively achieved.
The proposed algorithms are applicable regardless of problem size, as long as the number of input vectors is divisible by the number of computing units available in the architecture.
arXiv Detail & Related papers (2020-12-02T12:04:16Z)
- Iterative Algorithm Induced Deep-Unfolding Neural Networks: Precoding
Design for Multiuser MIMO Systems [59.804810122136345]
We propose a framework for deep-unfolding, where a general form of iterative algorithm induced deep-unfolding neural network (IAIDNN) is developed.
An efficient IAIDNN based on the structure of the classic weighted minimum mean-square error (WMMSE) iterative algorithm is developed.
We show that the proposed IAIDNN efficiently achieves the performance of the iterative WMMSE algorithm with reduced computational complexity.
arXiv Detail & Related papers (2020-06-15T02:57:57Z)
- Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation with Gaussian processes trained on few data points.
The approach also leads to significantly smaller and computationally cheaper sub-solvers for lower bounding.
In total, the proposed method reduces the time to convergence by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.