HENNC: Hardware Engine for Artificial Neural Network-based Chaotic Oscillators
- URL: http://arxiv.org/abs/2407.19165v1
- Date: Sat, 27 Jul 2024 04:17:38 GMT
- Title: HENNC: Hardware Engine for Artificial Neural Network-based Chaotic Oscillators
- Authors: Mobin Vaziri, Shervin Vakili, M. Mehdi Rahimifar, J. M. Pierre Langlois
- Abstract summary: The framework trains a model to approximate a chaotic system, then performs design space exploration yielding potential hardware architectures.
The framework then generates the corresponding synthesizable High-Level Synthesis code and a validation testbench from a selected solution.
The proposed framework offers a rapid hardware design process and yields candidate architectures that outperform manually designed implementations in hardware cost and throughput.
- Score: 0.26999000177990923
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This letter introduces a framework for the automatic generation of hardware cores for Artificial Neural Network (ANN)-based chaotic oscillators. The framework trains a model to approximate a chaotic system, then performs design space exploration yielding potential hardware architectures for its implementation. The framework then generates the corresponding synthesizable High-Level Synthesis code and a validation testbench from a selected solution. The hardware design primarily targets FPGAs. The proposed framework offers a rapid hardware design process and yields candidate architectures that outperform manually designed implementations in hardware cost and throughput. The source code is available on GitHub.
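The abstract does not specify the network architecture, the chaotic system, or any training details. As a minimal, self-contained illustration of the core idea (training a small ANN to approximate a chaotic map, then iterating the trained model as a free-running oscillator), the sketch below fits a tiny 1-8-1 tanh MLP to the logistic map x' = 4x(1-x) in pure Python. All hyperparameters are illustrative assumptions, not taken from the paper.

```python
import math, random

random.seed(0)

# Tiny 1-8-1 MLP; stands in for the unspecified ANN in the paper.
H = 8
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [random.uniform(-1, 1) for _ in range(H)]
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def logistic(x):
    """Chaotic reference system: the fully chaotic logistic map."""
    return 4.0 * x * (1.0 - x)

def forward(x):
    """Network output and hidden activations for scalar input x."""
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return sum(w2[j] * h[j] for j in range(H)) + b2, h

xs = [i / 149 for i in range(150)]
ys = [logistic(x) for x in xs]

lr, losses = 0.2, []
for epoch in range(1200):
    # Full-batch gradient descent on mean squared error.
    gw1 = [0.0] * H; gb1 = [0.0] * H; gw2 = [0.0] * H; gb2 = 0.0
    loss = 0.0
    for x, y in zip(xs, ys):
        out, h = forward(x)
        err = out - y
        loss += err * err
        gb2 += 2 * err
        for j in range(H):
            gw2[j] += 2 * err * h[j]
            dh = 2 * err * w2[j] * (1 - h[j] * h[j])  # tanh' = 1 - tanh^2
            gw1[j] += dh * x
            gb1[j] += dh
    n = len(xs)
    for j in range(H):
        w1[j] -= lr * gw1[j] / n
        b1[j] -= lr * gb1[j] / n
        w2[j] -= lr * gw2[j] / n
    b2 -= lr * gb2 / n
    losses.append(loss / n)

# Iterate the trained network on its own output: a free-running oscillator.
x, traj = 0.4, []
for _ in range(10):
    x, _ = forward(x)
    traj.append(x)
```

A hardware flow like the one described would then quantize such a trained model and explore parallel/pipelined implementations of the multiply-accumulate and tanh stages.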
Related papers
- NLS: Natural-Level Synthesis for Hardware Implementation Through GenAI [41.03569272854125]
This paper introduces Natural-Level Synthesis, an innovative approach for generating hardware with generative artificial intelligence at both the system level and the component level.
With NLS, engineers can participate more deeply in the development, synthesis, and test stages by using Gen-AI models to convert natural language descriptions directly into Hardware Description Language code.
We developed the NLS tool to facilitate natural language-driven HDL synthesis, enabling rapid generation of system-level HDL designs while significantly reducing development complexity.
arXiv Detail & Related papers (2025-03-28T15:46:01Z)
- A Realistic Simulation Framework for Analog/Digital Neuromorphic Architectures [73.65190161312555]
ARCANA is a spiking neural network simulator designed to account for the properties of mixed-signal neuromorphic circuits.
We show how the results obtained provide a reliable estimate of the behavior of the spiking neural network trained in software.
arXiv Detail & Related papers (2024-09-23T11:16:46Z)
- Quasar-ViT: Hardware-Oriented Quantization-Aware Architecture Search for Vision Transformers [56.37495946212932]
Vision transformers (ViTs) have demonstrated superior accuracy on computer vision tasks compared to convolutional neural networks (CNNs).
This work proposes Quasar-ViT, a hardware-oriented quantization-aware architecture search framework for ViTs.
arXiv Detail & Related papers (2024-07-25T16:35:46Z)
- SynthAI: A Multi Agent Generative AI Framework for Automated Modular HLS Design Generation [0.0]
We introduce SynthAI, a new method for the automated creation of High-Level Synthesis (HLS) designs.
SynthAI integrates ReAct agents, Chain-of-Thought (CoT) prompting, web search technologies, and the Retrieval-Augmented Generation framework.
arXiv Detail & Related papers (2024-05-25T05:45:55Z)
- AutoHLS: Learning to Accelerate Design Space Exploration for HLS Designs [10.690389829735661]
This paper proposes a novel framework called AutoHLS, which integrates a deep neural network (DNN) with Bayesian optimization (BO) to accelerate HLS hardware design optimization.
Our experimental results demonstrate up to a 70-fold speedup in exploration time.
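The summary names the ingredients (a learned surrogate plus Bayesian optimization over HLS knobs) but not the knob space or models. The sketch below shows the general shape of surrogate-guided design space exploration with a hypothetical knob space and cost model; a simple distance-weighted interpolator stands in for AutoHLS's DNN and Bayesian optimizer, and `synthesize` stands in for a real HLS run.

```python
import itertools, math, random

random.seed(1)

# Hypothetical HLS knob space (the actual AutoHLS knobs are not given here).
space = list(itertools.product([1, 2, 4, 8, 16],  # loop unroll factor
                               [1, 2, 4],         # array partition factor
                               [0, 1]))           # pipelining off/on

def synthesize(cfg):
    """Stand-in cost model for a real (slow) HLS synthesis run."""
    unroll, part, pipe = cfg
    latency = 1000.0 / (unroll * (2 if pipe else 1))
    area = 50 * unroll + 30 * part + (40 if pipe else 0)
    return latency + 0.5 * area

def surrogate(cfg, evaluated):
    """Distance-weighted average of observed costs; a toy stand-in for
    the paper's DNN surrogate + Bayesian acquisition."""
    num = den = 0.0
    for c, cost in evaluated.items():
        d = math.dist(c, cfg) + 1e-9
        num += cost / d
        den += 1.0 / d
    return num / den

# Seed with a few random synthesis runs, then let the surrogate pick.
evaluated = {cfg: synthesize(cfg) for cfg in random.sample(space, 4)}
for _ in range(6):  # 10 synthesis runs total instead of 30 exhaustive ones
    guess = min((c for c in space if c not in evaluated),
                key=lambda c: surrogate(c, evaluated))
    evaluated[guess] = synthesize(guess)

best_cfg = min(evaluated, key=evaluated.get)
```

The speedup reported in the paper comes from the same trade: many cheap surrogate queries replace most of the expensive synthesis runs.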
arXiv Detail & Related papers (2024-03-15T21:14:44Z)
- Using the Abstract Computer Architecture Description Language to Model AI Hardware Accelerators [77.89070422157178]
Manufacturers of AI-integrated products face a critical challenge: selecting an accelerator that aligns with their product's performance requirements.
The Abstract Computer Architecture Description Language (ACADL) is a concise formalization of computer architecture block diagrams.
In this paper, we demonstrate how to use the ACADL to model AI hardware accelerators, use their ACADL description to map DNNs onto them, and explain the timing simulation semantics to gather performance results.
arXiv Detail & Related papers (2024-01-30T19:27:16Z)
- Neural Markov Prolog [57.13568543360899]
We propose the language Neural Markov Prolog (NMP) as a means to bridge first order logic and neural network design.
NMP allows for the easy generation and presentation of architectures for images, text, relational databases, or other target data types.
arXiv Detail & Related papers (2023-11-27T21:41:47Z)
- NAS-NeRF: Generative Neural Architecture Search for Neural Radiance Fields [75.28756910744447]
Neural radiance fields (NeRFs) enable high-quality novel view synthesis, but their high computational complexity limits deployability.
We introduce NAS-NeRF, a generative neural architecture search strategy that generates compact, scene-specialized NeRF architectures.
Our method incorporates constraints on target metrics and budgets to guide the search towards architectures tailored for each scene.
arXiv Detail & Related papers (2023-09-25T17:04:30Z)
- CktGNN: Circuit Graph Neural Network for Electronic Design Automation [67.29634073660239]
This paper presents a Circuit Graph Neural Network (CktGNN) that simultaneously automates the circuit topology generation and device sizing.
We introduce Open Circuit Benchmark (OCB), an open-sourced dataset that contains 10K distinct operational amplifiers.
Our work paves the way toward a learning-based open-sourced design automation for analog circuits.
arXiv Detail & Related papers (2023-08-31T02:20:25Z)
- End-to-end codesign of Hessian-aware quantized neural networks for FPGAs and ASICs [49.358119307844035]
We develop an end-to-end workflow for the training and implementation of co-designed neural networks (NNs).
This makes efficient NN implementations in hardware accessible to nonexperts, in a single open-sourced workflow.
We demonstrate the workflow in a particle physics application involving trigger decisions that must operate at the 40 MHz collision rate of the Large Hadron Collider (LHC).
We implement an optimized mixed-precision NN for high-momentum particle jets in simulated LHC proton-proton collisions.
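The mixed-precision idea behind this entry can be shown with a minimal post-training quantizer. The sketch below is generic symmetric uniform quantization, not the paper's Hessian-aware bit allocation; the example weights and bit widths are purely illustrative.

```python
def quantize(values, bits, rng):
    """Symmetric uniform quantizer: clip floats to [-rng, rng], snap them
    to a (2**(bits-1) - 1)-level fixed-point grid, and map back to floats."""
    levels = 2 ** (bits - 1) - 1
    scale = rng / levels
    out = []
    for v in values:
        q = round(max(-rng, min(rng, v)) / scale)  # integer code
        out.append(q * scale)                      # dequantized value
    return out

weights = [0.73, -0.41, 0.05, -0.88, 0.12]   # illustrative layer weights
w8 = quantize(weights, 8, 1.0)               # e.g. a sensitive layer at 8 bits
w4 = quantize(weights, 4, 1.0)               # e.g. a robust layer at 4 bits

err8 = max(abs(a - b) for a, b in zip(weights, w8))
err4 = max(abs(a - b) for a, b in zip(weights, w4))
```

A Hessian-aware scheme would assign the wider format to the layers whose loss is most sensitive to this rounding error, which is exactly the trade the workflow automates.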
arXiv Detail & Related papers (2023-04-13T18:00:01Z)
- FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using around 40% of the available hardware resources in total.
Compared to its full-precision software counterpart, it reduces the classification time by three orders of magnitude with a small 4.5% impact on accuracy.
arXiv Detail & Related papers (2022-01-18T13:59:22Z)
- Algorithm and Hardware Co-design for Reconfigurable CNN Accelerator [3.1431240233552007]
Recent advances in algorithm-hardware co-design for deep neural networks (DNNs) have demonstrated their potential in automatically designing neural architectures and hardware designs.
However, it is still a challenging optimization problem due to the expensive training cost and the time-consuming hardware implementation.
We propose a novel three-phase co-design framework, with the following new features.
Our discovered network and hardware configuration achieve 2%-6% higher accuracy, 2x-26x lower latency, and 8.5x higher energy efficiency.
arXiv Detail & Related papers (2021-11-24T20:37:50Z)
- DFSynthesizer: Dataflow-based Synthesis of Spiking Neural Networks to Neuromorphic Hardware [4.273223677453178]
Spiking Neural Networks (SNNs) are an emerging computation model that uses event-driven activation and bio-inspired learning algorithms.
DFSynthesizer is an end-to-end framework for synthesizing SNN-based machine learning programs to neuromorphic hardware.
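The event-driven activation mentioned above can be illustrated with a single leaky integrate-and-fire (LIF) neuron; this is a generic textbook model, not DFSynthesizer's actual neuron or dataflow representation, and all constants are illustrative.

```python
def lif_run(spike_times, t_end, tau=10.0, v_th=1.0, w=0.6, dt=1.0):
    """Simulate one LIF neuron driven by input spike events.

    The membrane potential leaks toward zero with time constant tau,
    jumps by synaptic weight w on each input spike, and emits an output
    spike (then resets) when it crosses the threshold v_th.
    Returns the list of output spike times.
    """
    v, out = 0.0, []
    events = set(spike_times)
    t = 0.0
    while t < t_end:
        v *= (1.0 - dt / tau)   # passive leak
        if t in events:         # event-driven synaptic input
            v += w
        if v >= v_th:           # threshold crossing: spike and reset
            out.append(t)
            v = 0.0
        t += dt
    return out
```

With the defaults, one input spike is not enough to fire the neuron, but two in quick succession are, which is the temporal integration that makes SNN computation event-driven rather than clocked layer-by-layer.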
arXiv Detail & Related papers (2021-08-04T12:49:37Z)
- CNN2Gate: Toward Designing a General Framework for Implementation of Convolutional Neural Networks on FPGA [0.3655021726150368]
This paper introduces an integrated framework that supports compilation of a CNN model for an FPGA target.
CNN2Gate exploits the OpenCL synthesis workflow for FPGAs offered by commercial vendors.
This paper reports results of automatic synthesis and design-space exploration of AlexNet and VGG-16 on various Intel FPGA platforms.
arXiv Detail & Related papers (2020-04-06T01:57:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and accepts no responsibility for any consequences of its use.