A Modular 1D-CNN Architecture for Real-time Digital Pre-distortion
- URL: http://arxiv.org/abs/2111.09637v1
- Date: Thu, 18 Nov 2021 11:30:23 GMT
- Title: A Modular 1D-CNN Architecture for Real-time Digital Pre-distortion
- Authors: Udara De Silva (1), Toshiaki Koike-Akino (1), Rui Ma (1), Ao Yamashita
(2), Hideyuki Nakamizo (2) ((1) Mitsubishi Electric Research Labs, Cambridge,
MA, USA, (2) Mitsubishi Electric Corporation, Information Tech. R&D Center,
Kanagawa, Japan)
- Abstract summary: This study reports a novel hardware-friendly modular architecture for implementing a one-dimensional convolutional neural network (1D-CNN) digital predistortion (DPD) technique to linearize an RF power amplifier (PA) in real time.
The experimental results with 100 MHz signals show that the proposed 1D-CNN obtains superior performance compared with other neural network architectures for real-time DPD applications.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study reports a novel hardware-friendly modular architecture for
implementing a one-dimensional convolutional neural network (1D-CNN) digital
predistortion (DPD) technique to linearize an RF power amplifier (PA) in real
time. The modular nature of our design enables the DPD system to adapt to
variable resource and timing constraints. Our work also presents a
co-simulation architecture to verify the DPD performance with an actual power
amplifier in the hardware-in-the-loop setup. The experimental results with
100 MHz signals show that the proposed 1D-CNN obtains superior performance
compared with other neural network architectures for real-time DPD
applications.
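The listed abstract does not give the network's exact dimensions, so the block below is only a minimal PyTorch sketch of the underlying idea: a stack of identical 1D-convolutional modules maps a window of recent I/Q samples (capturing PA memory effects) to one pre-distorted I/Q sample, and the number of stacked modules is the knob for trading accuracy against hardware resources and latency. The memory depth, channel width, kernel size, and module count used here are illustrative assumptions, not the paper's values.

```python
# Minimal sketch of a modular 1D-CNN predistorter (all sizes are assumptions).
import torch
import torch.nn as nn

class ConvModule(nn.Module):
    """One reusable 1D-CNN module; more modules can be stacked (or removed)
    to fit the available resource and timing budget."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.conv(x))

class CnnDpd(nn.Module):
    """Maps a window of past I/Q samples to one pre-distorted I/Q sample."""
    def __init__(self, memory_depth=16, width=8, num_modules=3):
        super().__init__()
        modules = [ConvModule(2, width)]                    # 2 input channels: I and Q
        modules += [ConvModule(width, width) for _ in range(num_modules - 1)]
        self.body = nn.Sequential(*modules)
        self.head = nn.Linear(width * memory_depth, 2)      # pre-distorted I and Q

    def forward(self, iq_window):
        # iq_window: (batch, 2, memory_depth) window of baseband samples
        h = self.body(iq_window)
        return self.head(h.flatten(start_dim=1))

# Example: pre-distort a batch of baseband sample windows
model = CnnDpd()
x = torch.randn(32, 2, 16)   # 32 windows of 16 past I/Q samples
y_pd = model(x)              # (32, 2) pre-distorted I/Q outputs
```

The repetition of one identical module is what mirrors the paper's stated goal of adapting the DPD system to variable resource and timing constraints.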
Related papers
- DPD-NeuralEngine: A 22-nm 6.6-TOPS/W/mm$^2$ Recurrent Neural Network Accelerator for Wideband Power Amplifier Digital Pre-Distortion [9.404504586344107]
DPD-NeuralEngine is an ultra-fast, tiny-area, and power-efficient DPD accelerator based on a Gated Recurrent Unit (GRU) neural network (NN).
Our 22 nm CMOS implementation operates at 2 GHz, capable of processing I/Q signals up to 250 MSps.
To our knowledge, this work represents the first AI-based DPD application-specific integrated circuit (ASIC) accelerator (a minimal GRU-DPD sketch appears after this list).
arXiv Detail & Related papers (2024-10-15T16:39:50Z) - A Realistic Simulation Framework for Analog/Digital Neuromorphic Architectures [73.65190161312555]
ARCANA is a spiking neural network simulator designed to account for the properties of mixed-signal neuromorphic circuits.
We show how the results obtained provide a reliable estimate of the behavior of the spiking neural network trained in software.
arXiv Detail & Related papers (2024-09-23T11:16:46Z) - Sustainable Diffusion-based Incentive Mechanism for Generative AI-driven Digital Twins in Industrial Cyber-Physical Systems [65.22300383287904]
Industrial Cyber-Physical Systems (ICPSs) are an integral component of modern manufacturing and industries.
By digitizing data throughout the product life cycle, Digital Twins (DTs) in ICPSs enable a shift from current industrial infrastructures to intelligent and adaptive infrastructures.
Mechanisms that leverage sensing Industrial Internet of Things (IIoT) devices to share data for the construction of DTs are susceptible to adverse selection problems.
arXiv Detail & Related papers (2024-08-02T10:47:10Z) - TCCT-Net: Two-Stream Network Architecture for Fast and Efficient Engagement Estimation via Behavioral Feature Signals [58.865901821451295]
We present a novel two-stream feature fusion "Tensor-Convolution and Convolution-Transformer Network" (TCCT-Net) architecture.
To better learn the meaningful patterns in the temporal-spatial domain, we design a "CT" stream that integrates a hybrid convolutional-transformer.
In parallel, to efficiently extract rich patterns from the temporal-frequency domain, we introduce a "TC" stream that uses Continuous Wavelet Transform (CWT) to represent information in a 2D tensor form.
arXiv Detail & Related papers (2024-04-15T06:01:48Z) - 1-bit Quantized On-chip Hybrid Diffraction Neural Network Enabled by Authentic All-optical Fully-connected Architecture [4.594367761345624]
This study introduces the Hybrid Diffraction Neural Network (HDNN), a novel architecture that incorporates matrix multiplication into DNNs.
Utilizing a singular phase modulation layer and an amplitude modulation layer, the trained neural network demonstrated remarkable accuracies of 96.39% and 89% in digit recognition tasks.
arXiv Detail & Related papers (2024-04-11T02:54:17Z) - Neuromorphic Split Computing with Wake-Up Radios: Architecture and Design via Digital Twinning [97.99077847606624]
This work proposes a novel architecture that integrates a wake-up radio mechanism within a split computing system consisting of remote, wirelessly connected, NPUs.
A key challenge in the design of a wake-up radio-based neuromorphic split computing system is the selection of thresholds for sensing, wake-up signal detection, and decision making.
arXiv Detail & Related papers (2024-04-02T10:19:04Z) - LightCAM: A Fast and Light Implementation of Context-Aware Masking based
D-TDNN for Speaker Verification [3.3800597813242628]
Traditional Time Delay Neural Networks (TDNN) have achieved state-of-the-art performance at the cost of high computational complexity and slower inference speed.
We propose a fast and lightweight model, LightCAM, which further adopts a depthwise separable convolution module (DSM) and uses multi-scale feature aggregation (MFA) for feature fusion.
arXiv Detail & Related papers (2024-02-08T21:47:16Z) - On Neural Architectures for Deep Learning-based Source Separation of
Co-Channel OFDM Signals [104.11663769306566]
We study the single-channel source separation problem involving orthogonal frequency-division multiplexing (OFDM) signals.
We propose critical domain-informed modifications to the network parameterization, based on insights from OFDM structures.
arXiv Detail & Related papers (2023-03-11T16:29:13Z) - All-optical graph representation learning using integrated diffractive
photonic computing units [51.15389025760809]
Photonic neural networks perform brain-inspired computations using photons instead of electrons.
We propose an all-optical graph representation learning architecture, termed diffractive graph neural network (DGNN).
We demonstrate the use of DGNN extracted features for node and graph-level classification tasks with benchmark databases and achieve superior performance.
arXiv Detail & Related papers (2022-04-23T02:29:48Z) - Large-scale neuromorphic optoelectronic computing with a reconfigurable
diffractive processing unit [38.898230519968116]
We propose an optoelectronic reconfigurable computing paradigm by constructing a diffractive processing unit.
It can efficiently support different neural networks and achieve a high model complexity with millions of neurons.
Our prototype system built with off-the-shelf optoelectronic components surpasses the performance of state-of-the-art graphics processing units.
arXiv Detail & Related papers (2020-08-26T16:34:58Z)
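As referenced in the DPD-NeuralEngine entry above, the following is a minimal PyTorch sketch of a GRU-based predistorter operating on a stream of I/Q samples. The hidden size and block length are hypothetical; the accelerator's actual configuration is not given in the summary.

```python
# Minimal sketch of a GRU-based DPD model (hypothetical sizes).
import torch
import torch.nn as nn

class GruDpd(nn.Module):
    """A GRU tracks PA memory effects over a stream of I/Q samples and a
    linear head emits the pre-distorted I/Q output for each time step."""
    def __init__(self, hidden_size=16):
        super().__init__()
        self.gru = nn.GRU(input_size=2, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)

    def forward(self, iq_stream, state=None):
        # iq_stream: (batch, time, 2) I/Q samples; state carries memory between blocks
        h, state = self.gru(iq_stream, state)
        return self.head(h), state

# Streaming use: process consecutive I/Q blocks while carrying the recurrent state
model = GruDpd()
state = None
for _ in range(4):                     # four consecutive sample blocks
    block = torch.randn(1, 256, 2)     # (batch, samples, I/Q)
    y_pd, state = model(block, state)  # (1, 256, 2) pre-distorted output
```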