TCN-DPD: Parameter-Efficient Temporal Convolutional Networks for Wideband Digital Predistortion
- URL: http://arxiv.org/abs/2506.12165v1
- Date: Fri, 13 Jun 2025 18:30:32 GMT
- Title: TCN-DPD: Parameter-Efficient Temporal Convolutional Networks for Wideband Digital Predistortion
- Authors: Huanqiang Duan, Manno Versluis, Qinyu Chen, Leo C. N. de Vreede, Chang Gao
- Abstract summary: TCN-DPD is a parameter-efficient architecture based on temporal convolutional networks. It achieves simulated ACPRs of -51.58/-49.26 dBc (L/R), EVM of -47.52 dB, and NMSE of -44.61 dB with 500 parameters, and it maintains better linearization than prior models down to 200 parameters, making it promising for efficient wideband PA linearization.
- Score: 3.6966254731864727
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Digital predistortion (DPD) is essential for mitigating nonlinearity in RF power amplifiers, particularly for wideband applications. This paper presents TCN-DPD, a parameter-efficient architecture based on temporal convolutional networks, integrating noncausal dilated convolutions with optimized activation functions. Evaluated on the OpenDPD framework with the DPA_200MHz dataset, TCN-DPD achieves simulated ACPRs of -51.58/-49.26 dBc (L/R), EVM of -47.52 dB, and NMSE of -44.61 dB with 500 parameters, and it maintains better linearization than prior models down to 200 parameters, making it promising for efficient wideband PA linearization.
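To make the architecture concrete, below is a minimal, hypothetical PyTorch sketch of the core idea named in the abstract: a temporal convolutional network over baseband I/Q samples whose dilated 1-D convolutions use symmetric padding so each output sample sees both past and future inputs (noncausal). The hidden width, kernel size, dilation pattern, GELU activation, and residual connections are illustrative assumptions and do not reproduce the paper's exact 500-parameter design.

```python
# Illustrative sketch only (not the authors' released code): a tiny TCN-style
# predistorter with noncausal dilated 1-D convolutions over I/Q samples.
import torch
import torch.nn as nn


class NoncausalDilatedBlock(nn.Module):
    """One dilated Conv1d block with symmetric (noncausal) padding."""

    def __init__(self, channels: int, kernel_size: int, dilation: int):
        super().__init__()
        # Symmetric padding keeps the output length equal to the input length
        # while letting the filter see both past and future samples.
        pad = (kernel_size - 1) // 2 * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              dilation=dilation, padding=pad)
        self.act = nn.GELU()  # activation choice is an assumption

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.conv(x)) + x  # residual connection


class TinyTCNDPD(nn.Module):
    """Toy TCN predistorter: I/Q in -> predistorted I/Q out."""

    def __init__(self, hidden: int = 8, kernel_size: int = 3,
                 dilations=(1, 2, 4)):
        super().__init__()
        self.inp = nn.Conv1d(2, hidden, kernel_size=1)   # 2 channels: I and Q
        self.blocks = nn.Sequential(*[
            NoncausalDilatedBlock(hidden, kernel_size, d) for d in dilations
        ])
        self.out = nn.Conv1d(hidden, 2, kernel_size=1)

    def forward(self, iq: torch.Tensor) -> torch.Tensor:
        # iq: (batch, 2, time) baseband I/Q waveform
        return self.out(self.blocks(self.inp(iq)))


if __name__ == "__main__":
    model = TinyTCNDPD()
    x = torch.randn(1, 2, 1024)  # dummy 1024-sample I/Q segment
    y = model(x)
    print(y.shape, sum(p.numel() for p in model.parameters()), "parameters")
```

Running the script prints the output shape and the total parameter count, which is the quantity the paper trades off against ACPR, EVM, and NMSE.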
Related papers
- OpenDPDv2: A Unified Learning and Optimization Framework for Neural Network Digital Predistortion [10.484441707788127]
This paper presents OpenDPDv2, a unified framework for PA modeling, DPD learning, and model optimization. Its optimization techniques feature a novel DPD algorithm, TRes-DeltaGRU, alongside two energy-efficient methods.
arXiv Detail & Related papers (2025-07-09T13:54:47Z)
- DeltaDPD: Exploiting Dynamic Temporal Sparsity in Recurrent Neural Networks for Energy-Efficient Wideband Digital Predistortion [11.598016224384875]
Digital Predistortion (DPD) is a popular technique to enhance signal quality in wideband RF power amplifiers (PAs). This paper introduces DeltaDPD, which exploits the dynamic temporal sparsity of input signals and neuronal hidden states in RNNs for energy-efficient DPD; a minimal illustrative sketch of this delta-update idea appears at the end of this list.
arXiv Detail & Related papers (2025-04-29T10:07:52Z)
- VP-NTK: Exploring the Benefits of Visual Prompting in Differentially Private Data Synthesis [48.75967507528161]
Differentially private (DP) synthetic data has become the de facto standard for releasing sensitive data. One of the emerging techniques in parameter-efficient fine-tuning (PEFT) is visual prompting (VP). We show that VP in conjunction with DP-NTK, a DP generator that exploits the power of the neural tangent kernel (NTK) in training DP generative models, achieves a significant performance boost.
arXiv Detail & Related papers (2025-03-20T14:42:11Z)
- BiDM: Pushing the Limit of Quantization for Diffusion Models [60.018246440536814]
This paper proposes a novel method, namely BiDM, for fully binarizing weights and activations of DMs, pushing quantization to the 1-bit limit. As the first work to fully binarize DMs, the W1A1 BiDM on the LDM-4 model for LSUN-Bedrooms 256×256 achieves a remarkable FID of 22.74.
arXiv Detail & Related papers (2024-12-08T12:45:21Z)
- DPD-NeuralEngine: A 22-nm 6.6-TOPS/W/mm$^2$ Recurrent Neural Network Accelerator for Wideband Power Amplifier Digital Pre-Distortion [9.404504586344107]
DPD-NeuralEngine is an ultra-fast, tiny-area, and power-efficient DPD accelerator based on a Gated Recurrent Unit (GRU) neural network (NN). Our 22 nm CMOS implementation operates at 2 GHz, capable of processing I/Q signals up to 250 MSps. To our knowledge, this work represents the first AI-based DPD application-specific integrated circuit (ASIC) accelerator.
arXiv Detail & Related papers (2024-10-15T16:39:50Z)
- MP-DPD: Low-Complexity Mixed-Precision Neural Networks for Energy-Efficient Digital Predistortion of Wideband Power Amplifiers [8.58564278168083]
Digital Pre-Distortion (DPD) enhances signal quality in wideband RF power amplifiers (PAs).
This paper introduces open-source mixed-precision (MP) neural networks that employ quantized low-precision fixed-point parameters for energy-efficient DPD.
arXiv Detail & Related papers (2024-04-18T21:04:39Z)
- OpenDPD: An Open-Source End-to-End Learning & Benchmarking Framework for Wideband Power Amplifier Modeling and Digital Pre-Distortion [2.6771785584103935]
Deep neural networks (DNN) for digital pre-distortion (DPD) have become prominent.
This paper presents an open-source framework, OpenDPD, crafted in PyTorch.
We introduce a Dense Gated Recurrent Unit (DGRU)-DPD, trained via a novel end-to-end learning architecture.
arXiv Detail & Related papers (2024-01-16T12:36:17Z)
- On Neural Architectures for Deep Learning-based Source Separation of Co-Channel OFDM Signals [104.11663769306566]
We study the single-channel source separation problem involving orthogonal frequency-division multiplexing (OFDM) signals.
We propose critical domain-informed modifications to the network parameterization, based on insights from OFDM structures.
arXiv Detail & Related papers (2023-03-11T16:29:13Z)
- Continual Spatio-Temporal Graph Convolutional Networks [87.86552250152872]
We reformulate the Spatio-Temporal Graph Convolutional Neural Network as a Continual Inference Network.
We observe up to a 109x reduction in time complexity, on-hardware accelerations of 26x, and reductions in maximum allocated memory of 52% during online inference.
arXiv Detail & Related papers (2022-03-21T14:23:18Z)
- Adaptive Subcarrier, Parameter, and Power Allocation for Partitioned Edge Learning Over Broadband Channels [69.18343801164741]
Partitioned edge learning (PARTEL) implements parameter-server training, a well-known distributed learning method, in a wireless network.
We consider the case of deep neural network (DNN) models which can be trained using PARTEL by introducing some auxiliary variables.
arXiv Detail & Related papers (2020-10-08T15:27:50Z)
- Optimization-driven Deep Reinforcement Learning for Robust Beamforming in IRS-assisted Wireless Communications [54.610318402371185]
Intelligent reflecting surface (IRS) is a promising technology to assist downlink information transmissions from a multi-antenna access point (AP) to a receiver.
We minimize the AP's transmit power by a joint optimization of the AP's active beamforming and the IRS's passive beamforming.
We propose a deep reinforcement learning (DRL) approach that can adapt the beamforming strategies from past experiences.
arXiv Detail & Related papers (2020-05-25T01:42:55Z)
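As referenced in the DeltaDPD entry above, here is a minimal, hypothetical PyTorch sketch of the generic delta-update idea behind dynamic temporal sparsity: a matrix-vector product is refreshed only for the input entries whose change since the last committed value exceeds a threshold, so slowly varying signals trigger little computation. The function name, threshold, and tensor sizes are illustrative assumptions, not the paper's design.

```python
# Hypothetical sketch of delta-thresholded updates (not the DeltaDPD code):
# recompute y = W @ x only for inputs that changed more than a threshold.
import torch


def delta_matvec(W, x_new, x_ref, y_ref, threshold=1e-2):
    """Incrementally update y = W @ x, skipping near-unchanged inputs.

    W: (out, in) weights; x_new: current input; x_ref/y_ref: input and output
    committed at the last update. Returns (x_ref, y_new, active_fraction).
    """
    delta = x_new - x_ref
    active = delta.abs() > threshold           # inputs that changed enough
    # Only the active columns of W contribute to the update (sparse work).
    y_new = y_ref + W[:, active] @ delta[active]
    x_ref = x_ref.clone()
    x_ref[active] = x_new[active]              # reference tracks committed values
    return x_ref, y_new, active.float().mean().item()


if __name__ == "__main__":
    torch.manual_seed(0)
    W = torch.randn(16, 32)
    x_prev = torch.randn(32)
    y_prev = W @ x_prev
    # A slowly varying next sample: most entries barely change.
    x_next = x_prev + 0.001 * torch.randn(32)
    x_next[:4] += 0.5                          # a few entries change a lot
    x_ref, y_approx, frac = delta_matvec(W, x_next, x_prev, y_prev)
    print("fraction of active inputs:", frac)
    print("max error vs dense recompute:", (y_approx - W @ x_next).abs().max().item())
```

With the toy numbers above, only a few entries are active and the reconstructed output stays close to a dense recompute, which is the behavior such delta schemes exploit for energy savings.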