DPQ-HD: Post-Training Compression for Ultra-Low Power Hyperdimensional Computing
- URL: http://arxiv.org/abs/2505.05413v1
- Date: Thu, 08 May 2025 16:54:48 GMT
- Title: DPQ-HD: Post-Training Compression for Ultra-Low Power Hyperdimensional Computing
- Authors: Nilesh Prasad Pandey, Shriniwas Kulkarni, David Wang, Onat Gungor, Flavio Ponzina, Tajana Rosing
- Abstract summary: We propose a novel Post-Training Compression algorithm, Decomposition-Pruning-Quantization (DPQ-HD). DPQ-HD reduces computational and memory overhead by uniquely combining the above three compression techniques. We demonstrate that DPQ-HD achieves up to a 20-100x reduction in memory for image and graph classification tasks with only a 1-2% drop in accuracy.
- Score: 6.378578005171813
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hyperdimensional Computing (HDC) is emerging as a promising approach for edge AI, offering a balance between accuracy and efficiency. However, current HDC-based applications often rely on high-precision models and/or encoding matrices to achieve competitive performance, which imposes significant computational and memory demands, especially on ultra-low power devices. While recent efforts use techniques like precision reduction and pruning to improve efficiency, most require retraining to maintain performance, making them expensive and impractical. To address this issue, we propose a novel Post-Training Compression algorithm, Decomposition-Pruning-Quantization (DPQ-HD), which compresses the end-to-end HDC system and achieves near-floating-point performance without the need for retraining. DPQ-HD reduces computational and memory overhead by uniquely combining the above three compression techniques and efficiently adapts to hardware constraints. Additionally, we introduce an energy-efficient inference approach that progressively evaluates similarity scores such as cosine similarity and performs early exit to reduce computation, accelerating inference while maintaining accuracy. We demonstrate that DPQ-HD achieves up to a 20-100x reduction in memory for image and graph classification tasks with only a 1-2% drop in accuracy compared to uncompressed workloads. Lastly, we show that DPQ-HD outperforms existing post-training compression methods and performs on par with or better than retraining-based state-of-the-art techniques, while requiring significantly less overall optimization time (up to 100x) and delivering faster inference (up to 56x) on a microcontroller.
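A minimal sketch of how such a pipeline could look, assuming a low-rank decomposition of the HDC encoding matrix followed by magnitude pruning and uniform quantization, plus chunk-wise cosine-similarity evaluation with early exit. This is not the authors' implementation; all hyperparameters (rank, prune_ratio, bits, chunk, margin) are illustrative assumptions rather than values from the paper.

```python
# Hedged sketch, not the authors' code: DPQ-HD-style post-training compression of an
# HDC encoding matrix (decompose -> prune -> quantize) and progressive early-exit
# inference based on partial cosine similarities.
import numpy as np

def compress_encoder(W, rank=64, prune_ratio=0.5, bits=4):
    """Decompose, prune, and quantize an encoding matrix W (D x F) without retraining."""
    # Decomposition: truncated SVD, W ~= U @ V
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U, V = U[:, :rank] * s[:rank], Vt[:rank, :].copy()
    # Pruning: zero out the smallest-magnitude entries of each factor
    for M in (U, V):
        threshold = np.quantile(np.abs(M), prune_ratio)
        M[np.abs(M) < threshold] = 0.0
    # Quantization: uniform symmetric quantization to `bits` bits
    def quantize(M):
        scale = np.max(np.abs(M)) / (2 ** (bits - 1) - 1)
        scale = scale if scale > 0 else 1.0
        return np.round(M / scale).astype(np.int8), scale
    return quantize(U), quantize(V)

def progressive_predict(query_hv, class_hvs, chunk=1024, margin=0.05):
    """Accumulate cosine-similarity partial sums over chunks of dimensions and exit
    early once the leading class is ahead of the runner-up by `margin`."""
    D = query_hv.shape[0]
    dots = np.zeros(class_hvs.shape[0])
    for start in range(0, D, chunk):
        end = min(start + chunk, D)
        dots += class_hvs[:, start:end] @ query_hv[start:end]
        # Approximate cosine similarity using only the dimensions seen so far
        scores = dots / (np.linalg.norm(class_hvs[:, :end], axis=1)
                         * np.linalg.norm(query_hv[:end]) + 1e-12)
        best, runner_up = np.sort(scores)[-1], np.sort(scores)[-2]
        if best - runner_up > margin:   # confident enough: stop early
            break
    return int(np.argmax(scores))
```

In this sketch, the two small integer factors would replace the full-precision encoding matrix at inference time, and progressive_predict compares the encoded query against the class hypervectors one chunk of dimensions at a time, stopping as soon as the prediction is stable.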
Related papers
- ScalableHD: Scalable and High-Throughput Hyperdimensional Computing Inference on Multi-Core CPUs [0.0]
Hyperdimensional Computing (HDC) represents and manipulates information using high-dimensional vectors, called hypervectors (HVs); a minimal sketch of this workflow appears after this list.
Traditional HDC methods rely on single-pass, non-parametric training and often suffer from low accuracy.
Inference, however, remains lightweight and well-suited for real-time execution.
arXiv Detail & Related papers (2025-06-10T22:46:12Z) - ReCalKV: Low-Rank KV Cache Compression via Head Reordering and Offline Calibration [81.81027217759433]
Large language models (LLMs) are often constrained by the excessive memory required to store the Key-Value (KV) cache.
Recent methods have explored reducing the hidden dimensions of the KV cache, but many introduce additional computation through projection layers.
We propose ReCalKV, a post-training KV cache compression method that reduces the hidden dimensions of the KV cache.
arXiv Detail & Related papers (2025-05-30T08:49:27Z) - Image Coding for Machines via Feature-Preserving Rate-Distortion Optimization [27.97760974010369]
We show an approach to reduce the effect of compression on a task loss using the distance between features as a distortion metric.
We simplify the RDO formulation to make the distortion term computable using block-based encoders.
We show up to 10% bit-rate savings for the same computer vision accuracy compared to RDO based on SSE.
arXiv Detail & Related papers (2025-04-03T02:11:26Z) - PCGS: Progressive Compression of 3D Gaussian Splatting [55.149325473447384]
We propose PCGS (Progressive Compression of 3D Gaussian Splatting), which adaptively controls both the quantity and quality of Gaussians.
Overall, PCGS achieves progressivity while maintaining compression performance comparable to SoTA non-progressive methods.
arXiv Detail & Related papers (2025-03-11T15:01:11Z) - Efficient Distributed Training through Gradient Compression with Sparsification and Quantization Techniques [3.6481248057068174]
Using top-k and DGC at 50x compression yields performance improvements, reducing perplexity by up to 0.06 compared to the baseline.
Communication times are reduced across all compression methods, with top-k and DGC decreasing communication to negligible levels at high compression ratios.
arXiv Detail & Related papers (2024-12-07T22:55:55Z) - Accelerating Error Correction Code Transformers [56.75773430667148]
We introduce a novel acceleration method for transformer-based decoders.
We achieve a 90% compression ratio and reduce arithmetic operation energy consumption by at least 224 times on modern hardware.
arXiv Detail & Related papers (2024-10-08T11:07:55Z) - LoCo: Low-Bit Communication Adaptor for Large-scale Model Training [63.040522637816906]
Low-bit communication often degrades training quality due to compression information loss.
We propose the Low-bit Communication Adaptor (LoCo), which compensates gradients on local GPU nodes before compression, without compromising quality.
Experimental results show that across large-scale model training frameworks like Megatron-LM and PyTorch's FSDP, LoCo significantly improves communication efficiency.
arXiv Detail & Related papers (2024-07-05T13:01:36Z) - ALPS: Improved Optimization for Highly Sparse One-Shot Pruning for Large Language Models [14.310720048047136]
ALPS is an optimization-based framework that tackles the pruning problem using the operator splitting technique and a preconditioned conjugate gradient-based post-processing step.
Our approach incorporates novel techniques to accelerate and theoretically guarantee convergence while leveraging vectorization and GPU parallelism for efficiency.
On the OPT-30B model with 70% sparsity, ALPS achieves a 13% reduction in test perplexity on the WikiText dataset and a 19% improvement in zero-shot benchmark performance compared to existing methods.
arXiv Detail & Related papers (2024-06-12T02:57:41Z) - MicroHD: An Accuracy-Driven Optimization of Hyperdimensional Computing Algorithms for TinyML systems [8.54897708375791]
Hyperdimensional computing (HDC) is emerging as a promising AI approach that can effectively target TinyML applications.
Previous works on HDC showed that the standard 10k dimensions of the hyperdimensional space can be reduced to much lower values.
arXiv Detail & Related papers (2024-03-24T02:45:34Z) - Retraining-free Model Quantization via One-Shot Weight-Coupling Learning [41.299675080384]
Mixed-precision quantization (MPQ) is advocated to compress the model effectively by allocating heterogeneous bit-widths to layers.
MPQ is typically organized into a searching-retraining two-stage process.
In this paper, we devise a one-shot training-searching paradigm for mixed-precision model compression.
arXiv Detail & Related papers (2024-01-03T05:26:57Z) - An Information Theory-inspired Strategy for Automatic Network Pruning [88.51235160841377]
Deep convolutional neural networks typically need to be compressed for deployment on devices with resource constraints.
Most existing network pruning methods require laborious human effort and prohibitive computational resources.
We propose an information theory-inspired strategy for automatic model compression.
arXiv Detail & Related papers (2021-08-19T07:03:22Z) - An Efficient Statistical-based Gradient Compression Technique for
Distributed Training Systems [77.88178159830905]
Sparsity-Inducing Distribution-based Compression (SIDCo) is a threshold-based sparsification scheme that enjoys similar threshold estimation quality to deep gradient compression (DGC).
Our evaluation shows SIDCo speeds up training by up to 41.7%, 7.6%, and 1.9% compared to the no-compression baseline, Topk, and DGC compressors, respectively.
arXiv Detail & Related papers (2021-01-26T13:06:00Z)
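As referenced above, here is a minimal sketch of the basic HDC workflow that ScalableHD, MicroHD, and DPQ-HD itself build on: random-projection encoding into hypervectors, single-pass bundling of class prototypes, and cosine-similarity classification. The dimensions and the encoding scheme are illustrative assumptions, not taken from any specific paper.

```python
# Hedged sketch of a basic HDC pipeline: encode, bundle per class, classify by cosine
# similarity. Sizes and the random-projection encoder are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
D, F, C = 10_000, 64, 10               # hypervector dim, feature dim, number of classes
encoder = rng.standard_normal((D, F))  # random projection encoding matrix

def encode(x):
    """Map a feature vector x (length F) to a bipolar hypervector (length D)."""
    return np.sign(encoder @ x)

def train(X, y):
    """Single-pass, non-parametric training: bundle (sum) hypervectors per class."""
    prototypes = np.zeros((C, D))
    for x, label in zip(X, y):
        prototypes[label] += encode(x)
    return prototypes

def predict(x, prototypes):
    """Classify by cosine similarity to the class prototypes."""
    hv = encode(x)
    sims = (prototypes @ hv) / (np.linalg.norm(prototypes, axis=1) * np.linalg.norm(hv) + 1e-12)
    return int(np.argmax(sims))
```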
This list is automatically generated from the titles and abstracts of the papers on this site.