Gotta Hash 'Em All! Speeding Up Hash Functions for Zero-Knowledge Proof Applications
- URL: http://arxiv.org/abs/2501.18780v1
- Date: Thu, 30 Jan 2025 22:09:05 GMT
- Title: Gotta Hash 'Em All! Speeding Up Hash Functions for Zero-Knowledge Proof Applications
- Authors: Nojan Sheybani, Tengkai Gong, Anees Ahmed, Nges Brian Njungle, Michel Kinsy, Farinaz Koushanfar
- Abstract summary: We present HashEmAll, a novel collection of FPGA-based realizations of three ZK-friendly hash functions.
HashEmAll outperforms CPU implementations by up to $23\times$ with lower power consumption and compatibility with accessible FPGAs.
- Score: 11.345012996735543
- Abstract: Collision-resistant cryptographic hash functions (CRHs) are crucial for security in modern systems but are optimized for standard CPUs. While heavily used in zero-knowledge proof (ZKP) applications, traditional CRHs are inefficient in the ZK domain. ZK-friendly hashes have been developed but struggle on consumer hardware due to a lack of specialized ZK-specific hardware. To address this, we present HashEmAll, a novel collection of FPGA-based realizations of three ZK-friendly hash functions: Griffin, Rescue-Prime, and Reinforced Concrete. Each hash offers different optimization focuses, allowing users to choose based on the constraints of their applications. Through our ZK-optimized arithmetic functions on reconfigurable hardware, HashEmAll outperforms CPU implementations by up to $23\times$ with lower power consumption and compatibility with accessible FPGAs.
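The hashes HashEmAll targets (Griffin, Rescue-Prime, Reinforced Concrete) are "ZK-friendly" because they operate over a large prime field and rely on low-degree power maps, which translate into few multiplication constraints in a ZK circuit. The toy sponge-style sketch below illustrates that general structure only; it is not any of the three actual constructions, and its round function and constants are placeholders.

```python
# Illustrative sketch of a ZK-friendly hash structure (NOT Griffin,
# Rescue-Prime, or Reinforced Concrete): arithmetic over a prime field
# with a low-degree power map as the nonlinear layer.

P = 2**64 - 2**32 + 1  # the "Goldilocks" prime, a common choice in ZK systems

def toy_round(state, round_constant):
    """One illustrative round: add a constant, then apply x -> x^7 mod P."""
    x = (state + round_constant) % P
    return pow(x, 7, P)  # x^7 is a permutation since gcd(7, P - 1) == 1

def toy_hash(inputs, rounds=8):
    """Absorb field elements one at a time, mixing with toy rounds."""
    state = 0
    for block in inputs:
        state = (state + block) % P
        for r in range(rounds):
            state = toy_round(state, r + 1)  # placeholder round constants
    return state
```

Inside a ZK circuit, each such round costs only a handful of field multiplications, whereas a bit-oriented hash like SHA-256 must simulate thousands of Boolean operations in field arithmetic.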
Related papers
- HashAttention: Semantic Sparsity for Faster Inference [91.54218318798603]
HashAttention is a principled approach that casts pivotal token identification as a recommendation problem.
It efficiently identifies pivotal tokens for a given query in this Hamming space using bitwise operations.
It can reduce the number of tokens used to $1/32$ of the original for the Llama-3.1-8B model with LongBench.
arXiv Detail & Related papers (2024-12-19T02:34:15Z) - if-ZKP: Intel FPGA-Based Acceleration of Zero Knowledge Proofs [3.0009885036586725]
We present a novel scalable architecture that is suitable for accelerating the zk-SNARK prover compute on FPGAs.
We focus on the multi-scalar multiplication (MSM) that accounts for the majority of time spent in zk-SNARK systems.
Our implementation runs 110x-150x faster than the reference software library.
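Multi-scalar multiplication, the bottleneck if-ZKP accelerates, computes $\sum_i a_i P_i$ over elliptic-curve points; hardware and software provers typically use a windowed bucket (Pippenger-style) method. In the toy sketch below, plain integers modulo a prime stand in for curve points so the bucketing idea is visible without curve arithmetic; the modulus and window size are illustrative choices.

```python
# Toy multi-scalar multiplication (MSM). Integers mod Q stand in for
# an additive group of elliptic-curve points.

Q = (1 << 61) - 1  # toy group order (a Mersenne prime), not a real curve

def msm_naive(scalars, points):
    """Direct sum(a_i * P_i): simple but repeats work across terms."""
    return sum(a * p for a, p in zip(scalars, points)) % Q

def msm_buckets(scalars, points, window=4):
    """Windowed bucket method: process scalars `window` bits at a time."""
    acc = 0
    max_bits = max(scalars).bit_length()
    for shift in reversed(range(0, max_bits, window)):
        for _ in range(window):            # shift accumulator up one window
            acc = (acc * 2) % Q
        buckets = [0] * (1 << window)
        for a, p in zip(scalars, points):  # group points by scalar digit
            digit = (a >> shift) & ((1 << window) - 1)
            buckets[digit] = (buckets[digit] + p) % Q
        # Running-sum trick: computes sum(d * buckets[d]) with additions only.
        running, total = 0, 0
        for d in range((1 << window) - 1, 0, -1):
            running = (running + buckets[d]) % Q
            total = (total + running) % Q
        acc = (acc + total) % Q
    return acc
```

The bucket method replaces per-term scalar multiplications with shared additions, which is what makes MSM amenable to the parallel FPGA pipelines the paper describes.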
arXiv Detail & Related papers (2024-12-17T02:35:32Z) - AMAZE: Accelerated MiMC Hardware Architecture for Zero-Knowledge Applications on the Edge [10.803274987172035]
Collision-resistant hash (CRH) functions have long been an integral part of providing security and privacy in modern systems.
Certain constructions of zero-knowledge proof (ZKP) protocols aim to utilize CRH functions to perform cryptographic hashing.
Standard CRH functions, such as SHA2, are inefficient when employed in the ZKP domain.
The most mature ZK-friendly hash, MiMC, provides a block cipher and hash function with a simple algebraic structure.
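MiMC's "simple algebraic structure" is a repeated cubing map over a prime field: each round computes $x \leftarrow (x + k + c_i)^3 \bmod p$, with a final key addition. The sketch below follows that structure, but the prime and round constants are illustrative placeholders, not a published MiMC instance.

```python
# Hedged sketch of a MiMC-style permutation. Structure matches MiMC
# (round: x -> (x + k + c_i)^3 over F_p, final key addition), but the
# prime and round constants here are placeholders, not the standard ones.

P = 998244353  # prime with gcd(3, P - 1) == 1, so x^3 is a permutation of F_P

def mimc_permutation(x, key, rounds=16):
    for i in range(rounds):
        c_i = i * i + 1                       # placeholder round constant
        x = pow((x + key + c_i) % P, 3, P)    # the cubing round
    return (x + key) % P                      # final key addition
```

Because each round is one addition and one cubing, the whole permutation unrolls into a short chain of field multiplications, which is exactly what makes it cheap both in ZK circuits and in the AMAZE hardware datapath.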
arXiv Detail & Related papers (2024-11-10T03:55:08Z) - Benchmarking ZK-Friendly Hash Functions and SNARK Proving Systems for EVM-compatible Blockchains [7.520993886306112]
We benchmarked three SNARK proving systems and five ZK-friendly hash functions, including our self-developed circuit templates for Poseidon2, Neptune, and GMiMC.
Our work provides a benchmark for ZK-friendly hash functions and ZK tools, while also exploring cost efficiency and compliance in ZKP-based privacy-preserving transaction protocols.
arXiv Detail & Related papers (2024-09-03T15:19:47Z) - Performance Evaluation of Hashing Algorithms on Commodity Hardware [0.0]
This report presents a performance evaluation of three popular hashing algorithms: Blake3, SHA-256, and SHA-512.
These hashing algorithms are widely used in various applications, such as digital signatures, message authentication, and password storage.
The results of the evaluation show that Blake3 generally outperforms both SHA-256 and SHA-512 in terms of throughput and latency.
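A throughput comparison of this kind is easy to reproduce with Python's standard hashlib. Note the caveat: BLAKE3 is not in the standard library, so BLAKE2b stands in below; a faithful comparison would need the third-party `blake3` package.

```python
# Quick throughput microbenchmark in the spirit of the report above,
# using only the standard library (BLAKE2b substitutes for Blake3).

import hashlib
import time

def throughput(hash_name, data, repeats=50):
    """Return hashed megabytes per second for one hashlib algorithm."""
    start = time.perf_counter()
    for _ in range(repeats):
        hashlib.new(hash_name, data).digest()
    elapsed = time.perf_counter() - start
    return (len(data) * repeats) / elapsed / 1e6

data = b"\x00" * (1 << 20)  # 1 MiB test buffer
for name in ("sha256", "sha512", "blake2b"):
    print(f"{name}: {throughput(name, data):.0f} MB/s")
```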
arXiv Detail & Related papers (2024-07-11T08:31:02Z) - Extreme Compression of Large Language Models via Additive Quantization [59.3122859349777]
Our algorithm, called AQLM, generalizes the classic Additive Quantization (AQ) approach for information retrieval.
We provide fast GPU and CPU implementations of AQLM for token generation, which enable us to match or outperform optimized FP16 implementations for speed.
arXiv Detail & Related papers (2024-01-11T18:54:44Z) - A Lower Bound of Hash Codes' Performance [122.88252443695492]
In this paper, we prove that inter-class distinctiveness and intra-class compactness among hash codes determine the lower bound of hash codes' performance.
We then propose a surrogate model to fully exploit the above objective by estimating the posterior of hash codes and controlling it, which results in a low-bias optimization.
By testing on a series of hash-models, we obtain performance improvements among all of them, with an up to $26.5\%$ increase in mean Average Precision and an up to $20.5\%$ increase in accuracy.
arXiv Detail & Related papers (2022-10-12T03:30:56Z) - ASH: A Modern Framework for Parallel Spatial Hashing in 3D Perception [91.24236600199542]
ASH is a modern and high-performance framework for parallel spatial hashing on GPU.
ASH achieves higher performance, supports richer functionality, and requires fewer lines of code.
ASH and its example applications are open sourced in Open3D.
arXiv Detail & Related papers (2021-10-01T16:25:40Z) - Quantum collision finding for homomorphic hash functions [0.0]
We present concrete attack examples against provable hash functions, including a preimage attack on $\oplus$-linear hash functions.
Hash functions which are additive or multiplicative are vulnerable to a quantum attack using the hidden subgroup problem algorithm for quantum computers.
arXiv Detail & Related papers (2021-07-30T23:01:02Z) - SMYRF: Efficient Attention using Asymmetric Clustering [103.47647577048782]
We propose a novel type of balanced clustering algorithm to approximate attention.
SMYRF can be used as a drop-in replacement for dense attention layers without any retraining.
We show that SMYRF can be used interchangeably with dense attention before and after training.
arXiv Detail & Related papers (2020-10-11T18:49:17Z) - Hashing-based Non-Maximum Suppression for Crowded Object Detection [63.761451382081844]
We propose an algorithm, named hashing-based non-maximum suppression (HNMS) to efficiently suppress the non-maximum boxes for object detection.
For two-stage detectors, we replace NMS in region proposal network with HNMS, and observe significant speed-up with comparable accuracy.
Experiments are conducted on CARPK, SKU-110K, CrowdHuman datasets to demonstrate the efficiency and effectiveness of HNMS.
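The core idea of hashing-based suppression can be sketched as follows: map each box to a discrete cell key by quantizing its coordinates, so heavily overlapping boxes collide, then keep only the highest-scoring box per cell. This is a simplified illustration of the concept, not the exact HNMS algorithm from the paper.

```python
# Simplified hashing-based suppression sketch: quantize box coordinates
# into a hash key and keep one box (the best-scoring) per key.

def box_key(box, cell=32):
    """Quantize (x1, y1, x2, y2) so heavily overlapping boxes collide."""
    return tuple(int(round(v / cell)) for v in box)

def hashed_nms(boxes, scores, cell=32):
    best = {}  # cell key -> (score, box index)
    for i, (box, score) in enumerate(zip(boxes, scores)):
        key = box_key(box, cell)
        if key not in best or score > best[key][0]:
            best[key] = (score, i)
    return sorted(i for _, i in best.values())
```

Unlike classic NMS, which compares every surviving box pair by IoU, this runs in a single linear pass, which is where the speed-up comes from.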
arXiv Detail & Related papers (2020-05-22T23:45:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.