AlDBaran: Towards Blazingly Fast State Commitments for Blockchains
- URL: http://arxiv.org/abs/2508.10493v1
- Date: Thu, 14 Aug 2025 09:52:15 GMT
- Title: AlDBaran: Towards Blazingly Fast State Commitments for Blockchains
- Authors: Bernhard Kauer, Aleksandr Petrosyan, Benjamin Livshits
- Abstract summary: AlDBaran is an authenticated data structure capable of handling state updates efficiently at a network throughput of 50 Gbps. AlDBaran provides support for historical state proofs, which facilitates a wide array of novel applications. On consumer-level portable hardware, it achieves approximately 8 million updates/s in an in-memory setting and 5 million updates/s with snapshots at sub-second intervals.
- Score: 52.39305978984572
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The fundamental basis for maintaining integrity within contemporary blockchain systems is provided by authenticated databases. Our analysis indicates that a significant portion of the approaches applied in this domain fail to sufficiently meet the stringent requirements of systems processing transactions at rates of multi-million TPS. AlDBaran signifies a substantial advancement in authenticated databases. By eliminating disk I/O operations from the critical path, implementing prefetching strategies, and refining the update mechanism of the Merkle tree, we have engineered an authenticated data structure capable of handling state updates efficiently at a network throughput of 50 Gbps. This throughput capacity significantly surpasses any empirically documented blockchain throughput, guaranteeing the ability of even the most high-throughput blockchains to generate state commitments effectively. AlDBaran provides support for historical state proofs, which facilitates a wide array of novel applications. For instance, the deployment of AlDBaran could enable blockchains that do not currently support state commitments to offer functionalities for light clients and/or implement rollups. When benchmarked against alternative authenticated data structure projects, AlDBaran exhibits superior performance and simplicity. In particular, AlDBaran achieves speeds of approximately 48 million updates per second using an identical machine configuration. This characteristic renders AlDBaran an attractive solution for resource-limited environments, as its historical data capabilities can be modularly isolated (and deactivated), which further enhances performance. On consumer-level portable hardware, it achieves approximately 8 million updates/s in an in-memory setting and 5 million updates/s with snapshots at sub-second intervals, illustrating compelling and cost-effective scalability.
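The abstract describes AlDBaran's technique only at a high level: batched, fully in-memory Merkle-tree updates with disk I/O kept off the critical path. As a rough illustration of the batched-update idea only, not of AlDBaran's actual implementation, a toy in-memory Merkle commitment that rehashes only the tree paths touched by a batch might look like this (all names and structural choices here are hypothetical):

```python
import hashlib


def h(data: bytes) -> bytes:
    """SHA-256; the paper's actual hash function and node encoding may differ."""
    return hashlib.sha256(data).digest()


class MerkleState:
    """Toy fixed-depth, fully in-memory Merkle tree over 2**depth leaves.

    Illustrates one idea from the abstract: keep the structure in memory and
    apply state updates in batches, so each internal node on a touched path
    is rehashed at most once per batch. AlDBaran's real design (prefetching,
    snapshots, historical proofs) is not reproduced here.
    """

    def __init__(self, depth: int):
        self.depth = depth
        # levels[0] = leaf hashes, levels[depth] = [root]
        self.levels = [[h(b"")] * (1 << depth)]
        for _ in range(depth):
            prev = self.levels[-1]
            self.levels.append(
                [h(prev[2 * i] + prev[2 * i + 1]) for i in range(len(prev) // 2)]
            )

    def root(self) -> bytes:
        return self.levels[self.depth][0]

    def apply_batch(self, updates: dict) -> bytes:
        """Apply {leaf_index: value} updates, rehashing only dirty paths."""
        dirty = set()
        for idx, value in updates.items():
            self.levels[0][idx] = h(value)
            dirty.add(idx)
        for lvl in range(self.depth):
            parents = {i // 2 for i in dirty}
            for p in parents:  # each dirty parent is rehashed exactly once
                left = self.levels[lvl][2 * p]
                right = self.levels[lvl][2 * p + 1]
                self.levels[lvl + 1][p] = h(left + right)
            dirty = parents
        return self.root()
```

Batching amortizes the cost of rehashing shared ancestors: two updates under the same subtree cause each common ancestor to be recomputed once rather than twice, which is one reason a purely in-memory batched update loop can sustain millions of updates per second.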
Related papers
- Towards Efficient Agents: A Co-Design of Inference Architecture and System [66.59916327634639]
This paper presents AgentInfer, a unified framework for end-to-end agent acceleration. We decompose the problem into four synergistic components: AgentCollab, AgentSched, AgentSAM, and AgentCompress. Experiments on the BrowseComp-zh and DeepDiver benchmarks demonstrate that, through the synergistic collaboration of these methods, AgentInfer reduces ineffective token consumption by over 50%.
arXiv Detail & Related papers (2025-12-20T12:06:13Z)
- Fast-dLLM v2: Efficient Block-Diffusion LLM [64.38006546510337]
Fast-dLLM v2 is a block diffusion language model that adapts pretrained AR models into dLLMs for parallel text generation. This represents a 500x reduction in training data compared to full-attention diffusion LLMs such as Dream (580B tokens).
arXiv Detail & Related papers (2025-09-30T14:40:18Z)
- Ratio1 -- AI meta-OS [35.18016233072556]
Ratio1 is a decentralized MLOps protocol that unifies AI model development, deployment, and inference across heterogeneous edge devices. Its key innovation is an integrated blockchain-based framework that transforms idle computing resources into a trustless global supercomputer.
arXiv Detail & Related papers (2025-09-05T07:41:54Z)
- VerlTool: Towards Holistic Agentic Reinforcement Learning with Tool Use [78.29315418819074]
We introduce VerlTool, a unified and modular framework that addresses these limitations through systematic design principles. Our framework formalizes ARLT as multi-turn trajectories with multi-modal observation tokens (text/image/video), extending beyond single-turn RLVR paradigms. The modular plugin architecture enables rapid tool integration requiring only lightweight Python definitions.
arXiv Detail & Related papers (2025-09-01T01:45:18Z)
- Scaling Linear Attention with Sparse State Expansion [58.161410995744596]
The Transformer architecture struggles with long-context scenarios due to quadratic computation and linear memory growth. We introduce a row-sparse update formulation for linear attention by conceptualizing state updating as information classification. Second, we present Sparse State Expansion (SSE) within the sparse framework, which expands the contextual state into multiple partitions.
arXiv Detail & Related papers (2025-07-22T13:27:31Z)
- Lightweight and High-Throughput Secure Logging for Internet of Things and Cold Cloud Continuum [2.156208381257605]
We present Parallel Optimal Signatures for Secure Logging (POSLO), a novel digital signature framework. POSLO offers constant-size signatures and public keys, near-optimal signing efficiency, and fine-to-coarse tunable verification for log auditing. For example, POSLO can verify 2^31 log entries per second on a mid-range consumer GPU while being significantly more compact than the state of the art.
arXiv Detail & Related papers (2025-06-10T13:26:36Z)
- Compress, Gather, and Recompute: REFORMing Long-Context Processing in Transformers [58.98923344096319]
REFORM is a novel inference framework that efficiently handles long contexts through a two-phase approach. It achieves over 50% and 27% performance gains on RULER and BABILong, respectively, at 1M context length. It also outperforms baselines on Infinite-Bench and MM-NIAH, demonstrating flexibility across diverse tasks and domains.
arXiv Detail & Related papers (2025-06-01T23:49:14Z)
- LogStamping: A blockchain-based log auditing approach for large-scale systems [1.1720477891411445]
This paper presents a blockchain-based log management framework. The framework integrates a hybrid on-chain and off-chain storage model. It is designed to handle the high-volume log generation typical in large-scale systems.
arXiv Detail & Related papers (2025-05-22T19:27:44Z)
- vApps: Verifiable Applications at Internet Scale [2.931173822616461]
Verifiable Applications (vApps) is a novel development framework designed to streamline the creation and deployment of verifiable computing applications. vApps offer a unified Rust-based Domain-Specific Language (DSL) within a comprehensive SDK. This eases the developer's burden in securing diverse software components, allowing them to focus on application logic.
arXiv Detail & Related papers (2025-04-21T02:19:06Z)
- AsCAN: Asymmetric Convolution-Attention Networks for Efficient Recognition and Generation [48.82264764771652]
We introduce AsCAN -- a hybrid architecture, combining both convolutional and transformer blocks.
AsCAN supports a variety of tasks: recognition, segmentation, class-conditional image generation.
We then scale the same architecture to solve a large-scale text-to-image task and show state-of-the-art performance.
arXiv Detail & Related papers (2024-11-07T18:43:17Z)
- Novel Architecture for Distributed Travel Data Integration and Service Provision Using Microservices [1.03590082373586]
This paper introduces an architecture for enhancing the flexibility and performance of an airline reservation system.
The design incorporates Redis cache technologies, two different messaging systems (Kafka and RabbitMQ), and two types of architectural storage (MongoDB and Docker).
The architecture provides an impressive level of data consistency at 99.5% and a latency of data propagation of less than 75 ms.
arXiv Detail & Related papers (2024-10-31T17:41:14Z)
- FANTAstic SEquences and Where to Find Them: Faithful and Efficient API Call Generation through State-tracked Constrained Decoding and Reranking [57.53742155914176]
API call generation is the cornerstone of large language models' tool-using ability.
Existing supervised and in-context learning approaches suffer from high training costs, poor data efficiency, and generated API calls that can be unfaithful to the API documentation and the user's request.
We propose an output-side optimization approach called FANTASE to address these limitations.
arXiv Detail & Related papers (2024-07-18T23:44:02Z)
- Blockchain-enabled Trustworthy Federated Unlearning [50.01101423318312]
Federated unlearning is a promising paradigm for protecting the data ownership of distributed clients.
Existing works require central servers to retain the historical model parameters from distributed clients.
This paper proposes a new blockchain-enabled trustworthy federated unlearning framework.
arXiv Detail & Related papers (2024-01-29T07:04:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.