Single Round-trip Hierarchical ORAM via Succinct Indices
- URL: http://arxiv.org/abs/2208.07489v3
- Date: Thu, 13 Jun 2024 00:16:38 GMT
- Title: Single Round-trip Hierarchical ORAM via Succinct Indices
- Authors: William Holland, Olga Ohrimenko, Anthony Wirth
- Abstract summary: Rank ORAM can retrieve data with a single round-trip of communication.
A compressed client-side data structure stores, implicitly, the location of each element at the server.
- Score: 5.437298646956505
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Access patterns to data stored remotely create a side channel that is known to leak information even if the content of the data is encrypted. To protect against access pattern leakage, Oblivious RAM is a cryptographic primitive that obscures the (actual) access trace at the expense of additional accesses and periodic shuffling of the server's contents. A class of ORAM solutions, known as Hierarchical ORAM, has achieved theoretically \emph{optimal} logarithmic bandwidth overhead. However, to date, Hierarchical ORAMs are seen as only theoretical artifacts. This is because they require a large number of communication round-trips to locate (shuffled) elements at the server and involve complex building blocks such as cuckoo hash tables. To address the limitations of Hierarchical ORAM schemes in practice, we introduce Rank ORAM, the first Hierarchical ORAM that can retrieve data with a single round-trip of communication (as compared to a logarithmic number in previous work). To support non-interactive communication, we introduce a \emph{compressed} client-side data structure that stores, implicitly, the location of each element at the server. In addition, this location metadata enables a simple protocol design that dispenses with the need for complex cuckoo hash tables. Rank ORAM requires asymptotically smaller memory than existing (non-Hierarchical) state-of-the-art practical ORAM schemes (e.g., Ring ORAM) while maintaining comparable bandwidth performance. Our experiments on real network file-system traces demonstrate a reduction in client memory, against existing approaches, of a factor of $100$. For example, when outsourcing a database of $17.5$TB, the required client memory is only $290$MB vs. $40$GB for standard approaches.
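To make the single round-trip claim concrete, here is a minimal, illustrative Python sketch (not the authors' construction): the client resolves an element's server location from local metadata, so each access needs only one request. The class and method names are hypothetical, the plain dictionary stands in for Rank ORAM's compressed index, and oblivious reshuffling is omitted entirely.

```python
# Minimal sketch: single round-trip reads via a client-side position map.
# NOT Rank ORAM itself: the real scheme stores this location metadata in a
# compressed (succinct) structure and obliviously reshuffles server contents.

class ToyServer:
    """Stands in for remote storage addressed by physical location."""
    def __init__(self, blocks):
        self.blocks = blocks              # location -> (encrypted) block

    def fetch(self, location):
        return self.blocks[location]      # one request, one response


class SingleTripClient:
    def __init__(self, server, position_map):
        self.server = server
        self.position_map = position_map  # logical key -> current server location

    def read(self, key):
        loc = self.position_map[key]      # resolved locally, no extra round-trips
        return self.server.fetch(loc)     # exactly one round-trip to the server


if __name__ == "__main__":
    server = ToyServer({0: "blk-A", 1: "blk-B", 2: "blk-C"})
    client = SingleTripClient(server, {"a": 0, "b": 1, "c": 2})
    print(client.read("b"))               # -> blk-B
```

The role of the compressed index is precisely to keep this per-element location metadata far smaller than a plain dictionary, which is what keeps the client state in the hundreds of megabytes for a multi-terabyte database, per the numbers in the abstract.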
Related papers
- Palermo: Improving the Performance of Oblivious Memory using Protocol-Hardware Co-Design [13.353250150074066]
Oblivious RAM (ORAM) hides the memory access patterns, enhancing data privacy by preventing attackers from discovering sensitive information.
The performance of ORAM is often limited by its inherent trade-off between security and efficiency.
This paper presents Palermo: a protocol-hardware co-design to improve ORAM performance.
arXiv Detail & Related papers (2024-11-08T08:31:12Z) - BitStack: Fine-Grained Size Control for Compressed Large Language Models in Variable Memory Environments [53.71158537264695]
Large language models (LLMs) have revolutionized numerous applications, yet their deployment remains challenged by memory constraints on local devices.
We introduce BitStack, a novel, training-free weight compression approach that enables megabyte-level trade-offs between memory usage and model performance.
arXiv Detail & Related papers (2024-10-31T13:26:11Z) - Optimal Offline ORAM with Perfect Security via Simple Oblivious Priority Queues [0.0]
We study the so-called offline ORAM in which the sequence of memory access locations to be hidden is known in advance.
We obtain the first optimal offline ORAM with perfect security from oblivious priority queues via time-forward processing.
Building on our construction, we additionally present efficient external-memory instantiations of our oblivious, perfectly-secure construction.
arXiv Detail & Related papers (2024-09-18T14:31:33Z) - H$_2$O$_2$RAM: A High-Performance Hierarchical Doubly Oblivious RAM [14.803814604985957]
The combination of Oblivious RAM (ORAM) and Trusted Execution Environments (TEEs) has found numerous real-world applications due to their complementary nature.
We introduce several new efficient oblivious components to build a high-performance hierarchical doubly oblivious RAM (H$_2$O$_2$RAM).
The results indicate that H$_2$O$_2$RAM reduces execution time by up to $\sim 10^3$ times and saves memory usage by $5\sim44$ times compared to state-of-the-art solutions.
arXiv Detail & Related papers (2024-09-11T10:31:14Z) - STT-RAM-based Hierarchical In-Memory Computing [1.1470070927586018]
In-memory computing promises to overcome the von Neumann bottleneck in computer systems by performing computations directly within the memory.
Previous research has suggested using Spin-Transfer Torque RAM (STT-RAM) for in-memory computing due to its non-volatility, low leakage power, high density, endurance, and commercial viability.
This paper explores hierarchical in-memory computing, where different levels of the memory hierarchy are augmented with processing elements to optimize workload execution.
arXiv Detail & Related papers (2024-07-29T01:43:26Z) - Hierarchical Context Merging: Better Long Context Understanding for Pre-trained LLMs [61.40047491337793]
We present Hierarchical cOntext MERging (HOMER), a new training-free scheme designed to overcome the context-length limitations of pre-trained large language models.
HOMER uses a divide-and-conquer algorithm, dividing long inputs into manageable chunks.
A token reduction technique precedes each merging, ensuring memory usage efficiency.
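As a rough illustration of the divide-and-conquer merging described above, here is a toy Python sketch (not the HOMER implementation; the token scores, reduction rule, and merge operator are placeholder assumptions): each chunk is reduced to its highest-scoring tokens before every pairwise merge, so the working set stays bounded as the hierarchy grows.

```python
# Toy sketch of hierarchical chunk merging with token reduction before each
# merge. Placeholder scoring/merge logic only; HOMER operates on transformer
# hidden states and uses its own reduction criterion.
from typing import List, Tuple

Token = Tuple[str, float]   # (token, importance score) -- hypothetical representation

def reduce_tokens(tokens: List[Token], keep: int) -> List[Token]:
    """Keep the `keep` highest-scoring tokens, preserving their original order."""
    ranked = sorted(range(len(tokens)), key=lambda i: -tokens[i][1])[:keep]
    return [tokens[i] for i in sorted(ranked)]

def merge_hierarchically(chunks: List[List[Token]], keep: int) -> List[Token]:
    """Merge chunks pairwise, level by level, reducing each chunk before merging."""
    level = chunks
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            reduced = [reduce_tokens(c, keep) for c in level[i:i + 2]]
            nxt.append([tok for c in reduced for tok in c])
        level = nxt
    return level[0]

if __name__ == "__main__":
    chunks = [[("a", 0.9), ("b", 0.1)], [("c", 0.5), ("d", 0.4)],
              [("e", 0.8), ("f", 0.2)], [("g", 0.3), ("h", 0.7)]]
    print(merge_hierarchically(chunks, keep=1))   # -> [('a', 0.9), ('e', 0.8)]
```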
arXiv Detail & Related papers (2024-04-16T06:34:08Z) - RelayAttention for Efficient Large Language Model Serving with Long System Prompts [59.50256661158862]
This paper aims to improve the efficiency of LLM services that involve long system prompts.
Handling these system prompts requires heavily redundant memory accesses in existing causal attention algorithms.
We propose RelayAttention, an attention algorithm that allows reading hidden states from DRAM exactly once for a batch of input tokens.
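To convey the idea of reading the shared system-prompt state once per batch, here is an illustrative numpy sketch under simplifying assumptions (single head, no causal mask, hypothetical function name; not the RelayAttention kernel): attention over a single shared copy of the prefix keys/values is computed with one batched matmul and then merged with per-request attention by combining the two partial softmax sums.

```python
# Illustrative sketch: attention over a shared prefix KV cache (one copy for
# the whole batch) merged with per-request attention via partial softmax sums.
import numpy as np

def attention_with_shared_prefix(q, k_sys, v_sys, k_req, v_req):
    """q: (B, d); k_sys, v_sys: (S, d) shared; k_req, v_req: (B, T, d) per request."""
    d = q.shape[-1]
    s_sys = q @ k_sys.T / np.sqrt(d)                         # (B, S): one pass over shared keys
    s_req = np.einsum('bd,btd->bt', q, k_req) / np.sqrt(d)   # (B, T): per-request keys

    m = np.maximum(s_sys.max(-1), s_req.max(-1))[:, None]    # shared max for stability
    e_sys, e_req = np.exp(s_sys - m), np.exp(s_req - m)

    num = e_sys @ v_sys + np.einsum('bt,btd->bd', e_req, v_req)
    den = e_sys.sum(-1, keepdims=True) + e_req.sum(-1, keepdims=True)
    return num / den    # equals softmax attention over the concatenated [sys | req] keys

rng = np.random.default_rng(0)
q, k_sys, v_sys = rng.normal(size=(2, 8)), rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
k_req, v_req = rng.normal(size=(2, 3, 8)), rng.normal(size=(2, 3, 8))
out = attention_with_shared_prefix(q, k_sys, v_sys, k_req, v_req)   # shape (2, 8)
```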
arXiv Detail & Related papers (2024-02-22T18:58:28Z) - Topology-aware Embedding Memory for Continual Learning on Expanding Networks [63.35819388164267]
We present a framework, PDGNNs with Topology-aware Embedding Memory (TEM), to tackle the memory explosion problem using memory replay techniques.
PDGNNs with TEM significantly outperform state-of-the-art techniques.
arXiv Detail & Related papers (2024-01-24T03:03:17Z) - Efficient and Error-Resilient Data Access Protocols for a Limited-Sized Quantum Random Access Memory [7.304498344470287]
We focus on accessing larger data sizes without continually increasing the size of the QRAM.
We propose a novel protocol for loading data with larger word lengths $k$ without increasing the number of QRAM levels $n$.
By exploiting the parallelism in the data query process, our protocol achieves a time complexity of $O(n+k)$ and improves error scaling performance.
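For intuition on the $O(n+k)$ claim, here is a classical pipelining analogy in Python (purely illustrative, not the quantum protocol): streaming $k$ words through an $n$-stage pipeline finishes in $n + k - 1$ steps, whereas pushing each word through all $n$ stages before starting the next takes $n \cdot k$ steps.

```python
# Classical pipelining analogy for the O(n + k) scaling (not the quantum protocol).
def pipelined_steps(n_levels: int, k_words: int) -> int:
    # Stage i of word j finishes at step i + j, so the last word exits at n + k - 1.
    return n_levels + k_words - 1

def sequential_steps(n_levels: int, k_words: int) -> int:
    # Each word traverses all n levels before the next one starts.
    return n_levels * k_words

if __name__ == "__main__":
    n, k = 20, 64
    print(pipelined_steps(n, k), sequential_steps(n, k))   # 83 vs 1280
```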
arXiv Detail & Related papers (2023-03-09T12:21:18Z) - NumS: Scalable Array Programming for the Cloud [82.827921577004]
We present NumS, an array programming library that optimizes NumPy-like expressions on task-based distributed systems.
This is achieved through a novel scheduler called Load Simulated Hierarchical Scheduling (LSHS).
We show that LSHS enhances performance on Ray by decreasing network load by a factor of 2x, requiring 4x less memory, and reducing execution time by 10x on the logistic regression problem.
arXiv Detail & Related papers (2022-06-28T20:13:40Z) - Parallelising the Queries in Bucket Brigade Quantum RAM [69.43216268165402]
Quantum algorithms often use quantum RAMs (QRAM) for accessing information stored in a database-like manner.
We show a systematic method to significantly reduce the effective query time by using Clifford+T gate parallelism.
We conclude that, in theory, fault-tolerant bucket brigade quantum RAM queries can be performed at approximately the speed of classical RAM.
arXiv Detail & Related papers (2020-02-21T14:50:03Z)