180 Days After EIP-4844: Will Blob Sharing Solve Dilemma for Small Rollups?
- URL: http://arxiv.org/abs/2410.04111v2
- Date: Fri, 11 Oct 2024 18:10:24 GMT
- Title: 180 Days After EIP-4844: Will Blob Sharing Solve Dilemma for Small Rollups?
- Authors: Suhyeon Lee
- Abstract summary: This paper examines the effectiveness of blob sharing based on real-world data collected six months after the implementation of EIP-4844.
By simulating cost changes using a simple blob sharing format, we demonstrate that blob sharing can substantially reduce costs and improve DA service quality for small rollups.
- Score: 2.88268082568407
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The introduction of blobs through EIP-4844 has significantly reduced the Data Availability (DA) costs for rollups on Ethereum. However, due to the fixed size of blobs at 128 KB, rollups with low data throughput face a dilemma: they either use blobs inefficiently or decrease the frequency of DA submissions. Blob sharing, where multiple rollups share a single blob, has been proposed as a solution to this problem. This paper examines the effectiveness of blob sharing based on real-world data collected approximately six months after the implementation of EIP-4844. By simulating cost changes using a simple blob sharing format, we demonstrate that blob sharing can substantially reduce costs and improve DA service quality for small rollups, effectively resolving their dilemma. Notably, we observed USD cost reductions exceeding 85% for most rollups when they cooperate, attributable to the smoothing effect of the blob base fee achieved through blob sharing.
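To make the mechanism concrete, here is a minimal Python sketch of both halves of the argument: a toy length-prefixed framing for packing several rollups' batches into one 128 KB blob (the framing layout, field widths, and the `pack_shared_blob`/`unpack_shared_blob` helpers are illustrative assumptions, not the format used in the paper), followed by the `fake_exponential` helper as specified in EIP-4844 for the blob base fee, which is why posting fewer blobs smooths the fee.

```python
# Toy sketch of blob sharing under EIP-4844's fixed 128 KB blob size.
# The framing below (4-byte rollup id + 4-byte length per batch) is an
# illustrative assumption, not the sharing format used in the paper.
import struct

BLOB_SIZE = 128 * 1024  # fixed blob size introduced by EIP-4844

def pack_shared_blob(batches: dict[int, bytes]) -> bytes:
    """Frame several rollups' batches ({rollup_id: data}) into one blob."""
    out = bytearray()
    for rollup_id, data in batches.items():
        out += struct.pack(">II", rollup_id, len(data))
        out += data
    if len(out) > BLOB_SIZE:
        raise ValueError("batches exceed one blob; split across blobs")
    return bytes(out.ljust(BLOB_SIZE, b"\x00"))  # blobs are always full-size

def unpack_shared_blob(blob: bytes) -> dict[int, bytes]:
    """Recover each rollup's batch from a shared blob."""
    batches, offset = {}, 0
    while offset + 8 <= len(blob):
        rollup_id, length = struct.unpack_from(">II", blob, offset)
        if length == 0:  # reached zero padding
            break
        batches[rollup_id] = blob[offset + 8 : offset + 8 + length]
        offset += 8 + length
    return batches

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e**(numerator / denominator),
    as specified in EIP-4844 for computing the blob base fee."""
    i, output, accum = 1, 0, factor * denominator
    while accum > 0:
        output += accum
        accum = accum * numerator // (denominator * i)
        i += 1
    return output // denominator

# The dilemma: an 8 KB batch posted alone uses ~6% of the blob it pays for.
shared = pack_shared_blob({1: b"a" * 8192, 2: b"b" * 65536, 3: b"c" * 40000})
print(len(unpack_shared_blob(shared)), "rollups share one blob")

# The smoothing effect: sharing posts fewer blobs, lowering excess_blob_gas
# and hence the exponential blob base fee (constants from EIP-4844).
MIN_BASE_FEE_PER_BLOB_GAS, BLOB_BASE_FEE_UPDATE_FRACTION = 1, 3338477
for excess in (0, 10**7, 3 * 10**7):
    fee = fake_exponential(MIN_BASE_FEE_PER_BLOB_GAS, excess,
                           BLOB_BASE_FEE_UPDATE_FRACTION)
    print(f"excess_blob_gas={excess:>9,} -> blob base fee {fee} wei")
```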
Related papers
- ME: Trigger Element Combination Backdoor Attack on Copyright Infringement [76.0062084678398]
SilentBadDiffusion (SBD) is a recently proposed method that showed outstanding performance in attacking Stable Diffusion (SD) in text-to-image tasks. In this paper, we introduce new datasets for research on attacks like SBD, and propose a Multi-Element (ME) attack method based on SBD. The Copyright Infringement Rate (CIR) / First Attack Epoch (FAE) obtained on the two new datasets were 16.78% / 39.50 and 51.20% / 23.60, respectively, close to or even outperforming the benchmark Pokemon and Midjourney datasets.
arXiv Detail & Related papers (2025-06-12T14:51:27Z) - Fixing Data That Hurts Performance: Cascading LLMs to Relabel Hard Negatives for Robust Information Retrieval [54.68474647525667]
We propose a simple, cost-effective approach using cascading LLM prompts to identify and relabel hard negatives. We prune 8 out of 15 datasets from the BGE collection and increase nDCG@10 on BEIR by 1.0 point. Similar gains are observed for rerankers fine-tuned on the relabeled data, such as Qwen2.5-3B on BEIR.
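The cascade pattern the summary describes is easy to sketch: a cheap judge model screens each (query, hard-negative) pair, and only uncertain cases escalate to a stronger model. In the hypothetical Python sketch below, the prompt wording, model names, and `call_llm` client are stand-ins, not the paper's actual pipeline.

```python
# Sketch of a cascading-LLM relabeling pass for hard negatives. The cascade
# (cheap model first, stronger model only on escalation) follows the summary
# above; prompts, model names, and call_llm() are hypothetical stand-ins.
def call_llm(model: str, prompt: str) -> str:
    """Hypothetical LLM client; replace with your provider's API."""
    raise NotImplementedError

PROMPT = (
    "Query: {query}\nPassage: {passage}\n"
    "Does the passage answer the query? Reply RELEVANT, IRRELEVANT, or UNSURE."
)

def relabel_hard_negative(query: str, passage: str) -> str:
    """Return 'positive' (a false negative to drop or relabel) or 'negative'."""
    verdict = call_llm("cheap-model", PROMPT.format(query=query, passage=passage))
    if verdict.strip() == "UNSURE":
        # Escalate only the ambiguous cases to the expensive model.
        verdict = call_llm("strong-model", PROMPT.format(query=query, passage=passage))
    return "positive" if verdict.strip() == "RELEVANT" else "negative"
```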
arXiv Detail & Related papers (2025-05-22T17:47:57Z) - Resource-Efficient Federated Fine-Tuning Large Language Models for Heterogeneous Data [16.844142562389443]
Fine-tuning large language models (LLMs) via federated learning, i.e., FedLLM, has been proposed to adapt LLMs for various downstream applications in a privacy-preserving way.
To reduce the fine-tuning costs on resource-constrained devices, FedLoRA is proposed to fine-tune only a small subset of model parameters by integrating low-rank adaptation (LoRA) into FedLLM.
Here, we propose a hierarchical FedLoRA framework, termed HierFedLoRA, to address these challenges.
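As a rough illustration of hierarchical aggregation over LoRA parameters, the numpy sketch below runs plain FedAvg at two levels (clients within an edge group, then across groups). The grouping, uniform weights, and the choice to average the LoRA A and B matrices separately (which only approximates averaging the products B @ A) are assumptions for illustration, not HierFedLoRA's actual design.

```python
# Toy two-level (client -> edge group -> global) FedAvg over LoRA adapters.
import numpy as np

def fedavg(adapters: list[dict[str, np.ndarray]]) -> dict[str, np.ndarray]:
    """Uniform average of parameter dicts, e.g. {'lora_A': ..., 'lora_B': ...}."""
    return {k: np.mean([a[k] for a in adapters], axis=0) for k in adapters[0]}

rng = np.random.default_rng(0)

def make_client() -> dict[str, np.ndarray]:
    # Rank-8 LoRA adapter for a 768-dim layer (shapes are illustrative).
    return {"lora_A": rng.normal(size=(8, 768)), "lora_B": rng.normal(size=(768, 8))}

groups = [[make_client() for _ in range(4)] for _ in range(3)]  # 3 groups x 4 clients
edge_models = [fedavg(g) for g in groups]   # level 1: aggregate within each group
global_model = fedavg(edge_models)          # level 2: aggregate across groups
print(global_model["lora_A"].shape, global_model["lora_B"].shape)
```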
arXiv Detail & Related papers (2025-03-27T07:05:22Z) - Two Sides of the Same Coin: Large-scale Measurements of Builder and Rollup after EIP-4844 [13.8621035326112]
We study emerging strategies in the builder and rollup markets after EIP-4844, using a dataset containing over a hundred million transactions.
We find that the efficiency of builder and rollup strategies is interdependent, akin to two sides of the same coin -- both cannot be optimized simultaneously.
arXiv Detail & Related papers (2024-11-06T13:09:23Z) - Diffusion Soup: Model Merging for Text-to-Image Diffusion Models [90.01635703779183]
We present Diffusion Soup, a compartmentalization method for Text-to-Image Generation that averages the weights of diffusion models trained on sharded data.
By construction, our approach enables training-free continual learning and unlearning with no additional memory or inference costs.
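Weight averaging itself is nearly a one-liner, which is what makes the training-free unlearning claim work: to forget a shard, simply re-average without its checkpoint. A minimal numpy sketch follows (real diffusion checkpoints would be framework state dicts; small arrays stand in here).

```python
# Minimal sketch of "souping": uniform weight averaging of same-architecture
# models trained on different data shards.
import numpy as np

def soup(checkpoints: list[dict[str, np.ndarray]]) -> dict[str, np.ndarray]:
    """Average checkpoints parameter-by-parameter."""
    return {name: np.mean([ckpt[name] for ckpt in checkpoints], axis=0)
            for name in checkpoints[0]}

# "Unlearning" a shard is training-free: re-average without its checkpoint.
shard_models = [{"w": np.full((2, 2), float(i))} for i in range(4)]
full_soup = soup(shard_models)                           # all four shards
forget_two = soup(shard_models[:2] + shard_models[3:])   # drop shard 2
print(full_soup["w"][0, 0], forget_two["w"][0, 0])       # 1.5 vs ~1.33
```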
arXiv Detail & Related papers (2024-06-12T17:16:16Z) - CLIP the Bias: How Useful is Balancing Data in Multimodal Learning? [72.19502317793133]
We study the effectiveness of data balancing for mitigating biases in contrastive language-image pretraining (CLIP).
We present a novel algorithm, called Multi-Modal Moment Matching (M4), designed to reduce both representation and association biases.
arXiv Detail & Related papers (2024-03-07T14:43:17Z) - EBFT: Effective and Block-Wise Fine-Tuning for Sparse LLMs [68.41135269685576]
Existing methods for fine-tuning sparse LLMs often suffer from resource-intensive requirements and high retraining costs.
We propose an efficient and fast framework for fine-tuning sparse LLMs based on minimizing reconstruction error.
Our approach involves sampling a small dataset for calibration and utilizing backpropagation to iteratively optimize block-wise reconstruction error.
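A minimal PyTorch sketch of the block-wise reconstruction idea: freeze the dense block as a teacher, and optimize only the surviving weights of its sparse copy to match the teacher's outputs on a small calibration batch. The layer sizes, random mask, and hyperparameters below are illustrative assumptions, not EBFT's exact recipe.

```python
# Sketch of block-wise reconstruction fine-tuning for a sparse block.
import torch

torch.manual_seed(0)
dense = torch.nn.Linear(64, 64)                         # frozen teacher block
sparse = torch.nn.Linear(64, 64)
sparse.load_state_dict(dense.state_dict())
mask = (torch.rand_like(sparse.weight) > 0.5).float()   # 50% unstructured sparsity
with torch.no_grad():
    sparse.weight.mul_(mask)

calib = torch.randn(128, 64)                            # small calibration sample
opt = torch.optim.Adam(sparse.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(sparse(calib), dense(calib).detach())
    loss.backward()
    sparse.weight.grad.mul_(mask)                       # keep pruned weights at zero
    opt.step()
    with torch.no_grad():
        sparse.weight.mul_(mask)                        # re-apply mask after the step
print(f"reconstruction error: {loss.item():.6f}")
```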
arXiv Detail & Related papers (2024-02-19T09:55:32Z) - Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes [68.86687117368247]
We introduce Bonsai, a gradient-free structured pruning method that eliminates the need for backpropagation.
Bonsai not only achieves better compression with fewer resources, but also produces models that are twice as fast as those generated by semi-structured pruning.
Our results show that removing backprop as a requirement can also lead to state-of-the-art efficiency and performance.
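The forward-pass-only idea can be sketched in a few lines: score each structural unit by how much the loss rises when it is ablated, measured purely with inference, then drop the least harmful units. The toy layer and one-shot scoring rule below are illustrative; Bonsai's actual search procedure is more involved.

```python
# Sketch of gradient-free structured pruning: ablate one output unit at a
# time, measure the loss with forward passes only, prune the cheapest units.
import torch

torch.manual_seed(0)
layer = torch.nn.Linear(32, 16)
x, y = torch.randn(256, 32), torch.randn(256, 16)
loss_fn = torch.nn.MSELoss()

@torch.no_grad()
def loss_without(unit: int) -> float:
    """Forward-pass loss with one output neuron (row of W, bias entry) zeroed."""
    saved_w, saved_b = layer.weight[unit].clone(), layer.bias[unit].clone()
    layer.weight[unit], layer.bias[unit] = 0.0, 0.0
    loss = loss_fn(layer(x), y).item()
    layer.weight[unit], layer.bias[unit] = saved_w, saved_b  # restore
    return loss

scores = [loss_without(u) for u in range(16)]           # no backprop anywhere
prune = sorted(range(16), key=lambda u: scores[u])[:4]  # 4 least-harmful units
print("pruned output units:", prune)
```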
arXiv Detail & Related papers (2024-02-08T04:48:26Z) - The Cure is in the Cause: A Filesystem for Container Debloating [3.072029094326428]
Over 50% of the top-downloaded containers have more than 60% bloat, and BAFFS reduces container sizes significantly.
For serverless functions, BAFFS reduces cold start latency by up to 68%.
arXiv Detail & Related papers (2023-05-08T11:41:30Z) - The Resource Problem of Using Linear Layer Leakage Attack in Federated Learning [18.34693758013391]
We show that sparsity can decrease the model size overhead by over 327$\times$ and the computation time by 3.34$\times$ compared to SOTA.
arXiv Detail & Related papers (2023-03-27T01:21:31Z) - Gradient-Free Structured Pruning with Unlabeled Data [57.999191898036706]
We propose a gradient-free structured pruning framework that uses only unlabeled data.
The original FLOP count can be reduced by up to 40% with less than a 4% accuracy loss across all tasks considered.
arXiv Detail & Related papers (2023-03-07T19:12:31Z) - Petals: Collaborative Inference and Fine-tuning of Large Models [78.37798144357977]
Many NLP tasks benefit from using large language models (LLMs) that often have more than 100 billion parameters.
With the release of BLOOM-176B and OPT-175B, everyone can download pretrained models of this scale.
We propose Petals, a system for inference and fine-tuning of large models collaboratively by joining the resources of multiple parties.
arXiv Detail & Related papers (2022-09-02T17:38:03Z) - Federated Split BERT for Heterogeneous Text Classification [25.388324221293203]
We propose a framework, FedSplitBERT, which handles heterogeneous data and decreases the communication cost by splitting the BERT encoder layers into a local part and a global part.
Our framework is ready to use and compatible with many existing federated learning algorithms, including FedAvg, FedProx and FedAdam.
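A minimal numpy sketch of the split-aggregation idea: parameters in the "global" part (embeddings and encoder layers below a split index) are averaged across clients each round, while layers above the split stay client-local, so only the global part crosses the network. The split index and parameter naming below are illustrative assumptions.

```python
# Sketch of split aggregation: FedAvg only the global part of each model.
import numpy as np

SPLIT = 8  # encoder layers [0, SPLIT) are global; the rest stay client-local

def is_global(name: str) -> bool:
    """Global = embeddings + encoder layers below the split index."""
    if not name.startswith("encoder.layer."):
        return True
    return int(name.split(".")[2]) < SPLIT

def aggregate(client_models: list[dict[str, np.ndarray]]) -> list[dict[str, np.ndarray]]:
    """FedAvg the global part only; return the updated per-client models."""
    global_avg = {k: np.mean([m[k] for m in client_models], axis=0)
                  for k in client_models[0] if is_global(k)}
    return [{**m, **global_avg} for m in client_models]

clients = [{"embeddings.word": np.full(4, float(i)),
            "encoder.layer.3.w": np.full(4, float(i)),
            "encoder.layer.10.w": np.full(4, float(i))} for i in range(3)]
updated = aggregate(clients)
print(updated[0]["encoder.layer.3.w"][0],    # 1.0: averaged (global part)
      updated[0]["encoder.layer.10.w"][0])   # 0.0: untouched (local part)
```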
arXiv Detail & Related papers (2022-05-26T12:21:57Z) - LBCF: A Large-Scale Budget-Constrained Causal Forest Algorithm [11.82503645248441]
How to select the right amount of incentive (i.e., treatment) for each user under budget constraints is an important research problem.
We propose a novel tree-based treatment selection technique under budget constraints, called Large-Scale Budget-Constrained Causal Forest (LBCF) algorithm.
We deploy our approach in a real-world scenario on a large-scale video platform, where the platform gives away bonuses in order to increase users' campaign engagement duration.
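Downstream of any uplift estimator, the budget-constrained selection step can be sketched with the textbook greedy rule: rank candidate (user, treatment) options by predicted uplift per unit cost and allocate until the budget runs out. The sketch below is that generic baseline, for illustration only; LBCF's causal-forest estimation itself is not shown.

```python
# Greedy budget-constrained treatment selection over uplift estimates.
def allocate(estimates: list[tuple[str, int, float, float]],
             budget: float) -> dict[str, int]:
    """estimates: (user, treatment_level, predicted_uplift, cost) tuples."""
    chosen: dict[str, int] = {}
    spent = 0.0
    # At most one treatment per user, best uplift-per-cost first.
    for user, level, uplift, cost in sorted(
            estimates, key=lambda e: e[2] / e[3], reverse=True):
        if user not in chosen and uplift > 0 and spent + cost <= budget:
            chosen[user] = level
            spent += cost
    return chosen

estimates = [("alice", 1, 0.8, 2.0), ("alice", 2, 1.1, 5.0),
             ("bob",   1, 0.3, 2.0), ("carol", 1, 0.9, 2.0)]
print(allocate(estimates, budget=5.0))  # {'carol': 1, 'alice': 1}
```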
arXiv Detail & Related papers (2022-01-29T13:21:07Z) - Bayesian Active Summarization [3.1423034006764965]
We introduce Bayesian Active Summarization (BAS) as a method of combining active learning methods with state-of-the-art summarization models.
Our findings suggest that BAS achieves better and more robust performance, compared to random selection.
arXiv Detail & Related papers (2021-10-09T06:51:16Z) - LSDAT: Low-Rank and Sparse Decomposition for Decision-based Adversarial Attack [74.5144793386864]
LSDAT crafts perturbations in the low-dimensional subspace formed by the sparse component of the input sample and that of an adversarial sample.
LSD works directly in the image pixel domain to guarantee that non-$\ell_2$ constraints, such as sparsity, are satisfied.
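The decomposition the attack builds on can be illustrated generically: split an input X into a low-rank part L and a sparse part S by alternating a truncated SVD with thresholding of the residual. The RPCA-style numpy sketch below shows the decomposition only, not LSDAT's query-efficient attack.

```python
# Generic low-rank + sparse split X ~ L + S via alternating truncated SVD
# and hard-thresholding of the residual.
import numpy as np

def low_rank_sparse(X: np.ndarray, rank: int, sparsity: float, iters: int = 25):
    """Return (L, S): L rank-limited, S keeping the largest-|.| residuals."""
    S = np.zeros_like(X)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]       # best rank-r fit
        R = X - L
        cutoff = np.quantile(np.abs(R), 1 - sparsity)  # keep top fraction
        S = np.where(np.abs(R) >= cutoff, R, 0.0)      # sparse residual
    return L, S

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 8)) @ rng.normal(size=(8, 32))  # rank-8 signal
X[rng.random(X.shape) < 0.05] += 10.0                    # sparse corruption
L, S = low_rank_sparse(X, rank=8, sparsity=0.05)
print(np.count_nonzero(S), np.linalg.norm(X - L - S))
```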
arXiv Detail & Related papers (2021-03-19T13:10:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.