Valida ISA Spec, version 1.0: A zk-Optimized Instruction Set Architecture
- URL: http://arxiv.org/abs/2505.08114v1
- Date: Mon, 12 May 2025 23:03:02 GMT
- Title: Valida ISA Spec, version 1.0: A zk-Optimized Instruction Set Architecture
- Authors: Morgan Thomas, Mamy Ratsimbazafy, Marcin Bugaj, Lewis Revill, Carlo Modica, Sebastian Schmidt, Ventali Tan, Daniel Lubarov, Max Gillett, Wei Dai,
- Abstract summary: The Valida instruction set architecture is designed for implementation in zkVMs to optimize for fast, efficient execution proving. This specification intends to guide implementors of zkVMs and compiler toolchains for Valida.
- Score: 2.0790368408580138
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Valida instruction set architecture is designed for implementation in zkVMs to optimize for fast, efficient execution proving. This specification intends to guide implementors of zkVMs and compiler toolchains for Valida. It provides an unambiguous definition of the semantics of Valida programs and may be used as a starting point for formalization efforts.
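To make the idea of an "unambiguous definition of the semantics" concrete, here is a minimal sketch of a small-step interpreter for a toy register machine. The opcodes, operand layout, and register names below are invented for illustration and are not Valida's actual ISA; a real zkVM semantics would be defined over the spec's own instruction encodings.

```python
# Hypothetical sketch (NOT Valida's actual ISA): a tiny register-machine
# interpreter showing what an unambiguous, implementation-guiding
# semantics definition might look like. Opcodes are invented.

def step(state, program):
    """Execute one instruction; return the new machine state."""
    pc, regs = state
    op, *args = program[pc]
    if op == "imm":        # regs[d] <- constant
        d, c = args
        return (pc + 1, {**regs, d: c})
    if op == "add":        # regs[d] <- regs[a] + regs[b]
        d, a, b = args
        return (pc + 1, {**regs, d: regs[a] + regs[b]})
    if op == "jnz":        # jump to target if regs[a] != 0
        a, target = args
        return (target if regs[a] != 0 else pc + 1, regs)
    raise ValueError(f"unknown opcode: {op}")

def run(program, regs):
    """Run until the program counter falls off the end of the program."""
    state = (0, regs)
    while state[0] < len(program):
        state = step(state, program)
    return state[1]

# Compute 2 + 3 into r2.
prog = [("imm", "r0", 2), ("imm", "r1", 3), ("add", "r2", "r0", "r1")]
print(run(prog, {})["r2"])  # 5
```

Defining execution as a pure `step` function over explicit states is one common starting point for the kind of formalization effort the abstract mentions.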
Related papers
- PerfGuard: A Performance-Aware Agent for Visual Content Generation [53.591105729011595]
PerfGuard is a performance-aware agent framework for visual content generation. It integrates tool performance boundaries into task planning and scheduling. It has advantages in tool selection accuracy, execution reliability, and alignment with user intent.
arXiv Detail & Related papers (2026-01-30T05:12:19Z) - BRIDGE: Building Representations In Domain Guided Program Verification [67.36686119518441]
BRIDGE decomposes verification into three interconnected domains: Code, Specifications, and Proofs. We show that this approach substantially improves both accuracy and efficiency beyond standard error feedback methods.
arXiv Detail & Related papers (2025-11-26T06:39:19Z) - AwareCompiler: Agentic Context-Aware Compiler Optimization via a Synergistic Knowledge-Data Driven Framework [42.57224438231615]
This paper introduces AwareCompiler, an agentic framework for compiler optimization built on three key innovations: structured knowledge integration and dataset construction, knowledge-driven adaptive pass generation, and a data-driven hybrid training pipeline. Experimental results on standard benchmarks demonstrate that AwareCompiler significantly outperforms existing baselines in both performance and efficiency.
arXiv Detail & Related papers (2025-10-13T02:02:36Z) - Compiling by Proving: Language-Agnostic Automatic Optimization from Formal Semantics [0.0]
We construct All-Path Reachability Proofs through symbolic execution and compile their graph structure. We consolidate many semantic rewrites into single rules while preserving correctness by construction. We implement this as a language-agnostic extension to the K framework.
arXiv Detail & Related papers (2025-09-26T02:49:08Z) - Promptomatix: An Automatic Prompt Optimization Framework for Large Language Models [72.4723784999432]
Large Language Models (LLMs) perform best with well-crafted prompts, yet prompt engineering remains manual, inconsistent, and inaccessible to non-experts. Promptomatix transforms natural language task descriptions into high-quality prompts without requiring manual tuning or domain expertise. The system analyzes user intent, generates synthetic training data, selects prompting strategies, and refines prompts using cost-aware objectives.
arXiv Detail & Related papers (2025-07-17T18:18:20Z) - Global Microprocessor Correctness in the Presence of Transient Execution [0.16385815610837165]
We advocate for the use of formal specifications, using the theory of refinement. We introduce notions of correctness that can be used to deal with transient execution attacks, including Meltdown and Spectre.
arXiv Detail & Related papers (2025-06-20T16:56:14Z) - RAISE: Reinforenced Adaptive Instruction Selection For Large Language Models [48.63476198469349]
We propose RAISE, a task-objective-driven instruction selection framework. RAISE incorporates the entire instruction fine-tuning process into optimization. It selects instructions at each step based on their expected impact on model performance improvement.
arXiv Detail & Related papers (2025-04-09T21:17:52Z) - Compiler Optimization Testing Based on Optimization-Guided Equivalence Transformations [3.2987550056134873]
We propose a metamorphic testing approach inspired by compiler optimizations. Our approach first employs tailored code construction strategies to generate input programs that satisfy optimization conditions. By comparing the outputs of pre- and post-transformation programs, this approach effectively identifies incorrect optimization bugs.
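The metamorphic idea above reduces to a differential check: build a program, apply a semantics-preserving transformation that the optimization under test should mirror, and diff the outputs. The toy below uses Python expressions with a manual constant-folding transformation as a stand-in; a real harness (and this paper's, presumably) would generate source programs and compare compiled binaries, e.g. at -O0 versus -O2.

```python
# Minimal metamorphic-testing sketch (illustrative only): the pre- and
# post-transformation programs must agree on every input; a mismatch
# would indicate a miscompilation of the corresponding optimization.

def original(x):
    return x * (2 + 3)   # contains a constant-folding opportunity: 2 + 3

def transformed(x):
    return x * 5         # same semantics after folding the constant

def metamorphic_check(inputs):
    """Return every input where pre- and post-transformation outputs differ."""
    return [x for x in inputs if original(x) != transformed(x)]

mismatches = metamorphic_check(range(-100, 100))
print(mismatches)  # [] -- a buggy optimizer surfaces as a non-empty list
```

The key property is that no reference compiler is needed: the transformed program itself serves as the oracle.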
arXiv Detail & Related papers (2025-04-06T01:37:57Z) - DSTC: Direct Preference Learning with Only Self-Generated Tests and Code to Improve Code LMs [56.4979142807426]
We introduce Direct Preference Learning with Only Self-Generated Tests and Code (DSTC). DSTC uses only self-generated code snippets and tests to construct reliable preference pairs.
arXiv Detail & Related papers (2024-11-20T02:03:16Z) - Should AI Optimize Your Code? A Comparative Study of Classical Optimizing Compilers Versus Current Large Language Models [0.0]
Large Language Models (LLMs) raise intriguing questions about the potential of these AI approaches to revolutionize code optimization. This work aims to answer an essential question for the compiler community: "Can AI-driven models revolutionize the way we approach code optimization?" We present a comparative analysis between three classical optimizing compilers and two recent large language models.
arXiv Detail & Related papers (2024-06-17T23:26:41Z) - Contrastive Instruction Tuning [61.97704869248903]
We propose Contrastive Instruction Tuning to maximize the similarity between semantically equivalent instruction-instance pairs.
Experiments on the PromptBench benchmark show that CoIN consistently improves LLMs' robustness to unseen instructions with variations across character, word, sentence, and semantic levels by an average of +2.5% in accuracy.
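The objective described above can be sketched as an InfoNCE-style contrastive loss: pull the embedding of an instruction toward a semantically equivalent variant and away from unrelated instructions. The embeddings, temperature, and loss form below are invented for illustration; CoIN's actual formulation may differ.

```python
# Hedged sketch of a contrastive instruction-tuning objective (not CoIN's
# exact loss): -log softmax of the positive pair's similarity, so the loss
# shrinks as equivalent instruction variants move closer in embedding space.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style loss over one positive and several negative pairs."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / tau for s in sims]
    m = max(logits)                                  # for numerical stability
    denom = sum(math.exp(l - m) for l in logits)
    return -(logits[0] - m - math.log(denom))

# Toy 3-d "embeddings" (invented): an instruction, a paraphrase, and two
# unrelated instructions.
anchor = [0.9, 0.1, 0.0]
positive = [0.85, 0.15, 0.05]
negatives = [[0.0, 1.0, 0.0], [0.1, 0.0, 0.9]]

loss = contrastive_loss(anchor, positive, negatives)
print(f"loss = {loss:.4f}")
```

Minimizing this loss over many (instruction, paraphrase) pairs is what encourages robustness to the character-, word-, sentence-, and semantic-level variations the benchmark measures.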
arXiv Detail & Related papers (2024-02-17T00:09:32Z) - SparseOptimizer: Sparsify Language Models through Moreau-Yosida Regularization and Accelerate via Compiler Co-design [0.685316573653194]
This paper introduces SparseOptimizer, a novel deep learning optimizer that exploits Moreau-Yosida regularization to induce sparsity in large language models such as BERT, ALBERT, and GPT.
SparseOptimizer's plug-and-play functionality eliminates the need for code modifications, making it a universally adaptable tool for a wide array of large language models.
Empirical evaluations on benchmark datasets such as GLUE, RACE, SQuAD1, and SQuAD2 confirm that models sparsified using SparseOptimizer achieve performance comparable to their dense counterparts.
arXiv Detail & Related papers (2023-06-27T17:50:26Z) - Learning to Superoptimize Real-world Programs [79.4140991035247]
We propose a framework to learn to superoptimize real-world programs by using neural sequence-to-sequence models.
We introduce the Big Assembly benchmark, a dataset consisting of over 25K real-world functions mined from open-source projects in x86-64 assembly.
arXiv Detail & Related papers (2021-09-28T05:33:21Z) - Enabling Retargetable Optimizing Compilers for Quantum Accelerators via a Multi-Level Intermediate Representation [78.8942067357231]
We present a multi-level quantum-classical intermediate representation (IR) that enables an optimizing, retargetable, ahead-of-time compiler.
We support the entire gate-based OpenQASM 3 language and provide custom extensions for common quantum programming patterns and improved syntax.
Our work results in compile times that are 1000x faster than standard Pythonic approaches, and 5-10x faster than comparative standalone quantum language compilers.
arXiv Detail & Related papers (2021-09-01T17:29:47Z) - A Case Study of LLVM-Based Analysis for Optimizing SIMD Code Generation [0.0]
This paper presents a methodology for using LLVM-based tools to tune the DCA++ application that targets the new ARM A64FX processor.
By applying these code changes, code speed was increased by 1.98x and 78 GFLOPS were achieved on the A64FX processor.
arXiv Detail & Related papers (2021-06-27T22:38:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.