zkStruDul: Programming zkSNARKs with Structural Duality
- URL: http://arxiv.org/abs/2511.10565v1
- Date: Fri, 14 Nov 2025 01:58:08 GMT
- Title: zkStruDul: Programming zkSNARKs with Structural Duality
- Authors: Rahul Krishnan, Ashley Samuelson, Emily Yao, Ethan Cecchetti,
- Abstract summary: zkStruDul is a language that unifies input transformations and predicate definitions into a single combined abstraction. We provide a source-level semantics and prove its behavior is identical to the projected semantics, allowing straightforward standard reasoning.
- Score: 0.2449909275410287
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Non-Interactive Zero Knowledge (NIZK) proofs, such as zkSNARKs, let one prove knowledge of private data without revealing it or interacting with a verifier. While existing tooling focuses on specifying the predicate to be proven, real-world applications optimize predicate definitions to minimize proof generation overhead, but must correspondingly transform predicate inputs. Implementing these two steps separately duplicates logic that must precisely match to avoid catastrophic security flaws. We address this shortcoming with zkStruDul, a language that unifies input transformations and predicate definitions into a single combined abstraction from which a compiler can project both procedures, eliminating duplicate code and problematic mismatches. zkStruDul provides a high-level abstraction to layer on top of existing NIZK technology and supports important features like recursive proofs. We provide a source-level semantics and prove its behavior is identical to the projected semantics, allowing straightforward standard reasoning.
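The mismatch hazard the abstract describes can be illustrated with a small sketch. All names and the toy hash below are illustrative placeholders, not zkStruDul's actual syntax or a real SNARK-friendly hash: the point is only that one shared definition of the computation is "projected" into both the prover-side input transformation and the predicate the proof system checks, so the two cannot drift apart.

```python
def fold_hash(items, h):
    """Single shared definition of the computation (the unified abstraction)."""
    acc = 0
    for x in items:
        acc = h(acc, x)
    return acc

def toy_hash(a, b):
    # Toy stand-in for a SNARK-friendly hash such as Poseidon.
    return (a * 31 + b) % (2**61 - 1)

# Projection 1: the prover-side input transformation (hash private inputs
# down to a public commitment).
def transform_inputs(secrets):
    return fold_hash(secrets, toy_hash)

# Projection 2: the predicate the proof system checks (re-derive the
# commitment and compare against the public value).
def predicate(public_root, secrets):
    return fold_hash(secrets, toy_hash) == public_root

secrets = [7, 11, 13]
root = transform_inputs(secrets)
assert predicate(root, secrets)  # the two projections agree by construction
```

Written separately by hand, `transform_inputs` and `predicate` could silently diverge (e.g. a different fold order), exactly the class of mismatch the paper's single-abstraction design rules out.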
Related papers
- Misquoted No More: Securely Extracting F* Programs with IO [0.3848364262836075]
Shallow embeddings that use monads to represent effects are popular in proof-oriented languages. Once shallowly embedded programs are verified, they are often extracted to mainstream languages like OCaml or C. We build on this idea, but limit the use of translation validation to a first extraction step.
arXiv Detail & Related papers (2026-02-23T15:37:18Z)
- Lookahead-then-Verify: Reliable Constrained Decoding for Diffusion LLMs under Context-Free Grammars [17.13122301190815]
We present LAVE, a constrained decoding approach specifically designed for dLLMs. Our approach leverages a key property of dLLMs, namely their ability to predict token distributions for all positions in parallel during each forward pass. Extensive experiments across four widely used dLLMs and three representative benchmarks demonstrate that LAVE consistently outperforms existing baselines and achieves substantial improvements in syntactic correctness, while incurring negligible runtime overhead.
arXiv Detail & Related papers (2026-01-31T08:58:15Z)
- Multi-Agent Procedural Graph Extraction with Structural and Logical Refinement [66.51979814832332]
The model formulates procedural graph extraction as a multi-round reasoning process with dedicated structural and logical refinement. Experiments demonstrate that the model achieves substantial improvements in both structural correctness and logical consistency over strong baselines.
arXiv Detail & Related papers (2026-01-27T04:00:48Z)
- VIRO: Robust and Efficient Neuro-Symbolic Reasoning with Verification for Referring Expression Comprehension [51.76841625486355]
Referring Expression Comprehension (REC) aims to localize the image region corresponding to a natural-language query. Recent neuro-symbolic REC approaches leverage large language models (LLMs) and vision-language models (VLMs) to perform compositional reasoning. We introduce VIRO, a neuro-symbolic framework that embeds lightweight operator-level verifiers within reasoning steps.
arXiv Detail & Related papers (2026-01-19T07:21:19Z)
- NormCode: A Semi-Formal Language for Context-Isolated AI Planning [7.3226942109207895]
We present NormCode, a semi-formal language for constructing plans of inferences. Each step operates in data isolation and receives only explicitly passed inputs, which eliminates cross-step contamination by design. We validate NormCode through two demonstrations: (1) a base-X addition algorithm achieving 100 percent accuracy on arbitrary-length inputs, and (2) self-hosted execution of NormCode's own five-phase compiler pipeline.
arXiv Detail & Related papers (2025-12-11T11:50:50Z)
- BRIDGE: Building Representations In Domain Guided Program Verification [67.36686119518441]
BRIDGE decomposes verification into three interconnected domains: Code, Specifications, and Proofs. We show that this approach substantially improves both accuracy and efficiency beyond standard error feedback methods.
arXiv Detail & Related papers (2025-11-26T06:39:19Z)
- QiMeng-CRUX: Narrowing the Gap between Natural Language and Verilog via Core Refined Understanding eXpression [48.84841760215598]
Large language models (LLMs) have shown promising capabilities in hardware description language (HDL) generation. Existing approaches often rely on free-form natural language descriptions that are ambiguous, redundant, and unstructured. We treat hardware code generation as a complex transformation from an open-ended natural language space to a domain-specific, highly constrained target space. We introduce Core Refined Understanding eXpression (CRUX), a structured intermediate space that captures the essential semantics of user intent while organizing the expression for precise Verilog code generation.
arXiv Detail & Related papers (2025-11-25T09:17:32Z)
- Are Language Models Efficient Reasoners? A Perspective from Logic Programming [109.47572890883248]
Modern language models (LMs) exhibit strong deductive reasoning capabilities, yet standard evaluations emphasize correctness while overlooking a key aspect of human-like reasoning: efficiency. We propose a framework for assessing LM reasoning efficiency through the lens of logic programming.
arXiv Detail & Related papers (2025-10-29T15:30:31Z)
- ProofFlow: A Dependency Graph Approach to Faithful Proof Autoformalization [10.021930888250546]
Current approaches fail to preserve the semantic meaning and logical structure of the original human-written argument. We introduce ProofFlow, a novel pipeline that treats structural fidelity as a primary objective. We present a new benchmark of 184 undergraduate-level problems, manually annotated with step-by-step solutions. We also introduce ProofScore, a new composite metric to evaluate syntactic correctness, semantic faithfulness, and structural fidelity.
arXiv Detail & Related papers (2025-10-13T10:20:11Z)
- Truth-Aware Decoding: A Program-Logic Approach to Factual Language Generation [0.2864713389096699]
This paper introduces Truth-Aware Decoding (TAD), a verification-oriented decoding scheme that aligns neural language generation with knowledge bases. Our contributions are fourfold: (i) a constraint-based semantics that renders oracle filtering as a program-logic judgment, (ii) a proof that greedy selection enjoys local likelihood dominance under sound and complete guards, and (iii) an entropy-style invariant that quantifies factual risk via knowledge-aware safe mass.
arXiv Detail & Related papers (2025-10-03T22:11:15Z)
- SLICET5: Static Program Slicing using Language Models with Copy Mechanism and Constrained Decoding [13.61350801915956]
Static program slicing is a fundamental technique in software engineering. SLICET5 is a novel slicing framework that reformulates static program slicing as a sequence-to-sequence task. SLICET5 consistently outperforms state-of-the-art baselines.
arXiv Detail & Related papers (2025-09-22T03:14:47Z)
- ReF Decompile: Relabeling and Function Call Enhanced Decompile [50.86228893636785]
The goal of decompilation is to convert compiled low-level code (e.g., assembly code) back into high-level programming languages. This task supports various reverse engineering applications, such as vulnerability identification, malware analysis, and legacy software migration.
arXiv Detail & Related papers (2025-02-17T12:38:57Z)
- Enhancing LLM Character-Level Manipulation via Divide and Conquer [74.55804812450164]
Large Language Models (LLMs) have demonstrated strong generalization capabilities across a wide range of natural language processing (NLP) tasks. However, they exhibit notable weaknesses in character-level string manipulation, struggling with fundamental operations such as character deletion, insertion, and substitution. We propose Character-Level Manipulation via Divide and Conquer, a novel approach designed to bridge the gap between token-level processing and character-level manipulation.
arXiv Detail & Related papers (2025-02-12T07:37:39Z)
- Localizing Factual Inconsistencies in Attributable Text Generation [74.11403803488643]
We introduce QASemConsistency, a new formalism for localizing factual inconsistencies in attributable text generation. We show that QASemConsistency yields factual consistency scores that correlate well with human judgments.
arXiv Detail & Related papers (2024-10-09T22:53:48Z)
- SCALE: Constructing Structured Natural Language Comment Trees for Software Vulnerability Detection [36.37244302912536]
We propose SCALE, a Structured natural language Comment tree-based vulnerAbiLity dEtection framework built on pre-trained models.
The proposed Structured Natural Language Comment Tree (SCT) integrates the semantics of code statements with code execution sequences.
arXiv Detail & Related papers (2024-03-28T02:20:03Z)
- Log Probabilities Are a Reliable Estimate of Semantic Plausibility in Base and Instruction-Tuned Language Models [50.15455336684986]
We evaluate the effectiveness of LogProbs and basic prompting to measure semantic plausibility.
We find that LogProbs offers a more reliable measure of semantic plausibility than direct zero-shot prompting.
We conclude that, even in the era of prompt-based evaluations, LogProbs constitute a useful metric of semantic plausibility.
arXiv Detail & Related papers (2024-03-21T22:08:44Z)
- SAGA: Summarization-Guided Assert Statement Generation [34.51502565985728]
This paper presents a novel summarization-guided approach for automatically generating assert statements.
We leverage a pre-trained language model as the reference architecture and fine-tune it on the task of assert statement generation.
arXiv Detail & Related papers (2023-05-24T07:03:21Z)
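As a toy illustration of the LogProbs idea from the semantic-plausibility entry in the list above: a sequence's summed token log-probabilities can serve directly as a plausibility score. The hand-built bigram table below is purely illustrative; the paper's evaluation uses an actual language model's token log-probabilities.

```python
import math

# Hand-built bigram "model": P(next | prev). Numbers are illustrative only.
bigram = {
    ("the", "cat"): 0.10, ("cat", "meows"): 0.20,
    ("cat", "barks"): 0.001, ("the", "dog"): 0.10,
    ("dog", "barks"): 0.20,
}

def seq_logprob(tokens, floor=1e-6):
    """Sum of bigram log-probabilities; unseen pairs get a small floor."""
    return sum(math.log(bigram.get(pair, floor))
               for pair in zip(tokens, tokens[1:]))

# A higher sequence log-probability signals higher semantic plausibility.
plausible = seq_logprob(["the", "cat", "meows"])
implausible = seq_logprob(["the", "cat", "barks"])
assert plausible > implausible
```

No prompting is involved: the score falls out of the model's own likelihoods, which is what makes LogProbs cheap to compute alongside generation.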
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.