Privacy-Preserving Proof of Human Authorship via Zero-Knowledge Process Attestation
- URL: http://arxiv.org/abs/2603.00179v1
- Date: Thu, 26 Feb 2026 20:38:19 GMT
- Title: Privacy-Preserving Proof of Human Authorship via Zero-Knowledge Process Attestation
- Authors: David Condrey
- Abstract summary: ZK-PoP is a construction that allows a verifier to confirm that (a) sequential work function chains were computed correctly, (b) behavioral feature vectors fall within human population distributions, and (c) content evolution is consistent with incremental human editing. We prove that ZK-PoP is computationally zero-knowledge, computationally sound, and achieves unlinkability across sessions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Process attestation verifies human authorship by collecting behavioral biometric evidence, including keystroke dynamics, typing patterns, and editing behavior, during the creative process. However, the very data needed to prove authenticity can reveal intimate details about an author's cognitive state, health conditions, and identity, constituting sensitive biometric data under GDPR Article 9. We resolve this privacy-attestation paradox using zero-knowledge proofs. We present ZK-PoP, a construction that allows a verifier to confirm that (a) sequential work function chains were computed correctly, (b) behavioral feature vectors fall within human population distributions, and (c) content evolution is consistent with incremental human editing, all without learning the underlying behavioral data, exact timing, or intermediate content. Our construction uses Groth16 proofs over arithmetic circuits with Pedersen commitments and Bulletproof range proofs. We prove that ZK-PoP is computationally zero-knowledge, computationally sound, and achieves unlinkability across sessions. Evaluation shows proof generation in under 30 seconds for a 1-hour writing session, with 192-byte proofs verifiable in 8.2 ms, while incurring less than 5% accuracy loss in simulation at practical privacy levels (epsilon >= 1.0) compared to non-private baselines.
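The commit-then-prove pattern at the heart of the construction can be illustrated with a toy Pedersen commitment. This is only a sketch: the paper uses Pedersen commitments over an elliptic curve together with Groth16 and Bulletproof range proofs, whereas the group parameters, generators, and feature value below are illustrative assumptions.

```python
# Toy Pedersen commitment over a small Schnorr-style group (illustrative only).
import secrets

P = 2 * 1019 + 1   # 2039, a safe prime (toy parameter)
Q = 1019           # prime order of the subgroup
G = 4              # generator of the order-Q subgroup
H = 16             # second generator; in a real system its discrete log w.r.t. G
                   # must be unknown (here it is known, which is why this is a toy)

def commit(value: int, blinding: int) -> int:
    """Pedersen commitment C = g^v * h^r mod p: hiding (r masks v) and binding."""
    return (pow(G, value % Q, P) * pow(H, blinding % Q, P)) % P

# The prover commits to a behavioral feature (e.g., a mean inter-key interval in ms).
feature = 142
r = secrets.randbelow(Q)
C = commit(feature, r)

# The verifier learns only C, never the feature itself; a range proof (Bulletproofs
# in the paper) would then show the committed value lies in a human-plausible range.
assert C == commit(feature, r)        # the opening verifies
assert C != commit(feature + 1, r)    # binding: a different value fails
```

In the actual scheme the commitment is never opened; the range and distribution checks are carried out inside the zero-knowledge proof.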
Related papers
- Detecting Cognitive Signatures in Typing Behavior for Non-Intrusive Authorship Verification [0.0]
The proliferation of AI-generated text has intensified the need for reliable authorship verification, yet current output-based methods are increasingly unreliable. We observe that the ordinary typing interface captures rich cognitive signatures: measurable patterns in keystroke timing that reflect the planning, translating, and revising stages of genuine composition. We present a non-intrusive verification framework that operates within existing writing interfaces, collecting only timing metadata to preserve privacy.
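As a minimal sketch of the timing metadata such a framework might collect, the function below derives inter-key intervals and simple summary statistics from keystroke timestamps. The feature names and the pause threshold are assumptions for illustration, not values from the paper.

```python
# Hedged sketch: simple timing features from keystroke timestamps.
from statistics import mean, pstdev

def timing_features(key_down_ms: list[int]) -> dict[str, float]:
    """Compute inter-key intervals (IKIs) and summary statistics."""
    ikis = [b - a for a, b in zip(key_down_ms, key_down_ms[1:])]
    # Long pauses are often associated with planning/revision stages
    # (the 2000 ms threshold here is an illustrative assumption).
    pauses = [iki for iki in ikis if iki > 2000]
    return {
        "mean_iki_ms": mean(ikis),
        "std_iki_ms": pstdev(ikis),
        "pause_count": float(len(pauses)),
    }

# Example: timestamps (ms) for a short typing burst with one long planning pause.
stamps = [0, 120, 250, 380, 3000, 3110, 3240]
feats = timing_features(stamps)
```

Only such derived timing statistics, not content, would need to leave the writing interface.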
arXiv Detail & Related papers (2026-02-26T20:02:55Z)
- Witnessd: Proof-of-process via Adversarial Collapse [0.0]
We address the gap between cryptographic integrity and process provenance. We introduce proof-of-process, a primitive category for evidence that a physical process, not merely a signing key, produced a digital artifact. We present Witnessd, an architecture combining jitter seals with Verifiable Delay Functions, external timestamp anchors, and dual-source keystroke validation.
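The sequential-work idea underlying such constructions can be sketched with an iterated hash chain: each step depends on the previous output, so computing the chain takes wall-clock time proportional to the number of steps. This is a simplification; a Verifiable Delay Function additionally allows the result to be verified much faster than it was computed.

```python
# Sketch of a sequential work function chain via iterated hashing.
import hashlib

def hash_chain(seed: bytes, steps: int) -> bytes:
    """Each step hashes the previous digest, so the work cannot be parallelized."""
    h = seed
    for _ in range(steps):
        h = hashlib.sha256(h).digest()
    return h

out = hash_chain(b"session-seed", 10_000)
# A verifier who knows (seed, steps) can recompute the chain to check `out`;
# a real VDF would make that check cheap instead of linear in `steps`.
```

The step count acts as a coarse clock: an artifact bound to such a chain cannot have been produced faster than the chain could be computed.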
arXiv Detail & Related papers (2026-02-02T05:30:21Z)
- A Benchmark for Open-Domain Numerical Fact-Checking Enhanced by Claim Decomposition [7.910984819642885]
QuanTemp++ is a dataset consisting of natural numerical claims and an open-domain corpus, with the corresponding relevant evidence for each claim. We characterize the retrieval performance of key claim decomposition paradigms.
arXiv Detail & Related papers (2025-10-24T22:37:13Z)
- Proof-Carrying Numbers (PCN): A Protocol for Trustworthy Numeric Answers from LLMs via Claim Verification [0.0]
We propose Proof-Carrying Numbers (PCN), a presentation-layer protocol that enforces numeric fidelity through mechanical verification. PCN is lightweight and model-agnostic, integrates seamlessly into existing applications, and can be extended with cryptographic commitments.
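The mechanical-verification idea can be sketched as follows: each numeric answer carries a reference to a source value and is checked against it before display. The tag format, source table, and tolerance below are assumptions for illustration, not the PCN protocol itself.

```python
# Hedged sketch of presentation-layer numeric claim verification.
# Trusted source values the model's numbers must match (hypothetical data).
SOURCE = {"q3_revenue_usd": 1_250_000, "headcount": 48}

def verify_claim(key: str, claimed: float, tol: float = 0.0) -> bool:
    """Accept a number only if it matches the cited source value within tolerance."""
    return key in SOURCE and abs(SOURCE[key] - claimed) <= tol

# A verified number is displayed; an unverifiable one is flagged instead.
assert verify_claim("headcount", 48)
assert not verify_claim("headcount", 50)
assert not verify_claim("q2_revenue_usd", 1_000_000)  # no source to cite
```

The check happens at presentation time, so it is independent of which model produced the text.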
arXiv Detail & Related papers (2025-09-08T17:20:16Z)
- Reconstructing Trust Embeddings from Siamese Trust Scores: A Direct-Sum Approach with Fixed-Point Semantics [0.0]
We study the inverse problem of reconstructing high-dimensional trust embeddings from the one-dimensional Siamese trust scores that many distributed-security frameworks expose. A suite of synthetic benchmarks confirms that, even in the presence of Gaussian noise, the recovered embeddings preserve inter-device geometry as measured by Euclidean and cosine metrics. The paper demonstrates a practical privacy risk: publishing granular trust scores can leak latent behavioural information about both devices and evaluation models.
arXiv Detail & Related papers (2025-08-02T20:19:22Z)
- Benchmarking Fraud Detectors on Private Graph Data [70.4654745317714]
Currently, many types of fraud are managed in part by automated detection algorithms that operate over graphs. We consider the scenario where a data holder wishes to outsource development of fraud detectors to third parties. Third parties submit their fraud detectors to the data holder, who evaluates these algorithms on a private dataset and then publicly communicates the results. We propose a realistic privacy attack on this system that allows an adversary to de-anonymize individuals' data based only on the evaluation results.
arXiv Detail & Related papers (2025-07-30T03:20:15Z)
- Privacy-Preserving Biometric Verification with Handwritten Random Digit String [49.77172854374479]
Handwriting verification has stood as a steadfast identity authentication method for decades. However, this technique risks privacy breaches due to the inclusion of personal information in handwritten biometrics such as signatures. We propose using the Random Digit String (RDS) for privacy-preserving handwriting verification.
arXiv Detail & Related papers (2025-03-17T03:47:25Z)
- Automated Proof Generation for Rust Code via Self-Evolution [69.25795662658356]
We introduce SAFE, a framework that overcomes the lack of human-written snippets to enable automated proof generation of Rust code. SAFE re-purposes the large number of synthesized incorrect proofs to train the self-debugging capability of the fine-tuned models. We achieve a 52.52% accuracy rate on a benchmark crafted by human experts, a significant leap over GPT-4o's performance of 14.39%.
arXiv Detail & Related papers (2024-10-21T08:15:45Z)
- Generating Natural Language Proofs with Verifier-Guided Search [74.9614610172561]
We present a novel stepwise method, NLProofS (Natural Language Proof Search).
NLProofS learns to generate relevant steps conditioning on the hypothesis.
It achieves state-of-the-art performance on EntailmentBank and RuleTaker.
arXiv Detail & Related papers (2022-05-25T02:22:30Z)
- Quantum Proofs of Deletion for Learning with Errors [91.3755431537592]
We construct the first fully homomorphic encryption scheme with certified deletion.
Our main technical ingredient is an interactive protocol by which a quantum prover can convince a classical verifier that a sample from the Learning with Errors distribution in the form of a quantum state was deleted.
arXiv Detail & Related papers (2022-03-03T10:07:32Z)
- AmbiFC: Fact-Checking Ambiguous Claims with Evidence [57.7091560922174]
We present AmbiFC, a fact-checking dataset with 10k claims derived from real-world information needs.
We analyze disagreements arising from ambiguity when comparing claims against evidence in AmbiFC.
We develop models for predicting veracity handling this ambiguity via soft labels.
arXiv Detail & Related papers (2021-04-01T17:40:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.