Knowledge is Overrated: A zero-knowledge machine learning and cryptographic hashing-based framework for verifiable, low latency inference at the LHC
- URL: http://arxiv.org/abs/2511.12592v1
- Date: Sun, 16 Nov 2025 13:31:35 GMT
- Title: Knowledge is Overrated: A zero-knowledge machine learning and cryptographic hashing-based framework for verifiable, low latency inference at the LHC
- Authors: Pratik Jawahar, Caterina Doglioni, Maurizio Pierini
- Abstract summary: Low latency event-selection (trigger) algorithms are essential components of Large Hadron Collider (LHC) operation. Modern machine learning (ML) models have shown great offline performance as classifiers. Inference on such large models does not satisfy the $40\text{MHz}$ online latency constraint at the LHC.
- Score: 0.6825664914747622
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Low latency event-selection (trigger) algorithms are essential components of Large Hadron Collider (LHC) operation. Modern machine learning (ML) models have shown great offline performance as classifiers and could improve trigger performance, thereby improving downstream physics analyses. However, inference on such large models does not satisfy the $40\text{MHz}$ online latency constraint at the LHC. In this work, we propose \texttt{PHAZE}, a novel framework built on cryptographic techniques like hashing and zero-knowledge machine learning (zkML) to achieve low latency inference, via a certifiable, early-exit mechanism from an arbitrarily large baseline model. We lay the foundations for such a framework to achieve nanosecond-order latency and discuss its inherent advantages, such as built-in anomaly detection, within the scope of LHC triggers, as well as its potential to enable a dynamic low-level trigger in the future.
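The abstract describes a certifiable early-exit mechanism built on hashing: cheap inputs exit early with a verified result while the rest fall through to the large baseline model. The toy sketch below illustrates only that general idea, not the actual PHAZE design; the feature quantization, the SHA-256 keying, and the stand-in `full_model` are all illustrative assumptions.

```python
import hashlib

def feature_hash(features, precision=2):
    """Hash a quantized feature vector to a fixed-size digest."""
    quantized = tuple(round(x, precision) for x in features)
    return hashlib.sha256(repr(quantized).encode()).hexdigest()

def classify(features, verified_cache, full_model):
    """Return a cached verified label if available, else run the full model."""
    key = feature_hash(features)
    if key in verified_cache:
        return verified_cache[key], "early-exit"   # fast path, skips the model
    return full_model(features), "full-model"      # slow path, full inference

# Example: a stand-in "large model" and a cache seeded with one verified result.
full_model = lambda f: int(sum(f) > 1.0)
cache = {feature_hash([0.5, 0.7]): 1}

print(classify([0.5, 0.7], cache, full_model))  # hits the early-exit path
print(classify([0.1, 0.2], cache, full_model))  # falls back to the full model
```

In a trigger setting the appeal of such a lookup is that the fast path is a constant-time hash-and-compare, which is what makes nanosecond-order latency plausible; the zkML component in the paper is what would make the cached results certifiable.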
Related papers
- Unlocking Prototype Potential: An Efficient Tuning Framework for Few-Shot Class-Incremental Learning [69.28860905525057]
Few-shot class-incremental learning (FSCIL) seeks to continuously learn new classes from very limited samples. We introduce an efficient prototype fine-tuning framework that evolves static centroids into dynamic, learnable components.
arXiv Detail & Related papers (2026-02-05T03:50:53Z) - Towards Tensor Network Models for Low-Latency Jet Tagging on FPGAs [0.48358268525420206]
We present a systematic study of tensor network (TN)-based models for low-latency jet tagging. The models achieve competitive performance when deployed on Field-Programmable Gate Arrays (FPGAs). Overall, this study highlights the potential of TN-based models for resource-efficient inference in low-latency environments.
arXiv Detail & Related papers (2026-01-15T19:04:49Z) - Noise Hypernetworks: Amortizing Test-Time Compute in Diffusion Models [57.49136894315871]
The new paradigm of test-time scaling has yielded remarkable breakthroughs in reasoning models and generative vision models. We propose one solution to the problem of integrating test-time scaling knowledge into a model during post-training. We replace reward-guided test-time noise optimization in diffusion models with a Noise Hypernetwork that modulates initial input noise.
arXiv Detail & Related papers (2025-08-13T17:33:37Z) - Anomaly detection with spiking neural networks for LHC physics [0.294944680995069]
Anomaly detection offers a promising strategy for discovering new physics at the Large Hadron Collider (LHC). This paper investigates AutoEncoders built using neuromorphic Spiking Neural Networks (SNNs) for this purpose.
arXiv Detail & Related papers (2025-07-31T18:00:03Z) - Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free [81.65559031466452]
We conduct experiments to investigate gating-augmented softmax attention variants. We find that a simple modification, applying a head-specific sigmoid gate after the Scaled Dot-Product Attention (SDPA), consistently improves performance.
arXiv Detail & Related papers (2025-05-10T17:15:49Z) - A New Perspective on Time Series Anomaly Detection: Faster Patch-based Broad Learning System [59.38402187365612]
Time series anomaly detection (TSAD) has been a research hotspot in both academia and industry in recent years. Deep learning is not necessarily required for TSAD, given limitations such as its slow training and inference speed. We propose the Contrastive Patch-based Broad Learning System (CBLS).
arXiv Detail & Related papers (2024-12-07T01:58:18Z) - Quantum Rationale-Aware Graph Contrastive Learning for Jet Discrimination [2.140851466387413]
In high-energy physics, particle jet tagging plays a pivotal role in distinguishing quark from gluon jets. Existing contrastive learning frameworks struggle to leverage rationale-aware augmentations effectively. We show that integrating a quantum rationale generator within our proposed Quantum Rationale-aware Graph Contrastive Learning framework significantly enhances jet discrimination performance.
arXiv Detail & Related papers (2024-11-03T17:36:05Z) - Lyapunov-stable Neural Control for State and Output Feedback: A Novel Formulation [67.63756749551924]
Learning-based neural network (NN) control policies have shown impressive empirical performance in a wide range of tasks in robotics and control.
Lyapunov stability guarantees over the region-of-attraction (ROA) for NN controllers with nonlinear dynamical systems are challenging to obtain.
We demonstrate a new framework for learning NN controllers together with Lyapunov certificates using fast empirical falsification and strategic regularizations.
arXiv Detail & Related papers (2024-04-11T17:49:15Z) - On-Device Learning with Binary Neural Networks [2.7040098749051635]
We propose a continual learning (CL) solution that embraces recent advancements in the CL field and the efficiency of Binary Neural Networks (BNNs).
The choice of a binary network as backbone is essential to meet the constraints of low power devices.
arXiv Detail & Related papers (2023-08-29T13:48:35Z) - Small Object Detection via Coarse-to-fine Proposal Generation and Imitation Learning [52.06176253457522]
We propose a two-stage framework tailored for small object detection based on the Coarse-to-fine pipeline and Feature Imitation learning.
CFINet achieves state-of-the-art performance on the large-scale small object detection benchmarks, SODA-D and SODA-A.
arXiv Detail & Related papers (2023-08-18T13:13:09Z) - Robust Collaborative Learning with Linear Gradient Overhead [7.250306457887471]
Collaborative learning algorithms, such as distributed SGD (or D-SGD), are vulnerable to faulty machines.
We present MoNNA, a new algorithm that is provably robust under standard assumptions.
We present a way to control the tension between the momentum and the model drifts.
arXiv Detail & Related papers (2022-09-22T11:26:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.