Secure PAC Learning: Sample-Budget Laws and Quantum Data-Path Admissibility
- URL: http://arxiv.org/abs/2511.02479v1
- Date: Tue, 04 Nov 2025 11:08:02 GMT
- Title: Secure PAC Learning: Sample-Budget Laws and Quantum Data-Path Admissibility
- Authors: Jeongho Bang
- Abstract summary: We develop a theory of secure learning grounded in the probably-approximately-correct (PAC) viewpoint, together with an operational framework that links data-path behavior to finite-sample budgets. This is the first complete framework that embeds a security notion and an operational sample-budget law within PAC learning.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Security in machine learning is fragile when data are exfiltrated or perturbed, yet existing frameworks rarely connect the definition and analysis of security to learnability. In this work, we develop a theory of secure learning grounded in the probably-approximately-correct (PAC) viewpoint, together with an operational framework that links data-path behavior to finite-sample budgets. In our formulation, an accuracy-confidence target is evaluated via a run-based sequential test that halts after a prescribed number of consecutive validations, and a closed-form budget bound guarantees learning success provided the data-path channel is admissible; the acceptance rate must also exceed a primitive random-search baseline. We elevate and complete our secure-learning construction in the context of quantum information, establishing quantum-secure PAC learning: for prepare-and-measure scenarios, the data-path admissibility threshold is fixed by the Holevo information rather than by a learner-tunable tolerance. Thus, a certified information advantage for the learner directly becomes learning security, an effect with no classical analogue. The channel-determined confidence follows naturally, and basis sifting is incorporated for practical deployments. This is the first complete framework that simultaneously embeds a security notion and an operational sample-budget law within PAC learning and anchors the security in quantum information. The resulting blueprint points toward standardized guarantees for learning security, with clear avenues for PAC-Bayes extensions and for integration with advanced quantum machine learning front ends.
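As a rough illustration of the two ingredients the abstract describes, the sketch below pairs a run-based sequential test (accept after a prescribed number of consecutive validations, within a finite query budget) with the textbook finite-hypothesis PAC sample bound as a stand-in for the paper's closed-form budget law. The function names, the `run_length` parameter, and the specific bound are illustrative assumptions, not the paper's exact construction.

```python
import math

def sequential_run_test(validate, run_length, max_queries):
    """Run-based sequential test (illustrative sketch, not the paper's exact
    protocol): accept once `run_length` consecutive validations pass; reject
    if the query budget is exhausted first. Returns (accepted, queries_used)."""
    consecutive = 0
    for t in range(1, max_queries + 1):
        if validate():
            consecutive += 1
            if consecutive == run_length:
                return True, t
        else:
            consecutive = 0
    return False, max_queries

def pac_sample_budget(epsilon, delta, hypothesis_count):
    """Textbook finite-hypothesis PAC bound, used here only as a stand-in for
    the paper's closed-form budget law:
        m >= (1 / epsilon) * (ln|H| + ln(1 / delta))."""
    return math.ceil((math.log(hypothesis_count) + math.log(1.0 / delta)) / epsilon)

# e.g. |H| = 1000, epsilon = 0.1, delta = 0.05 gives a budget of 100 samples.
budget = pac_sample_budget(0.1, 0.05, 1000)
```

In this toy version the budget bound caps `max_queries`, while the run test plays the role of the sequential validation that halts learning once the accuracy-confidence target is met.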
Related papers
- ACU: Analytic Continual Unlearning for Efficient and Exact Forgetting with Privacy Preservation [39.0731790601695]
Continual Unlearning (CU) aims to sequentially forget particular knowledge acquired during the Continual Learning phase. Most existing unlearning methods require access to the retained dataset for re-training or fine-tuning. We propose a novel gradient-free method for CU, named Analytic Continual Unlearning (ACU), for efficient and exact forgetting with historical data privacy preservation.
arXiv Detail & Related papers (2025-05-18T05:28:18Z) - Learning Verifiable Control Policies Using Relaxed Verification [49.81690518952909]
This work proposes to perform verification throughout training, aiming for policies whose properties can be evaluated at runtime. The approach is to use differentiable reachability analysis and incorporate new components into the loss function.
arXiv Detail & Related papers (2025-04-23T16:54:35Z) - Ensuring superior learning outcomes and data security for authorized learner [0.4166512373146748]
A learner's ability to generate a hypothesis that closely approximates the target function is crucial in machine learning. It is important to ensure the performance of the "authorized" learner by limiting the quality of the training data accessible to eavesdroppers. We provide a theorem to ensure superior learning outcomes exclusively for the authorized learner with quantum label encoding.
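The quantum-encoding idea above, like the Holevo-information threshold in the main abstract, rests on the Holevo quantity chi = S(sum_i p_i rho_i) - sum_i p_i S(rho_i), which upper-bounds the classical information extractable from a quantum ensemble. Below is a minimal pure-Python sketch for 2x2 (qubit) density matrices; the example ensemble of the states |0> and |+> is our own illustrative choice, not taken from either paper.

```python
import math

def eigvals_2x2_symmetric(m):
    """Eigenvalues of a real symmetric 2x2 matrix [[a, b], [b, d]]."""
    a, b, d = m[0][0], m[0][1], m[1][1]
    mean = (a + d) / 2.0
    radius = math.sqrt(((a - d) / 2.0) ** 2 + b * b)
    return [mean - radius, mean + radius]

def entropy(m):
    """von Neumann entropy S(rho) = -sum_i l_i log2 l_i, in bits."""
    return -sum(l * math.log2(l) for l in eigvals_2x2_symmetric(m) if l > 1e-12)

def holevo_chi(probs, states):
    """Holevo quantity chi = S(sum_i p_i rho_i) - sum_i p_i S(rho_i)."""
    avg = [[sum(p * s[r][c] for p, s in zip(probs, states)) for c in range(2)]
           for r in range(2)]
    return entropy(avg) - sum(p * entropy(s) for p, s in zip(probs, states))

# Example: equiprobable pure qubit states |0> and |+> as density matrices.
rho_0 = [[1.0, 0.0], [0.0, 0.0]]
rho_plus = [[0.5, 0.5], [0.5, 0.5]]
chi = holevo_chi([0.5, 0.5], [rho_0, rho_plus])  # about 0.601 bits
```

Because the two states are non-orthogonal, chi falls below 1 bit: an eavesdropper measuring the channel cannot recover the full label, which is the intuition behind using a Holevo-fixed admissibility threshold rather than a learner-tunable tolerance.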
arXiv Detail & Related papers (2025-01-01T06:49:00Z) - Testing learning-enabled cyber-physical systems with Large-Language Models: A Formal Approach [32.15663640443728]
The integration of machine learning (ML) into cyber-physical systems (CPS) offers significant benefits.
Existing verification and validation techniques are often inadequate for these new paradigms.
We propose a roadmap to transition from foundational probabilistic testing to a more rigorous approach capable of delivering formal assurance.
arXiv Detail & Related papers (2023-11-13T14:56:14Z) - Learning Control Policies for Stochastic Systems with Reach-avoid Guarantees [20.045860624444494]
We study the problem of learning controllers for discrete-time non-linear dynamical systems with formal reach-avoid guarantees.
We learn a certificate in the form of a reach-avoid supermartingale (RASM), a novel notion that we introduce in this work.
Our approach solves several important problems -- it can be used to learn a control policy from scratch, to verify a reach-avoid specification for a fixed control policy, or to fine-tune a pre-trained policy.
arXiv Detail & Related papers (2022-10-11T10:02:49Z) - Meta-Learning Priors for Safe Bayesian Optimization [72.8349503901712]
We build on a meta-learning algorithm, F-PACOH, capable of providing reliable uncertainty quantification in settings of data scarcity.
As a core contribution, we develop a novel framework for choosing safety-compliant priors in a data-driven manner.
On benchmark functions and a high-precision motion system, we demonstrate that our meta-learned priors accelerate the convergence of safe BO approaches.
arXiv Detail & Related papers (2022-10-03T08:38:38Z) - Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z) - Log Barriers for Safe Black-box Optimization with Application to Safe Reinforcement Learning [72.97229770329214]
We introduce a general approach for solving high-dimensional non-linear optimization problems in which maintaining safety during learning is crucial.
Our approach called LBSGD is based on applying a logarithmic barrier approximation with a carefully chosen step size.
We demonstrate the effectiveness of our approach on minimizing violation in policy tasks in safe reinforcement learning.
arXiv Detail & Related papers (2022-07-21T11:14:47Z) - Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive
Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z) - PACOH: Bayes-Optimal Meta-Learning with PAC-Guarantees [77.67258935234403]
We provide a theoretical analysis using the PAC-Bayesian framework and derive novel generalization bounds for meta-learning.
We develop a class of PAC-optimal meta-learning algorithms with performance guarantees and a principled meta-level regularization.
arXiv Detail & Related papers (2020-02-13T15:01:38Z)
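Since the main abstract flags PAC-Bayes extensions as a clear avenue, the flavor of such bounds can be sketched with a McAllester-style PAC-Bayes complexity term. This is a standard textbook form, not the bound derived in PACOH or in the main paper.

```python
import math

def pac_bayes_gap_bound(kl_div, m, delta):
    """McAllester-style PAC-Bayes complexity term (standard textbook form):
    with probability >= 1 - delta over an i.i.d. sample of size m,
        risk(Q) <= empirical_risk(Q)
                   + sqrt((KL(Q||P) + ln(2*sqrt(m)/delta)) / (2*m)).
    Returns the square-root complexity term on the right-hand side."""
    return math.sqrt((kl_div + math.log(2.0 * math.sqrt(m) / delta)) / (2.0 * m))

# With the posterior equal to the prior (KL = 0), 10,000 samples, and
# delta = 0.05, the generalization-gap term is about 0.02.
gap = pac_bayes_gap_bound(0.0, 10_000, 0.05)
```

The term shrinks as the sample size m grows and as the posterior stays close to the prior, which is the same trade-off a sample-budget law must quantify.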
This list is automatically generated from the titles and abstracts of the papers in this site.