Governing AI Forgetting: Auditing for Machine Unlearning Compliance
- URL: http://arxiv.org/abs/2602.14553v1
- Date: Mon, 16 Feb 2026 08:32:09 GMT
- Title: Governing AI Forgetting: Auditing for Machine Unlearning Compliance
- Authors: Qinqi Lin, Ningning Ding, Lingjie Duan, Jianwei Huang
- Abstract summary: We introduce the first economic framework for auditing machine unlearning compliance, by integrating certified unlearning theory with regulatory enforcement. We first characterize MU's inherent verification uncertainty using a hypothesis-testing interpretation of certified unlearning. We then propose a game-theoretic model to capture the strategic interactions between the auditor and the operator.
- Score: 30.173215964349044
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Despite legal mandates for the right to be forgotten, AI operators routinely fail to comply with data deletion requests. While machine unlearning (MU) provides a technical solution to remove personal data's influence from trained models, ensuring compliance remains challenging due to the fundamental gap between MU's technical feasibility and regulatory implementation. In this paper, we introduce the first economic framework for auditing MU compliance, by integrating certified unlearning theory with regulatory enforcement. We first characterize MU's inherent verification uncertainty using a hypothesis-testing interpretation of certified unlearning to derive the auditor's detection capability, and then propose a game-theoretic model to capture the strategic interactions between the auditor and the operator. A key technical challenge arises from MU-specific nonlinearities inherent in the model utility and the detection probability, which create complex strategic couplings that traditional auditing frameworks do not address and that also preclude closed-form solutions. We address this by transforming the complex bivariate nonlinear fixed-point problem into a tractable univariate auxiliary problem, enabling us to decouple the system and establish the equilibrium existence, uniqueness, and structural properties without relying on explicit solutions. Counterintuitively, our analysis reveals that the auditor can optimally reduce the inspection intensity as deletion requests increase, since the operator's weakened unlearning makes non-compliance easier to detect. This is consistent with recent auditing reductions in China despite growing deletion requests. Moreover, we prove that although undisclosed auditing offers informational advantages for the auditor, it paradoxically reduces the regulatory cost-effectiveness relative to disclosed auditing.
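The abstract's hypothesis-testing interpretation of certified unlearning can be illustrated with a minimal sketch. The assumption here (borrowed from the analogous differential-privacy trade-off, not stated explicitly in the abstract) is that an (eps, delta)-certified unlearning guarantee caps the auditor's true-positive detection rate at `exp(eps) * alpha + delta` for any test with false-positive rate `alpha`; the paper's exact detection characterization may differ, and the function and parameter names below are illustrative.

```python
import math

def detection_power_bound(alpha: float, eps: float, delta: float) -> float:
    """Upper bound on an auditor's true-positive rate against an
    (eps, delta)-certified unlearning operator, for any statistical
    test with false-positive rate alpha.  Mirrors the standard
    hypothesis-testing view of differential privacy."""
    return min(1.0, math.exp(eps) * alpha + delta)

# Weaker unlearning (larger eps) leaves the operator easier to catch,
# which is the mechanism behind the paper's counterintuitive result:
# as deletion requests grow and unlearning degrades, less inspection
# intensity suffices for the same detection power.
for eps in (0.1, 1.0, 3.0):
    print(f"eps={eps}: power <= {detection_power_bound(0.05, eps, 1e-3):.3f}")
```

Under this reading, the auditor's inspection intensity and the operator's unlearning effort are coupled exactly through such a bound, which is what produces the bivariate fixed-point problem the paper reduces to a univariate auxiliary problem.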
Related papers
- Beyond the Checkbox: Strengthening DSA Compliance Through Social Media Algorithmic Auditing [7.686169912623941]
Algorithmic platforms are required to comply with obligations concerning algorithmic transparency, user protection, and privacy. Little is known about current auditing practices and their effectiveness in ensuring such compliance. Our analysis shows significant inconsistencies in methodologies and a lack of technical depth when evaluating AI-powered systems. We propose to employ algorithmic auditing to enhance the depth, scale, and independence of compliance assessments.
arXiv Detail & Related papers (2026-01-26T12:00:29Z) - CoT-Seg: Rethinking Segmentation with Chain-of-Thought Reasoning and Self-Correction [50.67483317563736]
This paper aims to explore a system that can think step-by-step, look up information if needed, generate results, self-evaluate its own results, and refine the results. We introduce CoT-Seg, a training-free framework that rethinks reasoning segmentation by combining chain-of-thought reasoning with self-correction.
arXiv Detail & Related papers (2026-01-24T11:41:54Z) - Preventing the Collapse of Peer Review Requires Verification-First AI [49.995126139461085]
We propose truth-coupling, i.e., a measure of how tightly venue scores track latent scientific truth. We formalize two forces that drive a phase transition toward proxy-sovereign evaluation.
arXiv Detail & Related papers (2026-01-23T17:17:32Z) - Reliable LLM-Based Edge-Cloud-Expert Cascades for Telecom Knowledge Systems [54.916243942641444]
Large language models (LLMs) are emerging as key enablers of automation in domains such as telecommunications. We study an edge-cloud-expert cascaded LLM-based knowledge system that supports decision-making through a question-and-answer pipeline.
arXiv Detail & Related papers (2025-12-23T03:10:09Z) - "Show Me You Comply... Without Showing Me Anything": Zero-Knowledge Software Auditing for AI-Enabled Systems [2.2981698355892686]
This paper introduces ZKMLOps, a novel MLOps verification framework. It operationalizes Zero-Knowledge Proofs (ZKPs), cryptographic protocols allowing a prover to convince a verifier that a statement is true. We evaluate the framework's practicality through a study of regulatory compliance in financial risk auditing.
arXiv Detail & Related papers (2025-10-30T15:03:32Z) - LLMs as verification oracles for Solidity [1.3887048755037537]
This paper provides the first systematic evaluation of GPT-5, a state-of-the-art reasoning LLM, in this role. We benchmark its performance on a large dataset of verification tasks, compare its outputs against those of established formal verification tools, and assess its practical effectiveness in real-world auditing scenarios. Our study suggests a new frontier in the convergence of AI and formal methods for secure smart contract development and auditing.
arXiv Detail & Related papers (2025-09-23T15:32:13Z) - Do We Need to Verify Step by Step? Rethinking Process Supervision from a Theoretical Perspective [59.61868506896214]
We show that under standard data coverage assumptions, reinforcement learning is no more statistically difficult than learning through process supervision. We prove that any policy's advantage function can serve as an optimal process reward model.
arXiv Detail & Related papers (2025-02-14T22:21:56Z) - From Transparency to Accountability and Back: A Discussion of Access and Evidence in AI Auditing [1.196505602609637]
Audits can take many forms, including pre-deployment risk assessments, ongoing monitoring, and compliance testing.
There are many operational challenges to AI auditing that complicate its implementation.
We argue that auditing can be cast as a natural hypothesis test, draw parallels between hypothesis testing and legal procedure, and argue that this framing provides clear and interpretable guidance on audit implementation.
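The audit-as-hypothesis-test framing can be sketched concretely. Assuming the auditor samples a platform's decisions and tests a null hypothesis that the violation rate is at most some tolerated level p (the sampling scheme and tolerance below are illustrative, not taken from the paper), the audit reduces to a one-sided binomial tail:

```python
from math import comb

def binom_tail(n: int, k: int, p: float) -> float:
    """P[X >= k] for X ~ Binomial(n, p): the p-value of observing k or
    more violations among n audited decisions, under the null that the
    platform's true violation rate is p."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Null: at most 1% of decisions violate the obligation.
# Observing 4 violations in 100 sampled decisions:
p_value = binom_tail(100, 4, 0.01)
print(f"p-value = {p_value:.4f}")  # small p-value -> evidence of non-compliance
```

The legal-procedure parallel follows directly: the tolerated rate p plays the role of the compliance standard, the significance threshold plays the role of the burden of proof, and sample size n determines how much evidentiary access the auditor needs.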
arXiv Detail & Related papers (2024-10-07T06:15:46Z) - RESHAPE: Explaining Accounting Anomalies in Financial Statement Audits by enhancing SHapley Additive exPlanations [1.3333957453318743]
We propose RESHAPE, which explains the model output at an aggregated attribute level.
Our results show empirical evidence that RESHAPE produces more versatile explanations than state-of-the-art baselines.
We envision such attribute-level explanations as a necessary next step in the adoption of unsupervised DL techniques in financial auditing.
arXiv Detail & Related papers (2022-09-19T16:23:43Z) - Unsupervised Learning of Debiased Representations with Pseudo-Attributes [85.5691102676175]
We propose a simple but effective debiasing technique in an unsupervised manner.
We perform clustering on the feature embedding space and identify pseudo-attributes by taking advantage of the clustering results.
We then employ a novel cluster-based reweighting scheme for learning debiased representation.
arXiv Detail & Related papers (2021-08-06T05:20:46Z) - Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.