OATH: Efficient and Flexible Zero-Knowledge Proofs of End-to-End ML Fairness
- URL: http://arxiv.org/abs/2410.02777v1
- Date: Tue, 17 Sep 2024 16:00:35 GMT
- Title: OATH: Efficient and Flexible Zero-Knowledge Proofs of End-to-End ML Fairness
- Authors: Olive Franzese, Ali Shahin Shamsabadi, Hamed Haddadi
- Abstract summary: Zero-Knowledge Proofs of Fairness address fairness noncompliance by allowing a service provider to verify that their model serves diverse demographics equitably.
We present OATH, a framework that is deployably efficient with client-facing communication and an offline audit phase.
OATH provides a 1343x improvement to runtime over previous work for neural network ZKPoF, and scales up to much larger models.
- Score: 13.986886689256128
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Though there is much interest in fair AI systems, the problem of fairness noncompliance -- which concerns whether fair models are used in practice -- has received lesser attention. Zero-Knowledge Proofs of Fairness (ZKPoF) address fairness noncompliance by allowing a service provider to verify to external parties that their model serves diverse demographics equitably, with guaranteed confidentiality over proprietary model parameters and data. They have great potential for building public trust and effective AI regulation, but no previous techniques for ZKPoF are fit for real-world deployment. We present OATH, the first ZKPoF framework that is (i) deployably efficient with client-facing communication comparable to in-the-clear ML as a Service query answering, and an offline audit phase that verifies an asymptotically constant quantity of answered queries, (ii) deployably flexible with modularity for any score-based classifier given a zero-knowledge proof of correct inference, (iii) deployably secure with an end-to-end security model that guarantees confidentiality and fairness across training, inference, and audits. We show that OATH obtains strong robustness against malicious adversaries at concretely efficient parameter settings. Notably, OATH provides a 1343x improvement to runtime over previous work for neural network ZKPoF, and scales up to much larger models -- even DNNs with tens of millions of parameters.
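To make the audit idea concrete, here is a minimal illustrative sketch (not OATH's actual protocol): an auditor samples a fixed, constant number of answered queries and checks a group fairness metric such as the demographic parity gap. In OATH this check would run inside a zero-knowledge proof so that model parameters and data stay confidential; the function names, the sample size, and the threshold below are all hypothetical, and the computation is done in the clear purely for exposition.

```python
import random

def demographic_parity_gap(records):
    """records: list of (group, decision) pairs with decision in {0, 1}.
    Returns the largest difference in positive-decision rates between groups."""
    rates = {}
    for group, decision in records:
        total, positive = rates.get(group, (0, 0))
        rates[group] = (total + 1, positive + decision)
    positive_rates = [positive / total for total, positive in rates.values()]
    return max(positive_rates) - min(positive_rates)

def audit(answered_queries, sample_size=64, threshold=0.1, seed=0):
    """Sample a constant number of answered queries (independent of the total
    query volume) and flag the provider if the parity gap exceeds the threshold."""
    rng = random.Random(seed)
    sample = rng.sample(answered_queries, min(sample_size, len(answered_queries)))
    return demographic_parity_gap(sample) <= threshold

# Toy usage: a provider whose positive-decision rate is identical across both
# groups passes the audit regardless of which queries are sampled.
queries = [("A", 1)] * 100 + [("B", 1)] * 100
print(audit(queries))  # True
```

The key efficiency point the abstract makes is visible here: the auditor's work depends on `sample_size`, not on the total number of queries answered, which is what makes the offline audit phase asymptotically constant.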
Related papers
- Efficient and Robust Regularized Federated Recommendation [52.24782464815489]
The proposed federated recommender system addresses both user preference and privacy concerns.
We propose a novel method that incorporates non-uniform gradient descent to improve communication efficiency.
RFRecF achieves superior robustness compared to diverse baselines.
arXiv Detail & Related papers (2024-11-03T12:10:20Z) - FedCert: Federated Accuracy Certification [8.34167718121698]
Federated Learning (FL) has emerged as a powerful paradigm for training machine learning models in a decentralized manner.
Previous studies have assessed the effectiveness of models in centralized training based on certified accuracy.
This study proposes a method named FedCert to take the first step toward evaluating the robustness of FL systems.
arXiv Detail & Related papers (2024-10-04T01:19:09Z) - Certifiably Byzantine-Robust Federated Conformal Prediction [49.23374238798428]
We introduce a novel framework, Rob-FCP, which executes robust federated conformal prediction, effectively countering malicious clients.
We empirically demonstrate the robustness of Rob-FCP against diverse proportions of malicious clients under a variety of Byzantine attacks.
arXiv Detail & Related papers (2024-06-04T04:43:30Z) - Secure and Verifiable Data Collaboration with Low-Cost Zero-Knowledge Proofs [30.260427020479536]
In this paper, we propose a novel and highly efficient solution RiseFL for secure and verifiable data collaboration.
Firstly, we devise a probabilistic integrity check method that significantly reduces the cost of ZKP generation and verification.
Thirdly, we design a hybrid commitment scheme to satisfy Byzantine robustness with improved performance.
arXiv Detail & Related papers (2023-11-26T14:19:46Z) - Privacy-preserving Federated Primal-dual Learning for Non-convex and Non-smooth Problems with Model Sparsification [51.04894019092156]
Federated learning (FL) has been recognized as a rapidly growing area, where the model is trained over distributed clients under the orchestration of a parameter server (PS).
In this paper, we propose a novel federated primal-dual algorithm with model sparsification for non-convex and non-smooth FL problems.
Its unique properties and convergence analyses are also presented.
arXiv Detail & Related papers (2023-10-30T14:15:47Z) - Binary Classification with Confidence Difference [100.08818204756093]
This paper delves into a novel weakly supervised binary classification problem called confidence-difference (ConfDiff) classification.
We propose a risk-consistent approach to tackle this problem and show that the estimation error bound achieves the optimal convergence rate.
We also introduce a risk correction approach to mitigate overfitting problems, whose consistency and convergence rate are also proven.
arXiv Detail & Related papers (2023-10-09T11:44:50Z) - FedVal: Different good or different bad in federated learning [9.558549875692808]
Federated learning (FL) systems are susceptible to attacks from malicious actors.
FL poses new challenges in addressing group bias, such as ensuring fair performance for different demographic groups.
Traditional methods used to address such biases require centralized access to the data, which FL systems do not have.
We present a novel approach FedVal for both robustness and fairness that does not require any additional information from clients.
arXiv Detail & Related papers (2023-06-06T22:11:13Z) - Reliable Federated Disentangling Network for Non-IID Domain Feature [62.73267904147804]
In this paper, we propose a novel reliable federated disentangling network, termed RFedDis.
To the best of our knowledge, our proposed RFedDis is the first work to develop an FL approach based on evidential uncertainty combined with feature disentangling.
Our proposed RFedDis provides outstanding performance with a high degree of reliability as compared to other state-of-the-art FL approaches.
arXiv Detail & Related papers (2023-01-30T11:46:34Z) - Confidence-Calibrated Face and Kinship Verification [8.570969129199467]
We introduce an effective confidence measure that allows verification models to convert a similarity score into a confidence score for any given face pair.
We also propose a confidence-calibrated approach, termed Angular Scaling (ASC), which is easy to implement and can be readily applied to existing verification models.
To the best of our knowledge, our work presents the first comprehensive confidence-calibrated solution for modern face and kinship verification tasks.
arXiv Detail & Related papers (2022-10-25T10:43:46Z) - Adversarial Training with Rectified Rejection [114.83821848791206]
We propose to use true confidence (T-Con) as a certainty oracle, and learn to predict T-Con by rectifying confidence.
We prove that under mild conditions, a rectified confidence (R-Con) rejector and a confidence rejector can be coupled to distinguish any wrongly classified input from correctly classified ones.
arXiv Detail & Related papers (2021-05-31T08:24:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.