Privacy Preserving Recalibration under Domain Shift
- URL: http://arxiv.org/abs/2008.09643v1
- Date: Fri, 21 Aug 2020 18:43:37 GMT
- Title: Privacy Preserving Recalibration under Domain Shift
- Authors: Rachel Luo, Shengjia Zhao, Jiaming Song, Jonathan Kuck, Stefano Ermon,
Silvio Savarese
- Abstract summary: We introduce a framework that abstracts out the properties of recalibration problems under differential privacy constraints.
We also design a novel recalibration algorithm, accuracy temperature scaling, that outperforms prior work on private datasets.
- Score: 119.21243107946555
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Classifiers deployed in high-stakes real-world applications must output
calibrated confidence scores, i.e. their predicted probabilities should reflect
empirical frequencies. Recalibration algorithms can greatly improve a model's
probability estimates; however, existing algorithms are not applicable in
real-world situations where the test data follows a different distribution from
the training data, and privacy preservation is paramount (e.g. protecting
patient records). We introduce a framework that abstracts out the properties of
recalibration problems under differential privacy constraints. This framework
allows us to adapt existing recalibration algorithms to satisfy differential
privacy while remaining effective for domain-shift situations. Guided by our
framework, we also design a novel recalibration algorithm, accuracy temperature
scaling, that outperforms prior work on private datasets. In an extensive
empirical study, we find that our algorithm improves calibration on
domain-shift benchmarks under the constraints of differential privacy. On the
15 highest severity perturbations of the ImageNet-C dataset, our method
achieves a median ECE of 0.029, over 2x better than the next best recalibration
method and almost 5x better than without recalibration.
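The listing does not spell out the accuracy temperature scaling algorithm, but the abstract's recipe (recalibrate on private data under differential privacy, then evaluate with ECE) can be illustrated with a minimal sketch: estimate accuracy on the private recalibration set with a standard DP mechanism, fit a single temperature so the model's average confidence matches that estimate, and score calibration with binned ECE. Everything below (the Laplace mechanism, the bisection fit, and all function names) is an assumption for illustration, not the authors' implementation.

```python
# Hedged sketch only: this is NOT the paper's exact "accuracy temperature scaling"
# algorithm. The Laplace mechanism, the bisection fit, and all names here
# (dp_accuracy, fit_temperature, ...) are illustrative assumptions.
import numpy as np


def dp_accuracy(correct, epsilon):
    """Estimate accuracy on a private recalibration set with Laplace noise.

    `correct` is a 0/1 array; the correct-count has sensitivity 1, so Laplace
    noise with scale 1/epsilon gives an epsilon-DP estimate of that count.
    """
    noisy_count = correct.sum() + np.random.laplace(scale=1.0 / epsilon)
    return float(np.clip(noisy_count / len(correct), 0.0, 1.0))


def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(z)
    return p / p.sum(axis=1, keepdims=True)


def fit_temperature(logits, target_acc, lo=0.05, hi=20.0, iters=50):
    """Bisection for a temperature T whose mean max-softmax confidence matches
    target_acc; that confidence decreases monotonically as T grows."""
    mean_conf = lambda t: softmax(logits / t).max(axis=1).mean()
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_conf(mid) > target_acc:
            lo = mid  # still overconfident: raise the temperature
        else:
            hi = mid
    return 0.5 * (lo + hi)


def expected_calibration_error(confidences, correct, n_bins=15):
    """Standard binned ECE: bin-weighted |mean confidence - mean accuracy|."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for low, high in zip(edges[:-1], edges[1:]):
        mask = (confidences > low) & (confidences <= high)
        if mask.any():
            ece += mask.mean() * abs(confidences[mask].mean() - correct[mask].mean())
    return ece


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    logits = 3.0 * rng.normal(size=(1000, 10))       # synthetic, overconfident logits
    labels = rng.integers(0, 10, size=1000)
    correct = (logits.argmax(axis=1) == labels).astype(float)

    acc_hat = dp_accuracy(correct, epsilon=1.0)      # privatized accuracy estimate
    temperature = fit_temperature(logits, acc_hat)   # match mean confidence to it
    conf = softmax(logits / temperature).max(axis=1)
    print(f"DP accuracy estimate: {acc_hat:.3f}, temperature: {temperature:.2f}")
    print(f"ECE after scaling:    {expected_calibration_error(conf, correct):.3f}")
```

In this sketch only the single scalar accuracy estimate touches the private labels, which is why a simple count-based mechanism suffices; the paper's framework generalizes this idea to other recalibration statistics.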
Related papers
- Improving Predictor Reliability with Selective Recalibration [15.319277333431318]
Recalibration is one of the most effective ways to produce reliable confidence estimates with a pre-trained model.
We propose selective recalibration, where a selection model learns to reject some user-chosen proportion of the data.
Our results show that selective recalibration consistently leads to significantly lower calibration error than a wide range of selection and recalibration baselines.
arXiv Detail & Related papers (2024-10-07T18:17:31Z)
- CURATE: Scaling-up Differentially Private Causal Graph Discovery [8.471466670802817]
Differential Privacy (DP) has been adopted to ensure user privacy in Causal Graph Discovery (CGD).
We present CURATE, a DP-CGD framework with adaptive privacy budgeting.
We show that CURATE achieves higher utility than existing DP-CGD algorithms with less privacy leakage.
arXiv Detail & Related papers (2024-09-27T18:00:38Z)
- Multiclass Alignment of Confidence and Certainty for Network Calibration [10.15706847741555]
Recent studies reveal that deep neural networks (DNNs) are prone to making overconfident predictions.
We propose a new train-time calibration method featuring a simple, plug-and-play auxiliary loss: multi-class alignment of predictive mean confidence and predictive certainty (MACC).
Our method achieves state-of-the-art calibration performance for both in-domain and out-of-domain predictions.
arXiv Detail & Related papers (2023-09-06T00:56:24Z)
- Theoretically Principled Federated Learning for Balancing Privacy and Utility [61.03993520243198]
We propose a general learning framework for protection mechanisms that preserve privacy by distorting model parameters.
It can achieve a personalized utility-privacy trade-off for each model parameter, on each client, at each communication round of federated learning.
arXiv Detail & Related papers (2023-05-24T13:44:02Z)
- Bridging Precision and Confidence: A Train-Time Loss for Calibrating Object Detection [58.789823426981044]
We propose a novel auxiliary loss formulation that aims to align the class confidence of bounding boxes with the accuracy of predictions.
Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error for both in-domain and out-of-domain scenarios.
arXiv Detail & Related papers (2023-03-25T08:56:21Z)
- Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper addresses the practical and theoretical importance of developing machine learning algorithms that preserve privacy while ensuring good performance.
arXiv Detail & Related papers (2022-09-09T08:54:13Z)
- Private Domain Adaptation from a Public Source [48.83724068578305]
We design differentially private discrepancy-based algorithms for adaptation from a source domain with public labeled data to a target domain with unlabeled private data.
Our solutions are based on private variants of Frank-Wolfe and Mirror-Descent algorithms.
arXiv Detail & Related papers (2022-08-12T06:52:55Z)
- Domain-Adjusted Regression or: ERM May Already Learn Features Sufficient for Out-of-Distribution Generalization [52.7137956951533]
We argue that devising simpler methods for learning predictors on existing features is a promising direction for future research.
We introduce Domain-Adjusted Regression (DARE), a convex objective for learning a linear predictor that is provably robust under a new model of distribution shift.
Under a natural model, we prove that the DARE solution is the minimax-optimal predictor for a constrained set of test distributions.
arXiv Detail & Related papers (2022-02-14T16:42:16Z)
- Learning Prediction Intervals for Regression: Generalization and Calibration [12.576284277353606]
We study the generation of prediction intervals in regression for uncertainty quantification.
We use a general learning theory to characterize the optimality-feasibility tradeoff that encompasses Lipschitz continuity and VC-subgraph classes.
We empirically demonstrate the strengths of our interval generation and calibration algorithms in terms of testing performances compared to existing benchmarks.
arXiv Detail & Related papers (2021-02-26T17:55:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.