Personalized Privacy Auditing and Optimization at Test Time
- URL: http://arxiv.org/abs/2302.00077v1
- Date: Tue, 31 Jan 2023 20:16:59 GMT
- Title: Personalized Privacy Auditing and Optimization at Test Time
- Authors: Cuong Tran, Ferdinando Fioretto
- Abstract summary: This paper asks whether it is necessary to require all input features for a model to return accurate predictions at test time.
Under a personalized setting, each individual may need to release only a small subset of these features without impacting the final decisions.
Evaluation over several learning tasks shows that individuals may be able to report as little as 10% of their information to ensure the same level of accuracy.
- Score: 44.15285550981899
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A number of learning models used in consequential domains, such as to assist
in legal, banking, hiring, and healthcare decisions, make use of potentially
sensitive users' information to carry out inference. Further, the complete set
of features is typically required to perform inference. This not only poses
severe privacy risks for the individuals using the learning systems, but also
requires companies and organizations to expend massive human effort to verify the
correctness of the released information.
This paper asks whether it is necessary to require \emph{all} input features
for a model to return accurate predictions at test time and shows that, under a
personalized setting, each individual may need to release only a small subset
of these features without impacting the final decisions. The paper also
provides an efficient sequential algorithm that chooses which attributes should
be provided by each individual. Evaluation over several learning tasks shows
that individuals may be able to report as little as 10\% of their information
to ensure the same level of accuracy as a model that uses the complete users'
information.
Related papers
- Data Minimization at Inference Time [44.15285550981899]
In domains with high stakes such as law, recruitment, and healthcare, learning models frequently rely on sensitive user data for inference.
This paper asks whether it is necessary to use all input features for accurate predictions at inference time.
The paper demonstrates that, in a personalized setting, individuals may only need to disclose a small subset of their features without compromising decision-making accuracy.
arXiv Detail & Related papers (2023-05-27T23:03:41Z)
- ASPEST: Bridging the Gap Between Active Learning and Selective Prediction [56.001808843574395]
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain.
arXiv Detail & Related papers (2023-04-07T23:51:07Z)
- Assisting Human Decisions in Document Matching [52.79491990823573]
We devise a proxy matching task that allows us to evaluate which kinds of assistive information improve decision makers' performance.
We find that providing black-box model explanations reduces users' accuracy on the matching task.
On the other hand, custom methods that are designed to closely attend to some task-specific desiderata are found to be effective in improving user performance.
arXiv Detail & Related papers (2023-02-16T17:45:20Z)
- Can Foundation Models Help Us Achieve Perfect Secrecy? [11.073539163281524]
A key promise of machine learning is the ability to assist users with personal tasks.
A gold standard privacy-preserving system will satisfy perfect secrecy.
However, privacy and quality appear to be in tension in existing systems for personal tasks.
arXiv Detail & Related papers (2022-05-27T02:32:26Z)
- SF-PATE: Scalable, Fair, and Private Aggregation of Teacher Ensembles [50.90773979394264]
This paper studies a model that protects the privacy of individuals' sensitive information while still learning non-discriminatory predictors.
A key characteristic of the proposed model is that it enables off-the-shelf, non-private fair models to be used to create a privacy-preserving and fair model.
arXiv Detail & Related papers (2022-04-11T14:42:54Z)
- Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while still learning non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints.
arXiv Detail & Related papers (2020-09-26T10:50:33Z)
- Fairness-Aware Learning with Prejudice Free Representations [2.398608007786179]
We propose a novel algorithm that can effectively identify and treat latent discriminating features.
The approach helps to collect discrimination-free features that improve model performance.
arXiv Detail & Related papers (2020-02-26T10:06:31Z)