TrustGAN: Training safe and trustworthy deep learning models through
generative adversarial networks
- URL: http://arxiv.org/abs/2211.13991v1
- Date: Fri, 25 Nov 2022 09:57:23 GMT
- Title: TrustGAN: Training safe and trustworthy deep learning models through
generative adversarial networks
- Authors: Hélion du Mas des Bourboux
- Abstract summary: We present TrustGAN, a generative adversarial network pipeline targeting trustworthiness.
The pipeline can accept any given deep learning model which outputs a prediction and a confidence on this prediction.
It is applied here to a target classification model trained on MNIST data to recognise numbers based on images.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning models have been developed for a variety of tasks and are
deployed every day to work in real conditions. Some of these tasks are critical
and the models need to be trusted and safe, e.g. for military communications or
cancer diagnosis. These models are given real data, simulated data, or a
combination of both, and are trained to be highly predictive on them. However,
gathering enough real data, or simulating data representative of all real
conditions, is costly, sometimes impossible due to confidentiality, and most of
the time simply unachievable: real conditions are constantly changing and
sometimes intractable. A solution is to deploy machine learning models that
give predictions only when they are confident enough, and otherwise raise a
flag or abstain. One issue is that standard models easily fail at detecting
out-of-distribution samples, on which their predictions are unreliable.
We present here TrustGAN, a generative adversarial network pipeline targeting
trustworthiness. It is a deep learning pipeline which improves a target model's
estimation of its confidence without impacting its predictive power. The
pipeline can accept any deep learning model that outputs a prediction and a
confidence on this prediction. Moreover, the pipeline does not need to modify
this target model, so it can easily be deployed in an MLOps (Machine Learning
Operations) setting.
The pipeline is applied here to a target classification model trained on
MNIST data to recognise numbers based on images. We compare this model trained
in the standard way and trained with TrustGAN. We show that on
out-of-distribution samples, here FashionMNIST and CIFAR10, the estimated
confidence is greatly reduced. We reach similar conclusions for a
classification model trained on 1D radio signals from AugMod, tested on
RML2016.04C. We also publicly release the code.
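The abstract describes a model interface that returns both a prediction and a confidence, and a deployment mode that abstains or raises a flag when the confidence is too low. The sketch below illustrates that interface; the PyTorch classifier, the max-softmax confidence and the threshold are illustrative assumptions, not code from the released repository.

```python
import torch
import torch.nn.functional as F

def predict_with_confidence(model: torch.nn.Module, x: torch.Tensor):
    """Wrap any classifier returning logits so it outputs (prediction, confidence),
    the interface the pipeline expects (max-softmax is an illustrative choice)."""
    with torch.no_grad():
        logits = model(x)                        # shape: (batch, num_classes)
        probs = F.softmax(logits, dim=-1)
        confidence, prediction = probs.max(dim=-1)
    return prediction, confidence

def predict_or_abstain(model, x, threshold=0.9):
    """Return the predicted class, or None (abstain / raise a flag) when the
    estimated confidence falls below an illustrative threshold."""
    prediction, confidence = predict_with_confidence(model, x)
    return [int(p) if float(c) >= threshold else None
            for p, c in zip(prediction, confidence)]
```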
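The abstract does not detail the adversarial objective. A GAN pipeline aimed at better confidence estimation plausibly alternates a generator that crafts inputs on which the target is over-confident with a target update that preserves accuracy on real data while lowering confidence on the generated challenges. The step below is a hedged sketch under that assumption only; the losses, the weighting factor `lam` and the latent dimension are hypothetical and are not the paper's actual objective.

```python
import torch
import torch.nn.functional as F

def adversarial_confidence_step(target, generator, x_real, y_real,
                                opt_target, opt_gen, latent_dim=64, lam=1.0):
    """One hypothetical adversarial step: keep the target accurate on real
    data while teaching it to be unconfident on generated challenge samples."""
    batch, device = x_real.size(0), x_real.device

    # Generator step: produce samples that maximise the target's confidence.
    z = torch.randn(batch, latent_dim, device=device)
    x_fake = generator(z)
    conf_fake = F.softmax(target(x_fake), dim=-1).max(dim=-1).values
    gen_loss = -conf_fake.mean()                 # try to fool the confidence estimate
    opt_gen.zero_grad()
    gen_loss.backward()
    opt_gen.step()

    # Target step: stay predictive on real data, stay unconfident on fakes.
    ce = F.cross_entropy(target(x_real), y_real)
    conf_fake = F.softmax(target(x_fake.detach()), dim=-1).max(dim=-1).values
    target_loss = ce + lam * conf_fake.mean()
    opt_target.zero_grad()
    target_loss.backward()
    opt_target.step()
    return float(ce), float(conf_fake.mean())
```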
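The reported evaluation boils down to comparing the average estimated confidence in distribution (MNIST) and out of distribution (FashionMNIST, CIFAR10). A minimal sketch of that comparison, again assuming a max-softmax confidence; the dataset loaders are left to the reader.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def mean_confidence(model, loader, device="cpu"):
    """Average max-softmax confidence over a data loader (illustrative metric)."""
    confidences = []
    for x, _ in loader:
        probs = F.softmax(model(x.to(device)), dim=-1)
        confidences.append(probs.max(dim=-1).values)
    return torch.cat(confidences).mean().item()

# Hypothetical usage with in- and out-of-distribution test loaders:
#   mnist_conf   = mean_confidence(model, mnist_test_loader)
#   fashion_conf = mean_confidence(model, fashionmnist_test_loader)
# A trustworthy model should report a much lower average confidence on the
# out-of-distribution loaders than on its training domain.
```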
Related papers
- Onboard Out-of-Calibration Detection of Deep Learning Models using Conformal Prediction [4.856998175951948]
We show that conformal prediction algorithms are related to the uncertainty of the deep learning model and that this relation can be used to detect if the deep learning model is out-of-calibration.
An out-of-calibration detection procedure relating the model uncertainty and the average size of the conformal prediction set is presented.
arXiv Detail & Related papers (2024-05-04T11:05:52Z)
- Learning Defect Prediction from Unrealistic Data [57.53586547895278]
Pretrained models of code have become popular choices for code understanding and generation tasks.
Such models tend to be large and require commensurate volumes of training data.
It has become popular to train models with far larger but less realistic datasets, such as functions with artificially injected bugs.
Models trained on such data tend to only perform well on similar data, while underperforming on real-world programs.
arXiv Detail & Related papers (2023-11-02T01:51:43Z)
- Consistent Diffusion Models: Mitigating Sampling Drift by Learning to be Consistent [97.64313409741614]
We propose to enforce a consistency property which states that predictions of the model on its own generated data are consistent across time.
We show that our novel training objective yields state-of-the-art results for conditional and unconditional generation in CIFAR-10 and baseline improvements in AFHQ and FFHQ.
arXiv Detail & Related papers (2023-02-17T18:45:04Z)
- Reliability-Aware Prediction via Uncertainty Learning for Person Image Retrieval [51.83967175585896]
UAL aims at providing reliability-aware predictions by considering data uncertainty and model uncertainty simultaneously.
Data uncertainty captures the "noise" inherent in the sample, while model uncertainty depicts the model's confidence in the sample's prediction.
arXiv Detail & Related papers (2022-10-24T17:53:20Z)
- CANIFE: Crafting Canaries for Empirical Privacy Measurement in Federated Learning [77.27443885999404]
Federated Learning (FL) is a setting for training machine learning models in distributed environments.
We propose a novel method, CANIFE, that uses carefully crafted samples by a strong adversary to evaluate the empirical privacy of a training round.
arXiv Detail & Related papers (2022-10-06T13:30:16Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold.
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- How to Learn when Data Gradually Reacts to Your Model [10.074466859579571]
We propose a new algorithm, Stateful Performative Gradient Descent (Stateful PerfGD), for minimizing the performative loss even in the presence of these effects.
Our experiments confirm that Stateful PerfGD substantially outperforms previous state-of-the-art methods.
arXiv Detail & Related papers (2021-12-13T22:05:26Z)
- Do Not Trust Prediction Scores for Membership Inference Attacks [15.567057178736402]
Membership inference attacks (MIAs) aim to determine whether a specific sample was used to train a predictive model.
We argue that this is a fallacy for many modern deep network architectures.
We are able to produce a potentially infinite number of samples falsely classified as part of the training data.
arXiv Detail & Related papers (2021-11-17T12:39:04Z)
- Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence of each query sample so as to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
arXiv Detail & Related papers (2020-02-27T10:22:17Z)