Efficient Robust Conformal Prediction via Lipschitz-Bounded Networks
- URL: http://arxiv.org/abs/2506.05434v2
- Date: Tue, 10 Jun 2025 07:49:58 GMT
- Title: Efficient Robust Conformal Prediction via Lipschitz-Bounded Networks
- Authors: Thomas Massena, Léo Andéol, Thibaut Boissin, Franck Mamalet, Corentin Friedrich, Mathieu Serrurier, Sébastien Gerchinovitz
- Abstract summary: Conformal Prediction (CP) has proven to be an effective method for improving the trustworthiness of neural networks. We propose a new method that leverages Lipschitz-bounded networks to precisely and efficiently estimate robust CP sets. Our lip-rcp method makes this second approach as efficient as vanilla CP while also allowing robustness guarantees.
- Score: 7.428082880875367
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conformal Prediction (CP) has proven to be an effective post-hoc method for improving the trustworthiness of neural networks by providing prediction sets with finite-sample guarantees. However, under adversarial attacks, classical conformal guarantees no longer hold: this problem is addressed in the field of Robust Conformal Prediction. Several methods have been proposed to provide robust CP sets with guarantees under adversarial perturbations, but, for large-scale problems, these sets are either too large or the methods too computationally demanding to be deployed in real-life scenarios. In this work, we propose a new method that leverages Lipschitz-bounded networks to precisely and efficiently estimate robust CP sets. When combined with a 1-Lipschitz robust network, we demonstrate that our lip-rcp method outperforms state-of-the-art results in both the size of the robust CP sets and computational efficiency in medium and large-scale scenarios such as ImageNet. Taking a different angle, we also study vanilla CP under attack, and derive new worst-case coverage bounds of vanilla CP sets, which are valid simultaneously for all adversarial attack levels. Our lip-rcp method makes this second approach as efficient as vanilla CP while also allowing robustness guarantees.
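To make the mechanism concrete, here is a minimal Python sketch of split conformal prediction with the Lipschitz-based threshold inflation the abstract describes. The helper names, the nonconformity-score convention, and the exact inflation rule are illustrative assumptions, not the paper's verbatim construction.

```python
import numpy as np

def conformal_threshold(cal_scores, alpha=0.1):
    # Split-CP threshold: the ceil((n+1)(1-alpha))/n empirical quantile of
    # the calibration nonconformity scores (finite-sample correction).
    n = len(cal_scores)
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(cal_scores, q_level, method="higher")

def robust_prediction_set(test_scores, tau, lip_const, eps):
    # If the score network is lip_const-Lipschitz in its input, an l2
    # perturbation of radius eps can raise any label's nonconformity score
    # by at most lip_const * eps. Inflating the threshold by that margin
    # keeps the true label covered under any such attack (a sketch of the
    # idea; the paper's exact construction may differ).
    return np.where(test_scores <= tau + lip_const * eps)[0]

# Hypothetical usage: lower score = more conforming label.
rng = np.random.default_rng(0)
cal_scores = rng.uniform(size=1000)   # s(x_i, y_i) on held-out calibration data
test_scores = rng.uniform(size=10)    # s(x, y) for every candidate label y
tau = conformal_threshold(cal_scores, alpha=0.1)
print(robust_prediction_set(test_scores, tau, lip_const=1.0, eps=0.14))
```

With a 1-Lipschitz network the inflation term is just eps itself, which is why certification in this style costs no more than a single vanilla forward pass, in contrast to smoothing- or verification-based robust CP.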
Related papers
- One Sample is Enough to Make Conformal Prediction Robust [53.78604391939934]
We show that conformal prediction attains some robustness even with a forward pass on a single randomly perturbed input. Our approach returns robust sets with smaller average set size than SOTA methods, which use many (e.g. around 100) passes per input.
arXiv Detail & Related papers (2025-06-19T19:14:25Z)
- Robust Conformal Prediction with a Single Binary Certificate [58.450154976190795]
Conformal prediction (CP) converts any model's output to prediction sets with a guarantee to cover the true label with (adjustable) high probability. We propose a robust conformal prediction method that produces smaller sets even with significantly fewer Monte Carlo (MC) samples.
arXiv Detail & Related papers (2025-03-07T08:41:53Z)
- BiCert: A Bilinear Mixed Integer Programming Formulation for Precise Certified Bounds Against Data Poisoning Attacks [62.897993591443594]
Data poisoning attacks pose one of the biggest threats to modern AI systems.
arXiv Detail & Related papers (2024-12-13T14:56:39Z)
- Verifiably Robust Conformal Prediction [1.391198481393699]
This paper introduces VRCP (Verifiably Robust Conformal Prediction), a new framework that leverages neural network verification methods to recover coverage guarantees under adversarial attacks.
Our method is the first to support perturbations bounded by arbitrary norms including $\ell_1$, $\ell_2$, and $\ell_\infty$, as well as regression tasks.
In every case, VRCP achieves above nominal coverage and yields significantly more efficient and informative prediction regions than the SotA.
arXiv Detail & Related papers (2024-05-29T09:50:43Z)
- The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks [90.52808174102157]
In safety-critical applications such as medical imaging and autonomous driving, it is imperative to maintain both high adversarial robustness to protect against potential adversarial attacks and reliable uncertainty quantification in decision-making.
A notable knowledge gap remains concerning the uncertainty inherent in adversarially trained models.
This study investigates the uncertainty of deep learning models by examining the performance of conformal prediction (CP) in the context of standard adversarial attacks.
arXiv Detail & Related papers (2024-05-14T18:05:19Z)
- Provably Robust Conformal Prediction with Improved Efficiency [29.70455766394585]
Conformal prediction is a powerful tool to generate uncertainty sets with guaranteed coverage.
However, adversarial examples can manipulate conformal methods into constructing prediction sets with invalid coverage rates.
We propose two novel methods, Post-Training Transformation (PTT) and Robust Conformal Training (RCT), to effectively reduce prediction set size with little overhead.
arXiv Detail & Related papers (2024-04-30T15:49:01Z)
- Probabilistically robust conformal prediction [9.401004747930974]
Conformal prediction (CP) is a framework to quantify uncertainty of machine learning classifiers including deep neural networks.
Almost all existing work on CP assumes clean test data, and little is known about the robustness of CP algorithms.
This paper studies the problem of probabilistically robust conformal prediction (PRCP) which ensures robustness to most perturbations.
arXiv Detail & Related papers (2023-07-31T01:32:06Z)
- Approximating Full Conformal Prediction at Scale via Influence Functions [30.391742057634264]
Conformal prediction (CP) is a wrapper around traditional machine learning models.
In this paper, we use influence functions to efficiently approximate full CP.
arXiv Detail & Related papers (2022-02-02T22:38:40Z)
- Learning Optimal Conformal Classifiers [32.68483191509137]
Conformal prediction (CP) is used to predict confidence sets containing the true class with a user-specified probability.
This paper explores strategies to differentiate through CP during training, with the goal of training the model and the conformal wrapper end-to-end.
We show that conformal training (ConfTr) outperforms state-of-the-art CP methods for classification by reducing the average confidence set size (a minimal sketch of the idea appears after this list).
arXiv Detail & Related papers (2021-10-18T11:25:33Z)
- Online Adversarial Attacks [57.448101834579624]
We formalize the online adversarial attack problem, emphasizing two key elements found in real-world use-cases.
We first rigorously analyze a deterministic variant of the online threat model.
We then propose a simple yet practical algorithm that yields a provably better competitive ratio for $k=2$ than the current best single-threshold algorithm.
arXiv Detail & Related papers (2021-03-02T20:36:04Z)
- Selective Classification via One-Sided Prediction [54.05407231648068]
A one-sided prediction (OSP) based relaxation yields a selective classification (SC) scheme that attains near-optimal coverage in the practically relevant high-target-accuracy regime.
We theoretically derive generalization bounds for SC and OSP, and empirically show that our scheme strongly outperforms state-of-the-art methods in coverage at small error levels.
arXiv Detail & Related papers (2020-10-15T16:14:27Z)
- Second-Order Provable Defenses against Adversarial Attacks [63.34032156196848]
We show that if the eigenvalues of the Hessian of the network are bounded, we can compute a certificate in the $\ell_2$ norm efficiently using convex optimization.
We achieve certified robust accuracies of 69.79%, 57.78%, and 53.19%, while IBP-based methods achieve 44.96%, 44.74%, and 44.66% on 2-, 3-, and 4-layer networks respectively.
arXiv Detail & Related papers (2020-06-01T05:55:18Z)
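As a companion to the ConfTr entry above, here is a minimal PyTorch sketch of a differentiable set-size loss in the spirit of conformal training. The score choice, half-batch calibration split, temperature, and the use of torch.quantile are assumptions made for a self-contained illustration, not the paper's exact recipe.

```python
import torch

def conftr_size_loss(logits, labels, alpha=0.1, temperature=0.1):
    # Nonconformity score: 1 - softmax probability of a class (an assumption
    # for illustration; conformal training supports other scores).
    scores = 1.0 - logits.softmax(dim=-1)
    n = logits.shape[0] // 2
    # Mimic calibration on one half of the batch: the threshold is a quantile
    # of the true-label scores (torch.quantile is differentiable in its input,
    # used here in place of the paper's smooth sorting).
    cal_scores = scores[:n].gather(1, labels[:n, None]).squeeze(1)
    q_level = min(1.0, (n + 1) * (1 - alpha) / n)
    tau = torch.quantile(cal_scores, q_level)
    # "Predict" on the other half: soft set membership per label, then
    # penalize the expected prediction-set size (ConfTr also adds a
    # coverage/classification term, omitted here).
    soft_membership = torch.sigmoid((tau - scores[n:]) / temperature)
    return soft_membership.sum(dim=1).mean()

# Hypothetical usage inside a training step.
logits = torch.randn(64, 10, requires_grad=True)
labels = torch.randint(0, 10, (64,))
loss = conftr_size_loss(logits, labels)
loss.backward()  # gradients flow through the quantile and the sigmoid
```

Minimizing this loss pushes the model toward scores whose conformal sets are small at the target coverage level, which is the end-to-end objective the entry describes.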
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.