Towards Large Certified Radius in Randomized Smoothing using
Quasiconcave Optimization
- URL: http://arxiv.org/abs/2302.00209v2
- Date: Wed, 27 Dec 2023 08:09:45 GMT
- Title: Towards Large Certified Radius in Randomized Smoothing using
Quasiconcave Optimization
- Authors: Bo-Han Kung and Shang-Tse Chen
- Abstract summary: In this work, we show that by exploiting the quasiconvex problem structure, we can find the optimal certified radii for most data points with only slight computational overhead.
This leads to an efficient and effective input-specific randomized smoothing algorithm.
- Score: 3.5133481941064164
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Randomized smoothing is currently the state-of-the-art method that provides
certified robustness for deep neural networks. However, due to its excessively
conservative nature, this method of incomplete verification often cannot
achieve an adequate certified radius on real-world datasets. One way to obtain
a larger certified radius is to use an input-specific algorithm instead of
using a fixed Gaussian filter for all data points. Several methods based on
this idea have been proposed, but they either suffer from high computational
costs or gain marginal improvement in certified radius. In this work, we show
that by exploiting the quasiconvex problem structure, we can find the optimal
certified radii for most data points with slight computational overhead. This
observation leads to an efficient and effective input-specific randomized
smoothing algorithm. We conduct extensive experiments and empirical analysis on
CIFAR-10 and ImageNet. The results show that the proposed method significantly
enhances the certified radii with low computational overhead.
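As context for the abstract, the sketch below shows the standard randomized-smoothing certificate together with a brute-force per-input search over the smoothing level $\sigma$. The classifier `f`, the sigma grid, and the sample counts are illustrative assumptions; the paper's contribution is to replace the brute-force search with a cheap quasiconvex-optimization step, which this sketch does not implement.

```python
# Minimal sketch of the standard randomized-smoothing certificate with a
# brute-force per-input search over sigma. The classifier `f` (returns an
# integer class label), the sigma grid, and the sample counts are illustrative.
import numpy as np
from scipy.stats import norm, binomtest

def estimate_p_lower(f, x, sigma, n=1000, alpha=0.001):
    """Clopper-Pearson lower confidence bound on the top-class probability.
    (A rigorous certificate uses separate samples for class selection and
    for the bound, as in Cohen et al.'s CERTIFY; merged here for brevity.)"""
    preds = np.array([f(x + sigma * np.random.randn(*x.shape)) for _ in range(n)])
    top = np.bincount(preds).argmax()
    k = int((preds == top).sum())
    ci = binomtest(k, n).proportion_ci(confidence_level=1 - alpha, method="exact")
    return top, ci.low

def certified_radius(sigma, p_lower):
    """Cohen et al. (2019) l2 certificate: R = sigma * Phi^{-1}(p_lower)."""
    return sigma * norm.ppf(p_lower) if p_lower > 0.5 else 0.0

def best_radius_grid(f, x, sigmas=(0.12, 0.25, 0.5, 1.0)):
    """Brute-force search that the paper replaces with a cheap step
    exploiting quasiconvexity of the certified radius in sigma."""
    return max(certified_radius(s, estimate_p_lower(f, x, s)[1]) for s in sigmas)
```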
Related papers
- Estimating the Robustness Radius for Randomized Smoothing with 100$\times$ Sample Efficiency [6.199300239433395]
This work demonstrates that the number of samples can be reduced by one or two orders of magnitude while still computing a robustness radius that is only slightly smaller.
We provide the mathematical foundation for explaining the phenomenon while experimentally showing promising results on the standard CIFAR-10 and ImageNet datasets.
arXiv Detail & Related papers (2024-04-26T12:43:19Z)
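The sample-count dependence analyzed above can be made concrete with the Clopper-Pearson bound used in smoothing certificates: shrinking the number of samples loosens the lower bound on the top-class probability and thus shrinks the radius only mildly. The hit rate and sigma below are hypothetical, not numbers from the paper.

```python
# Illustrative only: how the certified radius degrades as the Monte Carlo
# sample count shrinks. The hit rate p_true = 0.99 and sigma = 0.5 are made up.
import numpy as np
from scipy.stats import beta, norm

def radius_at(n, p_true=0.99, sigma=0.5, alpha=0.001):
    k = int(round(p_true * n))               # idealized success count
    p_lower = beta.ppf(alpha, k, n - k + 1)  # Clopper-Pearson lower bound
    return sigma * norm.ppf(p_lower)

for n in (100_000, 10_000, 1_000):
    print(n, round(radius_at(n), 3))         # the radius shrinks slowly with n
```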
- Efficient Dataset Distillation Using Random Feature Approximation [109.07737733329019]
We propose a novel algorithm that uses a random feature approximation (RFA) of the Neural Network Gaussian Process (NNGP) kernel.
Our algorithm provides at least a 100-fold speedup over KIP and can run on a single GPU.
Our new method, termed RFA Distillation (RFAD), performs competitively with KIP and other dataset condensation algorithms in accuracy over a range of large-scale datasets.
arXiv Detail & Related papers (2022-10-21T15:56:13Z)
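Random feature approximation replaces an exact kernel with an inner product of finite random feature maps. The sketch below uses random ReLU features as a generic stand-in; the feature width and scaling are illustrative, not RFAD's exact NNGP construction.

```python
# Generic random-feature kernel approximation in the spirit of RFAD. Random
# ReLU features stand in for the NNGP kernel; width and scaling are illustrative.
import numpy as np

def random_relu_features(X, n_features=4096, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_features)) / np.sqrt(X.shape[1])
    return np.maximum(X @ W, 0.0) * np.sqrt(2.0 / n_features)

# K(x, y) ~= phi(x) . phi(y): kernel-based objectives (e.g., the regression
# loss inside a distillation step) then cost O(n * n_features) rather than
# requiring an exact, much more expensive kernel computation.
X = np.random.randn(8, 32)
phi = random_relu_features(X)
K_approx = phi @ phi.T
```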
- Communication-Efficient Adam-Type Algorithms for Distributed Data Mining [93.50424502011626]
We propose a class of novel distributed Adam-type algorithms (i.e., SketchedAMSGrad) utilizing sketching.
Our new algorithm achieves a fast convergence rate of $O(\frac{1}{\sqrt{nT}} + \frac{1}{(k/d)^2 T})$ with a communication cost of $O(k \log(d))$ at each iteration.
arXiv Detail & Related papers (2022-10-14T01:42:05Z)
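The communication saving in this line of work comes from transmitting a k-dimensional sketch of each d-dimensional gradient. A toy count-sketch compressor is sketched below; the sketch size and hash construction are illustrative assumptions rather than the paper's exact operator.

```python
# Toy count-sketch gradient compressor: each worker sends k counters instead
# of d coordinates. Sketch size k and the hash construction are illustrative.
import numpy as np

class CountSketch:
    def __init__(self, d, k, seed=0):
        rng = np.random.default_rng(seed)
        self.bucket = rng.integers(0, k, size=d)     # coordinate -> bucket
        self.sign = rng.choice([-1.0, 1.0], size=d)  # random signs
        self.k = k

    def compress(self, g):                           # only k numbers are sent
        s = np.zeros(self.k)
        np.add.at(s, self.bucket, self.sign * g)
        return s

    def decompress(self, s):                         # unbiased estimate of g
        return self.sign * s[self.bucket]

d, k = 10_000, 500
cs = CountSketch(d, k)
g = np.random.randn(d)
g_hat = cs.decompress(cs.compress(g))  # would feed the AMSGrad-style update
```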
- Smooth-Reduce: Leveraging Patches for Improved Certified Robustness [100.28947222215463]
We propose a training-free, modified smoothing approach, Smooth-Reduce.
Our algorithm classifies overlapping patches extracted from an input image, and aggregates the predicted logits to certify a larger radius around the input.
We provide theoretical guarantees for such certificates, and empirically show significant improvements over other randomized smoothing methods.
arXiv Detail & Related papers (2022-05-12T15:26:20Z)
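Smooth-Reduce's core loop classifies overlapping patches and aggregates their logits before certification. A minimal sketch follows; the patch size, stride, and mean aggregation are illustrative assumptions.

```python
# Minimal patch-and-aggregate sketch in the spirit of Smooth-Reduce.
# Patch size, stride, and mean-logit aggregation are illustrative choices.
import numpy as np

def aggregate_patch_logits(model, image, patch=24, stride=4):
    """image: (H, W, C) array; model maps a patch to a logit vector."""
    H, W, _ = image.shape
    logits = [
        model(image[i:i + patch, j:j + patch])
        for i in range(0, H - patch + 1, stride)
        for j in range(0, W - patch + 1, stride)
    ]
    return np.mean(logits, axis=0)  # aggregated prediction fed to the certifier
```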
- Large-scale Optimization of Partial AUC in a Range of False Positive Rates [51.12047280149546]
The area under the ROC curve (AUC) is one of the most widely used performance measures for classification models in machine learning.
We develop an efficient approximate gradient descent method based on a recent practical envelope smoothing technique.
Our proposed algorithm can also be used to minimize a sum of ranked-range losses, which likewise lacks efficient solvers.
arXiv Detail & Related papers (2022-03-03T03:46:18Z)
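For reference, the partial AUC in a false-positive-rate range can be computed empirically as below; the FPR bounds and toy data are illustrative. The paper's contribution is an efficient way to optimize this non-decomposable quantity at scale, not this formula.

```python
# Empirical partial AUC restricted to an FPR range. This is the target
# quantity only; the paper addresses optimizing it efficiently.
import numpy as np
from sklearn.metrics import roc_curve

def partial_auc(y_true, scores, fpr_lo=0.05, fpr_hi=0.30):
    fpr, tpr, _ = roc_curve(y_true, scores)
    grid = np.linspace(fpr_lo, fpr_hi, 201)
    tpr_on_grid = np.interp(grid, fpr, tpr)  # TPR restricted to [fpr_lo, fpr_hi]
    return tpr_on_grid.mean()                # normalized pAUC on a uniform grid

y = np.array([0, 0, 1, 1, 0, 1])
s = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7])
print(partial_auc(y, s))
```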
- Data Dependent Randomized Smoothing [127.34833801660233]
We show that our data dependent framework can be seamlessly incorporated into 3 randomized smoothing approaches.
We get 9% and 6% improvements over the certified accuracy of the strongest baseline for a radius of 0.5 on CIFAR-10 and ImageNet, respectively.
arXiv Detail & Related papers (2020-12-08T10:53:11Z)
- Efficient Nonlinear RX Anomaly Detectors [7.762712532657168]
We propose two families of techniques to improve the efficiency of the standard kernel Reed-Xiaoli (RX) method for anomaly detection.
We show that the proposed efficient methods have a lower computational cost and perform similarly to (or outperform) the standard kernel RX algorithm.
arXiv Detail & Related papers (2020-12-07T21:57:54Z)
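The linear RX detector that kernel RX generalizes is a Mahalanobis distance to the background statistics, as in the sketch below; the kernel variant replaces inner products with kernel evaluations, which is the cost the proposed techniques reduce. The synthetic data shapes are illustrative.

```python
# Standard (linear) Reed-Xiaoli anomaly detector: Mahalanobis distance of each
# pixel spectrum to the background mean.
import numpy as np

def rx_scores(X):
    """X: (n_pixels, n_bands). Returns one anomaly score per pixel."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    cov_inv = np.linalg.pinv(cov)  # pseudo-inverse for numerical stability
    D = X - mu
    return np.einsum("ij,jk,ik->i", D, cov_inv, D)

X = np.random.randn(500, 64)       # synthetic hyperspectral pixels
scores = rx_scores(X)              # large score => likely anomaly
```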
- Higher-Order Certification for Randomized Smoothing [78.00394805536317]
We propose a framework to improve the certified safety region for smoothed classifiers.
We provide a method to calculate the certified safety region using zeroth-order and first-order information.
We also provide a framework that generalizes the calculation for certification using higher-order information.
arXiv Detail & Related papers (2020-10-13T19:35:48Z)
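Here the zeroth-order information is the Monte Carlo estimate of the smoothed class probability, and first-order information can be obtained from the same Gaussian samples via the standard smoothing identity $\nabla_x \mathbb{E}[f(x+\sigma\epsilon)] = \mathbb{E}[f(x+\sigma\epsilon)\,\epsilon]/\sigma$. The estimator below is a generic illustration, not necessarily the paper's construction.

```python
# Estimating zeroth- and first-order information about a smoothed classifier
# from the same Gaussian samples. The Stein-type gradient estimator is a
# standard identity, shown for illustration rather than as the paper's method.
import numpy as np

def smoothed_stats(indicator, x, sigma=0.25, n=10_000, seed=0):
    """indicator(z) -> 1.0 if z is classified as the target class, else 0.0."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal((n, *x.shape))
    vals = np.array([indicator(x + sigma * e) for e in eps])
    p = vals.mean()                       # zeroth-order: E[f(x + sigma * eps)]
    grad = (vals[:, None] * eps.reshape(n, -1)).mean(axis=0) / sigma
    return p, grad.reshape(x.shape)       # first-order: gradient estimate
```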
This list is automatically generated from the titles and abstracts of the papers on this site.