Boosting Verified Training for Robust Image Classifications via
Abstraction
- URL: http://arxiv.org/abs/2303.11552v1
- Date: Tue, 21 Mar 2023 02:38:14 GMT
- Title: Boosting Verified Training for Robust Image Classifications via
Abstraction
- Authors: Zhaodi Zhang, Zhiyi Xue, Yang Chen, Si Liu, Yueling Zhang, Jing Liu,
Min Zhang
- Abstract summary: This paper proposes a novel, abstraction-based, certified training method for robust image classifiers.
By training on intervals, all perturbed images that are mapped to the same interval are classified with the same label.
Owing to the abstraction, our training method also enables a sound and complete black-box verification approach.
- Score: 20.656457368486876
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a novel, abstraction-based, certified training method for
robust image classifiers. Via abstraction, all perturbed images are mapped into
intervals before being fed into neural networks for training. By training on
intervals, all perturbed images that are mapped to the same interval are
classified with the same label, which keeps the variance of the training set
small and the loss landscape of the models smooth. Consequently, our
approach significantly improves the robustness of trained models. Owing to the
abstraction, our training method also enables a sound and complete black-box
verification approach, which is orthogonal and scales to arbitrary types of
neural networks regardless of their size and architecture. We evaluate our
method on a wide range of benchmarks of different scales. The experimental
results show that our method outperforms the state of the art by (i) reducing the
verified errors of trained models by up to 95.64%; (ii) achieving up to a
602.50x total speedup; and (iii) scaling up to larger models with up to 138 million
trainable parameters. The demo is available at
https://github.com/zhangzhaodi233/ABSCERT.git.
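The interval abstraction described above can be sketched in a few lines of code. The snippet below is a hypothetical illustration, not code from the ABSCERT repository: it assumes pixel values in [0, 1] and a fixed interval width `grid`, and shows how an image, together with any perturbation of it whose pixels stay inside the same intervals, collapses onto identical lower/upper bounds before being fed to a network.

    import torch

    def abstract_image(image: torch.Tensor, grid: float = 0.1) -> torch.Tensor:
        """Map an image to per-pixel intervals (hypothetical sketch, not ABSCERT code)."""
        # Each pixel value is replaced by the bounds of the fixed-width interval
        # that contains it.
        lower = torch.floor(image / grid) * grid      # interval lower bounds
        upper = (lower + grid).clamp(max=1.0)         # interval upper bounds
        # Stack the bounds as extra channels: (C, H, W) -> (2*C, H, W).
        return torch.cat([lower, upper], dim=0)

    # Usage sketch: a perturbation that keeps every pixel inside its interval
    # yields exactly the same abstract input, and hence the same prediction.
    x = torch.rand(3, 32, 32)
    noise = (torch.rand_like(x) - 0.5) * 0.01
    same = torch.equal(abstract_image(x), abstract_image((x + noise).clamp(0, 1)))
    print(same)  # True whenever no pixel crosses an interval boundary

Training then operates on these interval inputs rather than on concrete pixels, which is why all perturbed images that fall into the same intervals receive the same label.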
Related papers
- Data Attribution for Text-to-Image Models by Unlearning Synthesized Images [71.23012718682634]
The goal of data attribution for text-to-image models is to identify the training images that most influence the generation of a new image.
We propose a new approach that efficiently identifies highly-influential images.
arXiv Detail & Related papers (2024-06-13T17:59:44Z)
- Pre-Trained Vision-Language Models as Partial Annotators [40.89255396643592]
Pre-trained vision-language models learn from massive data to model unified representations of images and natural language.
In this paper, we investigate a novel "pre-trained annotating - weakly-supervised learning" paradigm for applying pre-trained models and experiment on image classification tasks.
arXiv Detail & Related papers (2024-05-23T17:17:27Z)
- Efficient Denoising using Score Embedding in Score-based Diffusion Models [0.24578723416255752]
We propose a method to increase the efficiency of training score-based diffusion models.
We accomplish this by solving the log-density Fokker-Planck (FP) Equation numerically.
The pre-computed score is embedded into the image to encourage faster training under the sliced Wasserstein distance.
arXiv Detail & Related papers (2024-04-10T00:05:55Z)
- FreeSeg-Diff: Training-Free Open-Vocabulary Segmentation with Diffusion Models [56.71672127740099]
We focus on the task of image segmentation, which is traditionally solved by training models on closed-vocabulary datasets.
We leverage several relatively small, open-source foundation models for zero-shot open-vocabulary segmentation.
Our approach (dubbed FreeSeg-Diff), which does not rely on any training, outperforms many training-based approaches on both Pascal VOC and COCO datasets.
arXiv Detail & Related papers (2024-03-29T10:38:25Z)
- Flew Over Learning Trap: Learn Unlearnable Samples by Progressive Staged Training [28.17601195439716]
Unlearning techniques generate unlearnable samples by adding imperceptible perturbations to data before it is released publicly.
We conduct an in-depth analysis and observe that models can learn both the image features and the perturbation features of unlearnable samples at an early stage.
We propose Progressive Staged Training to effectively prevent models from overfitting in learning perturbation features.
arXiv Detail & Related papers (2023-06-03T09:36:16Z)
- EfficientTrain: Exploring Generalized Curriculum Learning for Training Visual Backbones [80.662250618795]
This paper presents a new curriculum learning approach for the efficient training of visual backbones (e.g., vision Transformers).
As an off-the-shelf method, it reduces the wall-time training cost of a wide variety of popular models by >1.5x on ImageNet-1K/22K without sacrificing accuracy.
arXiv Detail & Related papers (2022-11-17T17:38:55Z)
- Scaling Laws For Deep Learning Based Image Reconstruction [26.808569077500128]
We study whether major performance gains are expected from scaling up the training set size.
An initially steep power-law scaling slows significantly already at moderate training set sizes.
We analytically characterize the performance of a linear estimator learned with early stopped gradient descent.
arXiv Detail & Related papers (2022-09-27T14:44:57Z)
- Meta Internal Learning [88.68276505511922]
Internal learning for single-image generation is a framework in which a generator is trained to produce novel images based on a single image.
We propose a meta-learning approach that enables training over a collection of images, in order to model the internal statistics of the sample image more effectively.
Our results show that the models obtained are as suitable as single-image GANs for many common image applications.
arXiv Detail & Related papers (2021-10-06T16:27:38Z)
- Fidelity Estimation Improves Noisy-Image Classification with Pretrained Networks [12.814135905559992]
We propose a method that can be applied to a pretrained classifier.
Our method exploits a fidelity map estimate that is fused into the internal representations of the feature extractor.
We show that when using our oracle fidelity map we even outperform the fully retrained methods, whether trained on noisy or restored images.
arXiv Detail & Related papers (2021-06-01T17:58:32Z)
- Encoding Robustness to Image Style via Adversarial Feature Perturbations [72.81911076841408]
We adapt adversarial training by directly perturbing feature statistics, rather than image pixels, to produce robust models.
Our proposed method, Adversarial Batch Normalization (AdvBN), is a single network layer that generates worst-case feature perturbations during training. (A rough sketch of this statistic-perturbation idea appears after this list.)
arXiv Detail & Related papers (2020-09-18T17:52:34Z)
- Learning to Learn Parameterized Classification Networks for Scalable Input Images [76.44375136492827]
Convolutional Neural Networks (CNNs) do not exhibit predictable recognition behavior with respect to changes in input resolution.
We employ meta learners to generate convolutional weights of main networks for various input scales.
We further utilize knowledge distillation on the fly over model predictions based on different input resolutions.
arXiv Detail & Related papers (2020-07-13T04:27:25Z)
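The Adversarial Batch Normalization entry above describes perturbing feature statistics instead of pixels. The sketch below illustrates that general idea only; it is a hypothetical example with assumed details (per-channel mean/std shifts supplied by the caller), not the authors' AdvBN layer.

    import torch

    def perturb_feature_statistics(features: torch.Tensor,
                                   delta_mean: torch.Tensor,
                                   delta_std: torch.Tensor) -> torch.Tensor:
        """Shift the per-channel mean/std of a feature map (hypothetical sketch,
        not the AdvBN implementation)."""
        # features: (N, C, H, W); per-channel statistics over N, H, W.
        mean = features.mean(dim=(0, 2, 3), keepdim=True)
        std = features.std(dim=(0, 2, 3), keepdim=True) + 1e-5
        normalized = (features - mean) / std
        # Re-scale with perturbed statistics; in an adversarial-training setup the
        # deltas would be chosen to increase the loss rather than sampled at random.
        return normalized * (std * (1.0 + delta_std)) + (mean + delta_mean)

    # Usage sketch: random (non-adversarial) statistic perturbations on one batch.
    feats = torch.randn(8, 64, 16, 16)
    d_mean = 0.1 * torch.randn(1, 64, 1, 1)
    d_std = 0.1 * torch.randn(1, 64, 1, 1)
    robust_feats = perturb_feature_statistics(feats, d_mean, d_std)
    print(robust_feats.shape)  # torch.Size([8, 64, 16, 16])

In an actual worst-case training scheme, `delta_mean` and `delta_std` would be obtained by gradient ascent on the training loss instead of random sampling.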