Training Image Derivatives: Increased Accuracy and Universal Robustness
- URL: http://arxiv.org/abs/2310.14045v2
- Date: Mon, 27 Nov 2023 19:43:36 GMT
- Title: Training Image Derivatives: Increased Accuracy and Universal Robustness
- Authors: Vsevolod I. Avrutskiy
- Abstract summary: Derivative training is a known method that significantly improves the accuracy of neural networks in some low-dimensional applications.
In this paper, a similar improvement is obtained for an image analysis problem: reconstructing the vertices of a cube from its image.
The derivatives also offer insight into the robustness problem, which is currently understood in terms of two types of network vulnerabilities.
- Score: 3.9160947065896803
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Derivative training is a known method that significantly improves the
accuracy of neural networks in some low-dimensional applications. In this
paper, a similar improvement is obtained for an image analysis problem:
reconstructing the vertices of a cube from its image. By training the
derivatives with respect to the 6 degrees of freedom of the cube, we obtain 25
times more accurate results for noiseless inputs. The derivatives also offer
insight into the robustness problem, which is currently understood in terms of
two types of network vulnerabilities. The first type involves small
perturbations that dramatically change the output, and the second type relates
to substantial image changes that the network erroneously ignores. Defense
against each is possible, but safeguarding against both while maintaining
accuracy defies conventional training methods. The first type is analyzed
using the network's gradient, while the second relies on human evaluation of
the inputs, serving as a substitute for an oracle. For the task at hand, the
nearest neighbor oracle can be defined and expanded into a Taylor series using
image derivatives.
This allows for a robustness analysis that unifies both types of
vulnerabilities and enables training where accuracy and universal robustness
are limited only by network capacity.
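To make the derivative-training idea concrete, here is a minimal JAX sketch, not the paper's implementation: `render`, `cube_vertices`, and `net` are toy stand-ins (the paper's renderer, network architecture, and the loss weight `lam` are assumptions here), with the pose encoded as three Euler angles plus a translation.

```python
import jax
import jax.numpy as jnp


def render(theta):
    """Toy differentiable 'renderer': a smooth 32x32 image driven by the 6-DOF pose."""
    xs = jnp.linspace(-1.0, 1.0, 32)
    xx, yy = jnp.meshgrid(xs, xs)
    return (jnp.sin(theta[0] * xx + theta[1] * yy + theta[2])
            + theta[3] * xx + theta[4] * yy + theta[5])


def cube_vertices(theta):
    """Ground truth: corners of a unit cube rotated by Euler angles theta[:3]
    and translated by theta[3:], flattened to a 24-vector."""
    cx, sx = jnp.cos(theta[0]), jnp.sin(theta[0])
    cy, sy = jnp.cos(theta[1]), jnp.sin(theta[1])
    cz, sz = jnp.cos(theta[2]), jnp.sin(theta[2])
    Rx = jnp.array([[1., 0., 0.], [0., cx, -sx], [0., sx, cx]])
    Ry = jnp.array([[cy, 0., sy], [0., 1., 0.], [-sy, 0., cy]])
    Rz = jnp.array([[cz, -sz, 0.], [sz, cz, 0.], [0., 0., 1.]])
    corners = jnp.array([[x, y, z] for x in (-1., 1.)
                         for y in (-1., 1.) for z in (-1., 1.)])
    return ((Rz @ Ry @ Rx @ corners.T).T + theta[3:6]).reshape(-1)


def net(params, image):
    """Toy network: a single linear layer on the flattened image."""
    W, b = params
    return image.reshape(-1) @ W + b


def derivative_training_loss(params, theta, lam=1.0):
    """Value term plus a Sobolev-style term that matches the Jacobians of
    prediction and ground truth with respect to the 6 degrees of freedom."""
    f = lambda t: net(params, render(t))        # network composed with renderer
    value_err = jnp.sum((f(theta) - cube_vertices(theta)) ** 2)
    # Forward-mode Jacobians are cheap here: theta has only 6 components.
    J_pred = jax.jacfwd(f)(theta)               # shape (24, 6)
    J_true = jax.jacfwd(cube_vertices)(theta)   # shape (24, 6)
    return value_err + lam * jnp.sum((J_pred - J_true) ** 2)


params = (jnp.zeros((32 * 32, 24)), jnp.zeros(24))
theta = jnp.array([0.1, -0.2, 0.3, 0.0, 0.0, 2.0])
grads = jax.grad(derivative_training_loss)(params, theta)
```

The second term is what distinguishes derivative training from plain regression: it forces the composition of network and renderer to match the ground-truth vertex map to first order in the pose parameters.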
Related papers
- WiNet: Wavelet-based Incremental Learning for Efficient Medical Image Registration [68.25711405944239]
Deep image registration has demonstrated exceptional accuracy and fast inference.
Recent advances have adopted either multiple cascades or pyramid architectures to estimate dense deformation fields in a coarse-to-fine manner.
We introduce a model-driven WiNet that incrementally estimates scale-wise wavelet coefficients for the displacement/velocity field across various scales.
arXiv Detail & Related papers (2024-07-18T11:51:01Z)
- Ambiguity in solving imaging inverse problems with deep learning based operators [0.0]
Large convolutional neural networks have been widely used as tools for image deblurring.
Image deblurring is mathematically modeled as an ill-posed inverse problem and its solution is difficult to approximate when noise affects the data.
In this paper, we propose strategies to improve the stability of deep-learning-based deblurring methods without losing too much accuracy.
arXiv Detail & Related papers (2023-05-31T12:07:08Z)
- A Comprehensive Study on Robustness of Image Classification Models: Benchmarking and Rethinking [54.89987482509155]
The robustness of deep neural networks is usually lacking under adversarial examples, common corruptions, and distribution shifts.
We establish a comprehensive robustness benchmark called ARES-Bench on the image classification task.
By designing the training settings accordingly, we achieve new state-of-the-art adversarial robustness.
arXiv Detail & Related papers (2023-02-28T04:26:20Z)
- THAT: Two Head Adversarial Training for Improving Robustness at Scale [126.06873298511425]
We propose Two Head Adversarial Training (THAT), a two-stream adversarial learning network that is designed to handle the large-scale many-class ImageNet dataset.
The proposed method trains a network with two heads and two loss functions; one to minimize feature-space domain shift between natural and adversarial images, and one to promote high classification accuracy.
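As a rough illustration of the two-head objective summarized above, here is a hedged JAX sketch; `backbone`, the parameter shapes, and the loss weight `alpha` are toy assumptions, and `x_adv` is a placeholder where a real attack (e.g. PGD) would generate the adversarial batch.

```python
import jax
import jax.numpy as jnp


def backbone(W, x):
    """Toy shared feature extractor (stand-in for the paper's backbone)."""
    return jax.nn.relu(x.reshape(x.shape[0], -1) @ W)


def two_head_loss(params, x_nat, x_adv, labels, alpha=1.0):
    """Cross-entropy head on adversarial images plus a head penalizing
    feature-space shift between natural and adversarial views."""
    f_nat = backbone(params['bb'], x_nat)
    f_adv = backbone(params['bb'], x_adv)
    # Head 1: promote classification accuracy on adversarial examples.
    logits = f_adv @ params['cls']
    one_hot = jax.nn.one_hot(labels, logits.shape[-1])
    ce = -jnp.mean(jnp.sum(jax.nn.log_softmax(logits) * one_hot, axis=-1))
    # Head 2: minimize natural-vs-adversarial domain shift in feature space.
    z_nat = f_nat @ params['feat']
    z_adv = f_adv @ params['feat']
    shift = jnp.mean(jnp.sum((z_nat - z_adv) ** 2, axis=-1))
    return ce + alpha * shift


params = {'bb': jnp.zeros((784, 128)),
          'cls': jnp.zeros((128, 10)),
          'feat': jnp.zeros((128, 32))}
x_nat = jnp.ones((8, 28, 28)) * 0.5
x_adv = x_nat + 0.03             # placeholder: a real attack would craft this
labels = jnp.zeros(8, dtype=jnp.int32)
grads = jax.grad(two_head_loss)(params, x_nat, x_adv, labels)
```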
arXiv Detail & Related papers (2021-03-25T05:32:38Z)
- Learning Neural Network Subspaces [74.44457651546728]
Recent observations have advanced our understanding of the neural network optimization landscape.
With a similar computational cost as training one model, we learn lines, curves, and simplexes of high-accuracy neural networks.
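A minimal JAX sketch of learning a line of networks, under toy assumptions (`loss_fn` is a stand-in for any training loss, and the paper's regularization that keeps the endpoints apart is omitted): each step samples a point on the line between two parameter sets and updates both endpoints through it.

```python
import jax
import jax.numpy as jnp


def loss_fn(theta, batch):
    """Toy regression loss; a stand-in for any standard training loss."""
    x, y = batch
    return jnp.mean((x @ theta['W'] + theta['b'] - y) ** 2)


def line_loss(endpoints, alpha, batch):
    """Evaluate the loss at a point on the line between two parameter sets."""
    theta0, theta1 = endpoints
    theta = jax.tree_util.tree_map(lambda a, b: (1 - alpha) * a + alpha * b,
                                   theta0, theta1)
    return loss_fn(theta, batch)


def train_step(endpoints, key, batch, lr=1e-2):
    alpha = jax.random.uniform(key)     # sample a random point on the line
    grads = jax.grad(line_loss)(endpoints, alpha, batch)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, endpoints, grads)


theta0 = {'W': jnp.zeros((3, 1)), 'b': jnp.zeros(1)}
theta1 = {'W': jnp.ones((3, 1)), 'b': jnp.ones(1)}
batch = (jnp.ones((16, 3)), jnp.zeros((16, 1)))
endpoints = train_step((theta0, theta1), jax.random.PRNGKey(0), batch)
```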
arXiv Detail & Related papers (2021-02-20T23:26:58Z) - NeuroDiff: Scalable Differential Verification of Neural Networks using
Fine-Grained Approximation [18.653663583989122]
NeuroDiff is a symbolic and fine-grained approximation technique that drastically increases the accuracy of differential verification.
Our results show that NeuroDiff is up to 1000X faster and 5X more accurate than the state-of-the-art tool.
arXiv Detail & Related papers (2020-09-21T15:00:25Z)
- Second Order Optimization for Adversarial Robustness and Interpretability [6.700873164609009]
We propose a novel regularizer which incorporates first and second order information via a quadratic approximation to the adversarial loss.
It is shown that using only a single iteration in our regularizer achieves stronger robustness than prior gradient and curvature regularization schemes.
It retains the interesting facet of adversarial training (AT) that networks learn features which are well-aligned with human perception.
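A minimal JAX sketch of the quadratic-approximation idea, not the paper's exact regularizer: the penalty below evaluates the first- and second-order terms of L(x + delta) - L(x) along a single normalized-gradient step, applying the Hessian only through a Hessian-vector product (`loss_fn`, `eps`, and `lam` are toy assumptions).

```python
import jax
import jax.numpy as jnp


def loss_fn(params, x, y):
    """Toy per-example classification loss on a linear model."""
    W, b = params
    return -jax.nn.log_softmax(x @ W + b)[y]


def quadratic_adv_penalty(params, x, y, eps=0.03):
    """First- and second-order terms of a quadratic approximation to the
    adversarial loss, using one gradient step and one Hessian-vector product."""
    loss_x = lambda xi: loss_fn(params, xi, y)
    g = jax.grad(loss_x)(x)
    delta = eps * g / (jnp.linalg.norm(g) + 1e-12)  # single-iteration direction
    # Hessian-vector product H @ delta via forward-over-reverse autodiff.
    hvp = jax.jvp(jax.grad(loss_x), (x,), (delta,))[1]
    return jnp.vdot(g, delta) + 0.5 * jnp.vdot(delta, hvp)


def regularized_loss(params, x, y, lam=1.0):
    return loss_fn(params, x, y) + lam * quadratic_adv_penalty(params, x, y)


key = jax.random.PRNGKey(0)
params = (0.01 * jax.random.normal(key, (784, 10)), jnp.zeros(10))
x, y = jnp.ones(784) * 0.1, 3
grads = jax.grad(regularized_loss)(params, x, y)
```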
arXiv Detail & Related papers (2020-09-10T15:05:14Z)
- Learning to Learn Parameterized Classification Networks for Scalable Input Images [76.44375136492827]
Convolutional Neural Networks (CNNs) do not exhibit predictable recognition behavior with respect to changes in input resolution.
We employ meta learners to generate convolutional weights of main networks for various input scales.
We further utilize knowledge distillation on the fly over model predictions based on different input resolutions.
arXiv Detail & Related papers (2020-07-13T04:27:25Z)
- Hidden Cost of Randomized Smoothing [72.93630656906599]
In this paper, we point out the side effects of current randomized smoothing.
Specifically, we articulate and prove two major points: 1) the decision boundaries of smoothed classifiers will shrink, resulting in disparity in class-wise accuracy; 2) applying noise augmentation in the training process does not necessarily resolve the shrinking issue due to the inconsistent learning objectives.
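For context, a minimal Monte Carlo sketch of the smoothed classifier whose decision boundaries the paper analyzes (assumptions: `base_classifier` is a toy stand-in, and the statistical certification step used in practice is omitted):

```python
import jax
import jax.numpy as jnp


def base_classifier(params, x):
    """Toy base classifier returning class logits (a stand-in)."""
    W, b = params
    return x.reshape(-1) @ W + b


def smoothed_predict(params, x, key, sigma=0.25, n=1000):
    """Monte Carlo estimate of the smoothed classifier: the majority class
    of the base classifier under Gaussian input noise."""
    noise = sigma * jax.random.normal(key, (n,) + x.shape)
    logits = jax.vmap(lambda e: base_classifier(params, x + e))(noise)
    votes = jnp.bincount(jnp.argmax(logits, axis=-1), length=logits.shape[-1])
    return jnp.argmax(votes)


key = jax.random.PRNGKey(0)
params = (0.01 * jax.random.normal(key, (784, 10)), jnp.zeros(10))
x = jnp.ones((28, 28)) * 0.5
pred = smoothed_predict(params, x, jax.random.PRNGKey(1))
```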
arXiv Detail & Related papers (2020-03-02T23:37:42Z)
- Semantic Robustness of Models of Source Code [44.08472936613909]
Deep neural networks are vulnerable to adversarial examples - small input perturbations that result in incorrect predictions.
We show how to perform adversarial training to learn models robust to such adversaries.
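A heavily hedged sketch of an adversarial-training objective over a discrete attack space; the paper's perturbations are semantics-preserving program transformations, which are only caricatured here as placeholder functions on a fixed-size program embedding.

```python
import jax
import jax.numpy as jnp


def loss_fn(params, x, y):
    """Toy loss on an embedded program representation (a stand-in)."""
    W, b = params
    return -jax.nn.log_softmax(x @ W + b)[y]


def worst_case_loss(params, x, y, transforms):
    """Adversarial training objective over a discrete candidate set of
    semantics-preserving transformations (e.g. variable renamings)."""
    losses = jnp.stack([loss_fn(params, t(x), y) for t in transforms])
    return jnp.max(losses)      # train against the strongest transformation


# Placeholder transforms acting on a fixed-size embedding of the program.
transforms = [lambda x: x,
              lambda x: jnp.roll(x, 1),
              lambda x: x + 0.01]
key = jax.random.PRNGKey(0)
params = (0.01 * jax.random.normal(key, (64, 2)), jnp.zeros(2))
x, y = jnp.ones(64) * 0.1, 1
grads = jax.grad(worst_case_loss)(params, x, y, transforms)
```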
arXiv Detail & Related papers (2020-02-07T23:26:17Z)
- ReluDiff: Differential Verification of Deep Neural Networks [8.601847909798165]
We develop a new method for differential verification of two closely related networks.
We exploit structural and behavioral similarities of the two networks to more accurately bound the difference between the output neurons of the two networks.
Our experiments show that, compared to state-of-the-art verification tools, our method can achieve orders-of-magnitude speedup.
arXiv Detail & Related papers (2020-01-10T20:47:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.