Homography Estimation with Convolutional Neural Networks Under
Conditions of Variance
- URL: http://arxiv.org/abs/2010.01041v2
- Date: Thu, 22 Oct 2020 16:05:37 GMT
- Title: Homography Estimation with Convolutional Neural Networks Under
Conditions of Variance
- Authors: David Niblick, Avinash Kak
- Abstract summary: We analyze the performance of two recently published methods that use Convolutional Neural Networks (CNNs) for homography estimation.
CNNs can be trained to be more robust against noise, but at a small cost to accuracy in the noiseless case.
We show that training a CNN to a specific magnitude of noise leads to a "Goldilocks Zone" with regard to the noise levels where that CNN performs best.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Planar homography estimation is foundational to many computer vision
problems, such as Simultaneous Localization and Mapping (SLAM) and Augmented
Reality (AR). However, conditions of high variance confound even the
state-of-the-art algorithms. In this report, we analyze the performance of two
recently published methods using Convolutional Neural Networks (CNNs) that are
meant to replace the more traditional feature-matching based approaches to the
estimation of homography. Our evaluation of the CNN based methods focuses
particularly on measuring the performance under conditions of significant
noise, illumination shift, and occlusion. We also measure the benefits of
training CNNs to varying degrees of noise. Additionally, we compare the effect
of using color images instead of grayscale images for inputs to CNNs. Finally,
we compare the results against baseline feature-matching based homography
estimation methods using SIFT, SURF, and ORB. We find that CNNs can be trained
to be more robust against noise, but at a small cost to accuracy in the
noiseless case. Additionally, CNNs perform significantly better in conditions
of extreme variance than their feature-matching based counterparts. With regard
to color inputs, we conclude that with no change in the CNN architecture to
take advantage of the additional information in the color planes, the
difference in performance using color inputs or grayscale inputs is negligible.
About the CNNs trained with noise-corrupted inputs, we show that training a CNN
to a specific magnitude of noise leads to a "Goldilocks Zone" with regard to
the noise levels where that CNN performs best.
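As a point of reference for the feature-matching baselines mentioned in the abstract, the sketch below shows a minimal ORB-based homography estimation pipeline (the SIFT and SURF baselines are analogous), together with a simple Gaussian-noise corruption of the kind used to stress such estimators. This is an illustrative assumption built on OpenCV and NumPy, not the authors' evaluation code; the function names, the noise magnitude sigma, and the RANSAC reprojection threshold are placeholders.

    # Hedged sketch: a feature-matching homography baseline (ORB shown; SIFT/SURF
    # are analogous) plus a Gaussian-noise corruption to mimic evaluation under
    # noise. Illustrative only; not the paper's evaluation code.
    import cv2
    import numpy as np

    def add_gaussian_noise(gray: np.ndarray, sigma: float) -> np.ndarray:
        """Corrupt a grayscale image with zero-mean Gaussian noise of std sigma."""
        noisy = gray.astype(np.float32) + np.random.normal(0.0, sigma, gray.shape)
        return np.clip(noisy, 0, 255).astype(np.uint8)

    def estimate_homography_orb(img_a: np.ndarray, img_b: np.ndarray):
        """Estimate the 3x3 homography mapping img_a onto img_b via ORB + RANSAC."""
        orb = cv2.ORB_create(nfeatures=1000)
        kp_a, des_a = orb.detectAndCompute(img_a, None)
        kp_b, des_b = orb.detectAndCompute(img_b, None)
        if des_a is None or des_b is None:
            return None  # too few features (common under heavy noise or occlusion)

        # Hamming distance is appropriate for ORB's binary descriptors.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
        if len(matches) < 4:
            return None  # a homography needs at least 4 correspondences

        src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return H

    # Example usage (paths are placeholders):
    # a = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)
    # b = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)
    # H = estimate_homography_orb(add_gaussian_noise(a, sigma=25.0), b)

Under heavy noise or occlusion the descriptor-matching step is typically where such a pipeline degrades, which is consistent with the abstract's finding that CNN-based estimators handle extreme variance better than their feature-matching counterparts.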
Related papers
- Decoupled Mixup for Generalized Visual Recognition [71.13734761715472]
We propose a novel "Decoupled-Mixup" method to train CNN models for visual recognition.
Our method decouples each image into discriminative and noise-prone regions, and then heterogeneously combines these regions to train CNN models.
Experimental results show the high generalization performance of our method on test data composed of unseen contexts.
arXiv Detail & Related papers (2022-10-26T15:21:39Z)
- A study on the deviations in performance of FNNs and CNNs in the realm of grayscale adversarial images [0.3437656066916039]
We show that neural networks are prone to reduced accuracy when classifying images corrupted by noise perturbations.
In our study, we used the handwritten-digit dataset MNIST with the following architectures: FNNs with 1 and 2 hidden layers, and CNNs with 3, 4, 6, and 8 convolutions, and analyzed their accuracies.
FNNs stand out in that, irrespective of the noise intensity, they maintain a classification accuracy of more than 85%.
arXiv Detail & Related papers (2022-09-17T06:25:14Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- The Mind's Eye: Visualizing Class-Agnostic Features of CNNs [92.39082696657874]
We propose an approach to visually interpret CNN features given a set of images by creating corresponding images that depict the most informative features of a specific layer.
Our method uses a dual-objective activation and distance loss, without requiring a generator network or modifications to the original model.
arXiv Detail & Related papers (2021-01-29T07:46:39Z)
- Deep Universal Blind Image Denoising [26.77629755630694]
Deep convolutional neural networks (CNNs) have shown great success in image denoising by incorporating large-scale synthetic datasets.
We present a CNN-based method that leverages the advantages of both approaches from a Bayesian perspective.
arXiv Detail & Related papers (2021-01-18T11:49:21Z)
- Color Channel Perturbation Attacks for Fooling Convolutional Neural Networks and A Defense Against Such Attacks [16.431689066281265]
Convolutional Neural Networks (CNNs) have emerged as a powerful data-dependent hierarchical feature extraction method.
It is observed that the network overfits the training samples very easily.
We propose a Color Channel Perturbation (CCP) attack to fool the CNNs.
arXiv Detail & Related papers (2020-12-20T11:35:29Z)
- Deep learning for gravitational-wave data analysis: A resampling white-box approach [62.997667081978825]
We apply Convolutional Neural Networks (CNNs) to detect gravitational wave (GW) signals of compact binary coalescences, using single-interferometer data from LIGO detectors.
CNNs were quite precise at detecting noise but not sensitive enough to recall GW signals, meaning that CNNs are better suited to noise reduction than to generating GW triggers.
arXiv Detail & Related papers (2020-09-09T03:28:57Z)
- Shape Defense Against Adversarial Attacks [47.64219291655723]
Humans rely heavily on shape information to recognize objects. Conversely, convolutional neural networks (CNNs) are biased more towards texture.
Here, we explore how shape bias can be incorporated into CNNs to improve their robustness.
Two algorithms are proposed, based on the observation that edges are invariant to moderate imperceptible perturbations.
arXiv Detail & Related papers (2020-08-31T03:23:59Z)
- Approximation and Non-parametric Estimation of ResNet-type Convolutional Neural Networks [52.972605601174955]
We show that a ResNet-type CNN can attain minimax-optimal error rates in important function classes.
We derive approximation and estimation error rates for the aforementioned type of CNNs for the Barron and Hölder classes.
arXiv Detail & Related papers (2019-03-24T19:42:39Z)