Cascade Network with Guided Loss and Hybrid Attention for Two-view
Geometry
- URL: http://arxiv.org/abs/2007.05706v2
- Date: Thu, 16 Jul 2020 03:03:22 GMT
- Title: Cascade Network with Guided Loss and Hybrid Attention for Two-view
Geometry
- Authors: Zhi Chen and Fan Yang and Wenbing Tao
- Abstract summary: We propose a Guided Loss that establishes a direct negative correlation between the loss and the Fn-measure.
We then propose a hybrid attention block for feature extraction.
Experiments show that our network achieves state-of-the-art performance on benchmark datasets.
- Score: 32.52184271700281
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we design a high-performance network for two-view
geometry. We first propose a Guided Loss and theoretically establish a direct
negative correlation between the loss and the Fn-measure by dynamically
adjusting the weights of the positive and negative classes during training, so
that the network is always trained in the direction of an increasing
Fn-measure. In this way, the network retains the advantages of the
cross-entropy loss while maximizing the Fn-measure. We then propose a hybrid
attention block for feature extraction, which integrates Bayesian attentive
context normalization (BACN) and channel-wise attention (CA). BACN mines prior
information to better exploit the global context, and CA captures complex
channel context to enhance the channel awareness of the network. Finally,
based on our Guided Loss and hybrid attention block, a cascade network is
designed to gradually refine the result for superior performance. Experiments
show that our network achieves state-of-the-art performance on benchmark
datasets.
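To make the Guided Loss idea concrete, below is a minimal PyTorch sketch of a dynamically weighted binary cross-entropy in which the per-class weights are recomputed from the batch's current precision/recall statistics, so that reducing the loss tends to raise the Fn-measure. The weighting heuristic, the function name guided_bce_loss, and the parameter beta are illustrative assumptions; the paper derives its exact weight schedule from the Fn-measure itself.

```python
import torch
import torch.nn.functional as F

def guided_bce_loss(logits, labels, beta=1.0, eps=1e-8):
    """Weighted BCE whose class weights are set from the current batch's
    F-measure statistics. A sketch of the Guided Loss idea only; the
    paper's exact weighting scheme may differ."""
    probs = torch.sigmoid(logits)
    preds = (probs > 0.5).float()

    # Confusion counts on the current batch (detached: the weights are
    # treated as constants and not differentiated through).
    with torch.no_grad():
        tp = (preds * labels).sum()
        fp = (preds * (1 - labels)).sum()
        fn = ((1 - preds) * labels).sum()
        precision = tp / (tp + fp + eps)
        recall = tp / (tp + fn + eps)
        f_beta = ((1 + beta ** 2) * precision * recall
                  / (beta ** 2 * precision + recall + eps))

        # Heuristic: up-weight whichever error type currently hurts the
        # F-measure more (false negatives vs. false positives).
        pos_w = (1 - recall) + eps     # penalize missed inliers
        neg_w = (1 - precision) + eps  # penalize false inliers
        norm = pos_w + neg_w
        pos_w, neg_w = pos_w / norm, neg_w / norm

    weights = labels * pos_w + (1 - labels) * neg_w
    loss = F.binary_cross_entropy_with_logits(logits, labels, weight=weights)
    return loss, f_beta
```

The returned f_beta is useful for logging; only the loss is backpropagated, with the class weights refreshed every batch.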
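The channel-wise attention (CA) half of the hybrid attention block can be sketched as a squeeze-and-excitation style gate over the channels of the per-correspondence feature map. This is a generic sketch under that assumption; BACN, the Bayesian attentive context normalization that the block integrates, is specific to the paper and not reproduced here.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel-wise attention (CA).
    A generic sketch; the paper's CA block and its integration
    with BACN may differ in detail."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # x: (batch, channels, num_correspondences) features over the
        # putative correspondence set.
        squeeze = x.mean(dim=-1)        # global context per channel
        scale = self.fc(squeeze)        # per-channel gate in (0, 1)
        return x * scale.unsqueeze(-1)  # re-weight the channels
```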
Related papers
- Fixing the NTK: From Neural Network Linearizations to Exact Convex
Programs [63.768739279562105]
We show that for a particular choice of mask weights that do not depend on the learning targets, this kernel is equivalent to the NTK of the gated ReLU network on the training data.
A consequence of this lack of dependence on the targets is that the NTK cannot perform better than the optimal MKL kernel on the training set.
arXiv Detail & Related papers (2023-09-26T17:42:52Z) - Optimization Guarantees of Unfolded ISTA and ADMM Networks With Smooth
Soft-Thresholding [57.71603937699949]
We study optimization guarantees, i.e., achieving near-zero training loss as the number of learning epochs increases.
We show that the threshold on the number of training samples increases with the network width.
arXiv Detail & Related papers (2023-09-12T13:03:47Z) - Searching for Network Width with Bilaterally Coupled Network [75.43658047510334]
We introduce a new supernet called Bilaterally Coupled Network (BCNet) to address this issue.
In BCNet, each channel is fairly trained and responsible for the same amount of network widths, thus each network width can be evaluated more accurately.
We propose the first open-source width benchmark on macro structures, named Channel-Bench-Macro, for better comparison of width search algorithms.
arXiv Detail & Related papers (2022-03-25T15:32:46Z) - Image Superresolution using Scale-Recurrent Dense Network [30.75380029218373]
Recent advances in the design of convolutional neural networks (CNNs) have yielded significant improvements in the performance of image super-resolution (SR).
We propose a scale-recurrent SR architecture built upon units containing a series of dense connections within a residual block (Residual Dense Blocks (RDBs)).
Our scale-recurrent design delivers competitive performance for higher scale factors while being parametrically more efficient compared to current state-of-the-art approaches.
arXiv Detail & Related papers (2022-01-28T09:18:43Z) - Graph-based Algorithm Unfolding for Energy-aware Power Allocation in
Wireless Networks [27.600081147252155]
We develop a novel graph-based trainable framework to maximize energy efficiency in wireless communication networks.
We show the permutation equivariance of the proposed architecture, a desirable property for models of wireless network data.
Results demonstrate its generalizability across different network topologies.
arXiv Detail & Related papers (2022-01-27T20:23:24Z) - The Principles of Deep Learning Theory [19.33681537640272]
This book develops an effective theory approach to understanding deep neural networks of practical relevance.
We explain how these effectively-deep networks learn nontrivial representations from training.
We show that the depth-to-width ratio governs the effective model complexity of the ensemble of trained networks.
arXiv Detail & Related papers (2021-06-18T15:00:00Z) - BCNet: Searching for Network Width with Bilaterally Coupled Network [56.14248440683152]
We introduce a new supernet called Bilaterally Coupled Network (BCNet) to address this issue.
In BCNet, each channel is fairly trained and responsible for the same amount of network widths, thus each network width can be evaluated more accurately.
Our method achieves state-of-the-art or competing performance over other baseline methods.
arXiv Detail & Related papers (2021-05-21T18:54:03Z) - On Topology Optimization and Routing in Integrated Access and Backhaul
Networks: A Genetic Algorithm-based Approach [70.85399600288737]
We study the problem of topology optimization and routing in IAB networks.
We develop efficient genetic algorithm-based schemes for both IAB node placement and non-IAB backhaul link distribution.
We discuss the main challenges for enabling mesh-based IAB networks.
arXiv Detail & Related papers (2021-02-14T21:52:05Z) - Cascade Network with Guided Loss and Hybrid Attention for Finding Good
Correspondences [33.65360396430535]
Given a putative correspondence set of an image pair, we propose a neural network that finds correct correspondences via a binary classifier.
We propose a new Guided Loss that can directly use the evaluation criterion (Fn-measure) as guidance to dynamically adjust the objective function.
We then propose a hybrid attention block for feature extraction, which integrates Bayesian attentive context normalization (BACN) and channel-wise attention (CA).
arXiv Detail & Related papers (2021-01-31T08:33:20Z) - Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-06T15:51:00Z) - A Mean-field Analysis of Deep ResNet and Beyond: Towards Provable
Optimization Via Overparameterization From Depth [19.866928507243617]
Training deep neural networks with stochastic gradient descent (SGD) can often achieve zero training loss on real-world tasks, although the optimization landscape is known to be highly non-convex.
We propose a new continuum limit of deep residual networks, which enjoys a good landscape in the sense that every local minimizer is global.
arXiv Detail & Related papers (2020-03-11T20:14:47Z)