Cascade Network with Guided Loss and Hybrid Attention for Finding Good
Correspondences
- URL: http://arxiv.org/abs/2102.00411v1
- Date: Sun, 31 Jan 2021 08:33:20 GMT
- Title: Cascade Network with Guided Loss and Hybrid Attention for Finding Good
Correspondences
- Authors: Zhi Chen, Fan Yang, Wenbing Tao
- Abstract summary: Given a putative correspondence set of an image pair, we propose a neural network which finds correct correspondences with a binary-class classifier.
We propose a new Guided Loss that can directly use the evaluation criterion (Fn-measure) as guidance to dynamically adjust the objective function.
We then propose a hybrid attention block to extract features, which integrates Bayesian attentive context normalization (BACN) and channel-wise attention (CA).
- Score: 33.65360396430535
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Finding good correspondences is a critical prerequisite in many feature-based
tasks. Given a putative correspondence set of an image pair, we propose a
neural network which finds correct correspondences with a binary-class
classifier and estimates the relative pose from the classified correspondences.
First, we analyze how, due to the imbalance between the numbers of correct and
wrong correspondences, the loss function has a great impact on the
classification results. Thus, we propose a new Guided Loss that can directly
use the evaluation criterion (Fn-measure) as guidance to dynamically adjust the
objective function during training. We theoretically prove the perfect negative
correlation between the Guided Loss and the Fn-measure, so that the network is
always trained in the direction that increases the Fn-measure. We then propose
a hybrid attention block to extract features, which integrates Bayesian
attentive context normalization (BACN) and channel-wise attention (CA). BACN
mines prior information to better exploit the global context, while CA captures
complex channel context to enhance the channel awareness of the network.
Finally, based on our Guided Loss and hybrid attention block, a cascade network
is designed to gradually optimize the result for superior performance.
Experiments show that our network achieves state-of-the-art performance on
benchmark datasets. Our code will be available
at https://github.com/wenbingtao/GLHA.
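The abstract states only that the Guided Loss uses the Fn-measure as guidance and is perfectly negatively correlated with it, without giving the formulation. Below is a minimal PyTorch-style sketch of one way to realize that idea: a weighted binary cross-entropy whose class weights are recomputed each step from the current soft precision and recall, so that lowering the loss tends to raise the Fn-measure. The function name guided_bce_loss, the soft precision/recall weighting, and the beta parameter are illustrative assumptions, not the paper's actual derivation.

```python
import torch
import torch.nn.functional as F

def guided_bce_loss(logits, labels, beta=1.0, eps=1e-8):
    """Sketch of an Fn-measure-guided weighted BCE (not the paper's exact loss).

    logits: (N,) raw scores for each putative correspondence
    labels: (N,) 1 for a correct correspondence (inlier), 0 for a wrong one
    beta:   the 'n' in the Fn-measure (beta=1 gives the F1-measure)
    """
    probs = torch.sigmoid(logits)
    pos = labels.float()
    neg = 1.0 - pos

    # Soft confusion counts from the current predictions.
    tp = (probs * pos).sum()
    fp = (probs * neg).sum()
    fn = ((1.0 - probs) * pos).sum()

    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f_beta = (1 + beta**2) * precision * recall / (beta**2 * precision + recall + eps)

    # Re-weight the two classes: when recall lags, false negatives are
    # penalised more; when precision lags, false positives are penalised more.
    w_pos = (1.0 - recall).detach()
    w_neg = (1.0 - precision).detach()
    weights = w_pos * pos + w_neg * neg

    bce = F.binary_cross_entropy_with_logits(logits, pos, weight=weights)
    return bce, f_beta.detach()
```

In a training loop the returned f_beta can be monitored alongside the loss to check that the two indeed move in opposite directions, which is the behaviour the Guided Loss is designed to guarantee.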
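Similarly, the hybrid attention block is described only as combining Bayesian attentive context normalization (BACN) with channel-wise attention (CA). The sketch below approximates it with plain context normalization over the correspondence dimension (standing in for BACN) plus squeeze-and-excitation style channel attention; the class name, layer layout, and residual connection are assumptions rather than the released GLHA code.

```python
import torch
import torch.nn as nn

class HybridAttentionBlock(nn.Module):
    """Simplified hybrid attention block operating on (B, C, N) features,
    where N is the number of putative correspondences."""

    def __init__(self, channels, reduction=4):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size=1)
        self.bn = nn.BatchNorm1d(channels)
        # Channel-wise attention (squeeze-and-excitation style).
        self.ca = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    @staticmethod
    def context_norm(x, eps=1e-5):
        # Normalize each channel across all correspondences (global context);
        # a plain stand-in for the Bayesian attentive variant (BACN).
        mean = x.mean(dim=2, keepdim=True)
        std = x.std(dim=2, keepdim=True)
        return (x - mean) / (std + eps)

    def forward(self, x):                  # x: (B, C, N)
        out = self.conv(x)
        out = self.context_norm(out)       # global context
        out = torch.relu(self.bn(out))
        scale = self.ca(out.mean(dim=2))   # (B, C) channel descriptors
        out = out * scale.unsqueeze(2)     # channel-wise re-weighting (CA)
        return out + x                     # residual connection
```

Stacking several such blocks with intermediate classification stages would roughly correspond to the cascade structure mentioned in the abstract, though the exact staging is not specified there.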
Related papers
- Universal Consistency of Wide and Deep ReLU Neural Networks and Minimax
Optimal Convergence Rates for Kolmogorov-Donoho Optimal Function Classes [7.433327915285969]
We prove the universal consistency of wide and deep ReLU neural network classifiers trained on the logistic loss.
We also give sufficient conditions for a class of probability measures for which classifiers based on neural networks achieve minimax optimal rates of convergence.
arXiv Detail & Related papers (2024-01-08T23:54:46Z) - Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z) - Learning Feature Matching via Matchable Keypoint-Assisted Graph Neural
Network [52.29330138835208]
Accurately matching local features between a pair of images is a challenging computer vision task.
Previous studies typically use attention based graph neural networks (GNNs) with fully-connected graphs over keypoints within/across images.
We propose MaKeGNN, a sparse attention-based GNN architecture which bypasses non-repeatable keypoints and leverages matchable ones to guide message passing.
arXiv Detail & Related papers (2023-07-04T02:50:44Z) - Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z) - Contrastive variational information bottleneck for aspect-based
sentiment analysis [36.83876224466177]
We propose to reduce spurious correlations for aspect-based sentiment analysis (ABSA) via a novel Contrastive Variational Information Bottleneck framework (called CVIB)
The proposed CVIB framework is composed of an original network and a self-pruned network, and these two networks are optimized simultaneously via contrastive learning.
Our approach achieves better performance than the strong competitors in terms of overall prediction performance, robustness, and generalization.
arXiv Detail & Related papers (2023-03-06T02:52:37Z) - Bayesian Layer Graph Convolutional Network for Hyperspectral Image
Classification [24.91896527342631]
Graph convolutional network (GCN) based models have shown impressive performance.
Deep learning frameworks based on point estimation suffer from low generalization and an inability to quantify the uncertainty of the classification results.
In this paper, we propose a Bayesian layer with Bayesian idea as an insertion layer into point estimation based neural networks.
A Generative Adversarial Network (GAN) is built to solve the sample imbalance problem of the HSI dataset.
arXiv Detail & Related papers (2022-11-14T12:56:56Z) - Boundary Attributions Provide Normal (Vector) Explanations [27.20904776964045]
Boundary Attribution (BA) is proposed as a new explanation method.
BA involves computing normal vectors of the local decision boundaries for the target input.
We prove two theorems for ReLU networks: BA of randomized smoothed or robustly trained networks is much closer to non-boundary attribution methods than it is for standard networks.
arXiv Detail & Related papers (2021-03-20T22:36:39Z) - S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural
Networks via Guided Distribution Calibration [74.5509794733707]
We present a novel guided learning paradigm that distills binary networks from real-valued networks on the final prediction distribution.
Our proposed method can boost the simple contrastive learning baseline by an absolute gain of 5.515% on BNNs.
Our method achieves substantial improvement over the simple contrastive learning baseline, and is even comparable to many mainstream supervised BNN methods.
arXiv Detail & Related papers (2021-02-17T18:59:28Z) - Cascade Network with Guided Loss and Hybrid Attention for Two-view
Geometry [32.52184271700281]
We propose a Guided Loss to establish the direct negative correlation between the loss and Fn-measure.
We then propose a hybrid attention block to extract features.
Experiments show that our network achieves state-of-the-art performance on benchmark datasets.
arXiv Detail & Related papers (2020-07-11T07:44:04Z) - Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-06T15:51:00Z) - Learning to Hash with Graph Neural Networks for Recommender Systems [103.82479899868191]
Graph representation learning has attracted much attention in supporting high quality candidate search at scale.
Despite its effectiveness in learning embedding vectors for objects in the user-item interaction network, the computational costs to infer users' preferences in continuous embedding space are tremendous.
We propose a simple yet effective discrete representation learning framework to jointly learn continuous and discrete codes.
arXiv Detail & Related papers (2020-03-04T06:59:56Z)