Gleo-Det: Deep Convolution Feature-Guided Detector with Local Entropy
Optimization for Salient Points
- URL: http://arxiv.org/abs/2204.12884v1
- Date: Wed, 27 Apr 2022 12:40:21 GMT
- Title: Gleo-Det: Deep Convolution Feature-Guided Detector with Local Entropy
Optimization for Salient Points
- Authors: Chao Li, Yanan You, Wenli Zhou
- Abstract summary: We propose to impose a fine-grained constraint based on the repeatability requirement and a coarse constraint guided by deep convolution features.
With the guidance of the convolution features, we define the cost function from both the positive and negative sides.
- Score: 5.955667705173262
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Feature detection is an important procedure for image matching.
Unsupervised feature detection methods have been the most widely studied
approaches recently, including those that define loss functions from the
repeatability requirement and those that use descriptor matching to drive the
optimization of the pipeline. For the former type, the mean square error (MSE)
that is usually used cannot provide a strong constraint for training and makes
the model prone to collapse into a degenerate solution. For the latter, the
down-sampling operation and the expansion of receptive fields cause details of
the local descriptors to be lost, so the constraint is not fine enough.
Considering the issues above, we propose to combine both ideas, which involves
three aspects. 1) We impose a fine-grained constraint based on the
repeatability requirement and a coarse constraint guided by deep convolution
features. 2) To address the limitation of optimizing with MSE, entropy-based
cost functions, both soft cross-entropy and self-information, are utilized.
3) With the guidance of the convolution features, we define the cost function
from both the positive and negative sides. Finally, we study the effect of
each proposed modification, and experiments demonstrate that our method
achieves competitive results compared with state-of-the-art approaches.
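As a rough illustration of the entropy-based costs described above, the sketch below shows one way a soft cross-entropy term (driven by a repeatability target) and self-information terms (steered by positive/negative guidance from deep convolution features) could be combined for a detector score map. The tensor names, masks, and weights are assumptions for illustration only, not the paper's exact formulation.

```python
# Illustrative sketch only: a plausible combination of soft cross-entropy and
# self-information terms for a keypoint score map, with positive/negative
# guidance masks. Names and weights are assumptions, not the authors' loss.
import torch

def entropy_guided_loss(score_map, soft_target, pos_mask, neg_mask,
                        w_ce=1.0, w_si=0.1, eps=1e-6):
    """score_map:   (B, 1, H, W) detector probabilities in (0, 1).
    soft_target:    (B, 1, H, W) soft repeatability target, e.g. the score map
                    of a warped view resampled into this view's frame.
    pos_mask/neg_mask: (B, 1, H, W) binary masks marking locations that the
                    deep convolution features indicate should / should not
                    respond (coarse guidance)."""
    p = score_map.clamp(eps, 1.0 - eps)
    t = soft_target.clamp(0.0, 1.0)

    # Soft cross-entropy against the repeatability target (fine constraint).
    ce = -(t * p.log() + (1.0 - t) * (1.0 - p).log()).mean()

    # Self-information terms steered by the feature guidance (coarse constraint):
    # push guided positives toward confident responses and negatives away.
    si_pos = -(p.log() * pos_mask).sum() / pos_mask.sum().clamp(min=1.0)
    si_neg = -((1.0 - p).log() * neg_mask).sum() / neg_mask.sum().clamp(min=1.0)

    return w_ce * ce + w_si * (si_pos + si_neg)
```

In this reading, the cross-entropy term supplies the fine, per-location repeatability constraint, while the masked self-information terms act as the coarser, feature-guided push toward (or away from) confident responses.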
Related papers
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
arXiv Detail & Related papers (2024-11-21T10:26:17Z) - OIL-AD: An Anomaly Detection Framework for Sequential Decision Sequences [16.828732283348817]
We propose an unsupervised method named Offline Learning based Anomaly Detection (OIL-AD)
OIL-AD detects anomalies in decision-making sequences using two extracted behaviour features: action optimality and sequential association.
Our experiments show that OIL-AD can achieve outstanding online anomaly detection performance with up to 34.8% improvement in F1 score over comparable baselines.
arXiv Detail & Related papers (2024-02-07T04:06:53Z) - Understanding and Constructing Latent Modality Structures in Multi-modal
Representation Learning [53.68371566336254]
We argue that the key to better performance lies in meaningful latent modality structures instead of perfect modality alignment.
Specifically, we design 1) a deep feature separation loss for intra-modality regularization; 2) a Brownian-bridge loss for inter-modality regularization; and 3) a geometric consistency loss for both intra- and inter-modality regularization.
arXiv Detail & Related papers (2023-03-10T14:38:49Z) - Fine-grained Retrieval Prompt Tuning [149.9071858259279]
Fine-grained Retrieval Prompt Tuning steers a frozen pre-trained model to perform the fine-grained retrieval task from the perspectives of sample prompt and feature adaptation.
Our FRPT with fewer learnable parameters achieves the state-of-the-art performance on three widely-used fine-grained datasets.
arXiv Detail & Related papers (2022-07-29T04:10:04Z) - Determinantal Point Process Likelihoods for Sequential Recommendation [12.206748373325972]
We propose two new loss functions based on the Determinantal Point Process (DPP) likelihood, which can be adaptively applied to estimate the subsequent item or items (a generic sketch of the DPP log-likelihood appears after this list).
Experimental results using the proposed loss functions on three real-world datasets show marked improvements over state-of-the-art sequential recommendation methods in both quality and diversity metrics.
arXiv Detail & Related papers (2022-04-25T11:20:10Z) - Efficient Neural Network Analysis with Sum-of-Infeasibilities [64.31536828511021]
Inspired by sum-of-infeasibilities methods in convex optimization, we propose a novel procedure for analyzing verification queries on networks with extensive branching functions.
An extension to a canonical case-analysis-based complete search procedure can be achieved by replacing the convex procedure executed at each search state with DeepSoI.
arXiv Detail & Related papers (2022-03-19T15:05:09Z) - Causality-based Counterfactual Explanation for Classification Models [11.108866104714627]
We propose a prototype-based counterfactual explanation framework (ProCE)
ProCE is capable of preserving the causal relationship underlying the features of the counterfactual data.
In addition, we design a novel gradient-free optimization based on the multi-objective genetic algorithm that generates the counterfactual explanations.
arXiv Detail & Related papers (2021-05-03T09:25:59Z) - Global Optimization of Objective Functions Represented by ReLU Networks [77.55969359556032]
Neural networks can learn complex, non-adversarial functions, and it is challenging to guarantee their correct behavior in safety-critical contexts.
Many approaches exist to find failures in networks (e.g., adversarial examples), but these cannot guarantee the absence of failures.
We propose an approach that integrates the optimization process into the verification procedure, achieving better performance than the naive approach.
arXiv Detail & Related papers (2020-10-07T08:19:48Z) - Adaptive Sampling for Best Policy Identification in Markov Decision
Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z) - Extracting Optimal Solution Manifolds using Constrained Neural
Optimization [6.800113407368289]
Constrained optimization solution algorithms are restricted to point-based solutions.
We present an approach for extracting optimal solution sets as approximate manifolds, where unmodified, non-informed constraints are defined.
arXiv Detail & Related papers (2020-09-13T15:37:44Z)
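For the Determinantal Point Process (DPP) entry above, the sketch below makes the standard DPP log-likelihood concrete: log P(S) = log det(L_S) - log det(L + I) for an observed item subset S under a kernel L. It illustrates only the generic likelihood, not the adaptive loss functions proposed in that paper, and the toy kernel construction is a placeholder assumption.

```python
# Generic DPP log-likelihood sketch (standard formula), not the adaptive
# loss functions proposed in the DPP sequential-recommendation paper above.
import torch

def dpp_log_likelihood(L, subset_idx):
    """L:          (N, N) positive semi-definite DPP kernel over the catalog
                   (how it is built from item embeddings is an assumption).
    subset_idx:    1-D LongTensor of item indices observed together.
    Returns log P(S) = log det(L_S) - log det(L + I)."""
    L_S = L[subset_idx][:, subset_idx]                    # principal submatrix
    eye = torch.eye(L.shape[0], dtype=L.dtype, device=L.device)
    return torch.logdet(L_S) - torch.logdet(L + eye)

# Example with a toy kernel built from random item embeddings.
emb = torch.randn(100, 16)
L = emb @ emb.t() + 1e-3 * torch.eye(100)                 # PSD kernel
print(dpp_log_likelihood(L, torch.tensor([3, 17, 42])))
```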
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.