Zero-Assignment Constraint for Graph Matching with Outliers
- URL: http://arxiv.org/abs/2003.11928v1
- Date: Thu, 26 Mar 2020 14:11:10 GMT
- Title: Zero-Assignment Constraint for Graph Matching with Outliers
- Authors: Fudong Wang and Nan Xue and Jin-Gang Yu and Gui-Song Xia
- Abstract summary: We present the zero-assignment constraint (ZAC) for approaching the graph matching problem in the presence of outliers.
The underlying idea is to suppress the matchings of outliers by assigning zero-valued vectors to the potential outliers in the obtained optimal correspondence matrix.
We design an efficient outlier-robust algorithm to significantly reduce the incorrect or redundant matchings caused by numerous outliers.
- Score: 40.02444837257561
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph matching (GM), as a longstanding problem in computer vision and pattern
recognition, still suffers from numerous cluttered outliers in practical
applications. To address this issue, we present the zero-assignment constraint
(ZAC) for approaching the graph matching problem in the presence of outliers.
The underlying idea is to suppress the matchings of outliers by assigning
zero-valued vectors to the potential outliers in the obtained optimal
correspondence matrix. We provide an elaborate theoretical analysis of the
problem, i.e., GM with ZAC, and show that the GM problems with and without
outliers are intrinsically different, which enables us to put forward a
sufficient condition for constructing a valid and reasonable objective function.
Consequently, we design an efficient outlier-robust algorithm to significantly
reduce the incorrect or redundant matchings caused by numerous outliers.
Extensive experiments demonstrate that our method achieves state-of-the-art
performance in terms of accuracy and efficiency, especially in the presence of
numerous outliers.
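The abstract's core idea, rows of the correspondence matrix forced to zero for outliers, can be illustrated with a small sketch. This is not the authors' algorithm (their method solves a constrained optimization problem); it is a hypothetical greedy assignment, with the `threshold` parameter and function name invented for illustration, that only shows what a zero-assignment row looks like.

```python
import numpy as np

def zero_assignment_matrix(scores, threshold):
    """Greedy sketch: assign each row (node of graph A) to its best-scoring
    column (node of graph B) if the score exceeds `threshold`; otherwise
    leave the row all-zero, marking that node as an outlier."""
    n1, n2 = scores.shape
    X = np.zeros((n1, n2))
    for i in range(n1):
        j = np.argmax(scores[i])
        if scores[i, j] >= threshold:
            X[i, j] = 1.0
    return X

# Toy affinity scores: row 1 is weak everywhere, so it gets a zero row.
scores = np.array([[0.90, 0.10, 0.00],
                   [0.20, 0.10, 0.15],
                   [0.05, 0.80, 0.10]])
X = zero_assignment_matrix(scores, threshold=0.5)
```

Note that a greedy pass like this can assign two rows to the same column; the paper's formulation instead enforces the assignment constraints jointly while optimizing the matching objective.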
Related papers
- A Contrastive Variational Graph Auto-Encoder for Node Clustering [10.52321770126932]
State-of-the-art clustering methods have numerous challenges.
Existing VGAEs do not account for the discrepancy between the inference and generative models.
Our solution has two mechanisms to control the trade-off between Feature Randomness and Feature Drift.
arXiv Detail & Related papers (2023-12-28T05:07:57Z)
- Best Arm Identification with Fixed Budget: A Large Deviation Perspective [54.305323903582845]
We present sred, a truly adaptive algorithm that can reject arms at any round based on the observed empirical gaps between the rewards of the various arms.
arXiv Detail & Related papers (2023-12-19T13:17:43Z)
- Support Vector Machines with the Hard-Margin Loss: Optimal Training via Combinatorial Benders' Cuts [8.281391209717105]
We show how to train the hard-margin SVM model to global optimality.
We introduce an iterative sampling and decomposition algorithm that solves the problem.
arXiv Detail & Related papers (2022-07-15T18:21:51Z) - AdAUC: End-to-end Adversarial AUC Optimization Against Long-tail
Problems [102.95119281306893]
We present an early trial to explore adversarial training methods to optimize AUC.
We reformulate the AUC optimization problem as a saddle point problem, where the objective becomes an instance-wise function.
Our analysis differs from the existing studies since the algorithm is asked to generate adversarial examples by calculating the gradient of a min-max problem.
arXiv Detail & Related papers (2022-06-24T09:13:39Z) - Multi-task Learning of Order-Consistent Causal Graphs [59.9575145128345]
We consider the problem of discovering $K$ related Gaussian directed acyclic graphs (DAGs).
Under a multi-task learning setting, we propose an $l_1/l_2$-regularized maximum likelihood estimator (MLE) for learning $K$ linear structural equation models.
We theoretically show that the joint estimator, by leveraging data across related tasks, can achieve a better sample complexity for recovering the causal order.
arXiv Detail & Related papers (2021-11-03T22:10:18Z) - Generalization of Neural Combinatorial Solvers Through the Lens of
Adversarial Robustness [68.97830259849086]
Most datasets only capture a simpler subproblem and likely suffer from spurious features.
We study adversarial robustness - a local generalization property - to reveal hard, model-specific instances and spurious features.
Unlike in other applications, where perturbation models are designed around subjective notions of imperceptibility, our perturbation models are efficient and sound.
Surprisingly, with such perturbations, a sufficiently expressive neural solver does not suffer from the limitations of the accuracy-robustness trade-off common in supervised learning.
arXiv Detail & Related papers (2021-10-21T07:28:11Z) - ROBIN: a Graph-Theoretic Approach to Reject Outliers in Robust
Estimation using Invariants [30.19476775410544]
Outliers are typically the result of incorrect data association or feature matching.
Current approaches for robust estimation fail to produce accurate estimates in the presence of many outliers.
This paper develops an approach to prune outliers.
arXiv Detail & Related papers (2020-11-07T02:09:33Z) - Outlier-Robust Estimation: Hardness, Minimally Tuned Algorithms, and
Applications [25.222024234900445]
This paper introduces two unifying formulations for outlier-robust estimation, Generalized Maximum Consensus (G-MC) and Generalized Truncated Least Squares (G-TLS).
Our first contribution is a proof that outlier-robust estimation is inapproximable: in the worst case, it is impossible to (even approximately) find the set of outliers.
We propose the first minimally tuned algorithms for outlier rejection, that dynamically decide how to separate inliers from outliers.
arXiv Detail & Related papers (2020-07-29T21:06:13Z) - High-Dimensional Robust Mean Estimation via Gradient Descent [73.61354272612752]
We show that the problem of robust mean estimation in the presence of a constant adversarial fraction can be solved by gradient descent.
Our work establishes an intriguing connection between non-convex optimization and robust statistics.
arXiv Detail & Related papers (2020-05-04T10:48:04Z)
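The last entry's idea, that gradient descent on a suitable objective can yield a robust mean, can be sketched in one dimension. This is not the paper's high-dimensional estimator; it is a toy illustration using the Huber loss, whose clipped gradient caps each outlier's influence, with all parameter values (`delta`, `lr`, the 10% contamination) chosen for the example.

```python
import numpy as np

def robust_mean_gd(x, delta=1.0, lr=0.1, iters=500):
    """Toy 1-D sketch: estimate a location parameter by gradient descent
    on the Huber loss, which caps the influence of large residuals."""
    mu = float(np.median(x))  # robust initialization
    for _ in range(iters):
        residuals = x - mu
        psi = np.clip(residuals, -delta, delta)  # clipped (Huber) influence
        mu += lr * psi.mean()  # step opposite the loss gradient
    return mu

rng = np.random.default_rng(0)
data = np.concatenate([
    rng.normal(0.0, 1.0, size=900),   # inliers centered at 0
    rng.normal(50.0, 1.0, size=100),  # a 10% fraction of gross outliers
])
```

On this data the sample mean is pulled to roughly 5 by the outliers, while the Huber-based estimate stays close to the inlier center at 0.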
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.