Unsupervised Learning for Robust Fitting: A Reinforcement Learning Approach
- URL: http://arxiv.org/abs/2103.03501v1
- Date: Fri, 5 Mar 2021 07:14:00 GMT
- Title: Unsupervised Learning for Robust Fitting: A Reinforcement Learning Approach
- Authors: Giang Truong, Huu Le, David Suter, Erchuan Zhang, Syed Zulqarnain
Gilani
- Abstract summary: We introduce a novel framework that learns to solve robust model fitting.
Unlike other methods, our work is agnostic to the underlying input features.
We empirically show that our method outperforms existing learning approaches.
- Score: 25.851792661168698
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robust model fitting is a core algorithm in a large number of computer vision
applications. Solving this problem efficiently for datasets highly contaminated
with outliers is, however, still challenging due to the underlying
computational complexity. Recent literature has focused on learning-based
algorithms. However, most approaches are supervised which require a large
amount of labelled training data. In this paper, we introduce a novel
unsupervised learning framework that learns to directly solve robust model
fitting. Unlike other methods, our work is agnostic to the underlying input
features, and can be easily generalized to a wide variety of LP-type problems
with quasi-convex residuals. We empirically show that our method outperforms
existing unsupervised learning approaches, and achieves competitive results
compared to traditional methods on several important computer vision problems.
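Robust fitting is commonly posed as consensus maximization: find the model parameters that place the largest number of points within an inlier threshold of the model. As a purely illustrative sketch of that objective (not the paper's reinforcement learning method), the following fits a 2D line by RANSAC-style random sampling and inlier counting; all names and parameters are assumptions:

```python
import random

def ransac_line(points, iters=200, tol=0.5, seed=0):
    """Consensus maximization sketch: repeatedly fit y = a*x + b to a
    minimal 2-point sample and keep the model with the most inliers."""
    rng = random.Random(seed)
    best_model, best_inliers = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:  # skip degenerate vertical samples
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # the residual |y - (a*x + b)| <= tol defines the inlier set
        inliers = sum(1 for x, y in points if abs(y - (a * x + b)) <= tol)
        if inliers > best_inliers:
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# toy data: points on y = 2x + 1 contaminated with gross outliers
pts = [(x, 2 * x + 1) for x in range(10)] + [(1, 40), (3, -25), (7, 90)]
(a, b), n = ransac_line(pts)
print(a, b, n)  # recovers a=2.0, b=1.0 with 10 inliers
```

The exponential number of samples needed as the outlier ratio grows is exactly the computational burden that learning-based approaches like this paper's aim to reduce.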
Related papers
- Simple Ingredients for Offline Reinforcement Learning [86.1988266277766]
Offline reinforcement learning algorithms have proven effective on datasets highly connected to the target downstream task.
We show that existing methods struggle with diverse data: their performance considerably deteriorates as data collected for related but different tasks is simply added to the offline buffer.
We show that scale, more than algorithmic considerations, is the key factor influencing performance.
arXiv Detail & Related papers (2024-03-19T18:57:53Z) - A General Framework for Learning from Weak Supervision [93.89870459388185]
This paper introduces a general framework for learning from weak supervision (GLWS) with a novel algorithm.
Central to GLWS is an Expectation-Maximization (EM) formulation, adeptly accommodating various weak supervision sources.
We also present an advanced algorithm that significantly simplifies the EM computational demands.
arXiv Detail & Related papers (2024-02-02T21:48:50Z) - A Novel Differentiable Loss Function for Unsupervised Graph Neural
Networks in Graph Partitioning [5.22145960878624]
The graph partitioning problem is recognized as an NP-hard problem.
We introduce a novel pipeline employing an unsupervised graph neural network to solve the graph partitioning problem.
We rigorously evaluate our methodology against contemporary state-of-the-art techniques, focusing on two metrics, cuts and balance, and our findings reveal that ours is competitive with these leading methods.
arXiv Detail & Related papers (2023-12-11T23:03:17Z) - Neural Algorithmic Reasoning Without Intermediate Supervision [21.852775399735005]
We focus on learning neural algorithmic reasoning only from the input-output pairs without appealing to the intermediate supervision.
We build a self-supervised objective that can regularise intermediate computations of the model without access to the algorithm trajectory.
We demonstrate that our approach is competitive with its trajectory-supervised counterpart on tasks from the CLRS Algorithmic Reasoning Benchmark.
arXiv Detail & Related papers (2023-06-23T09:57:44Z) - Towards Robust Dataset Learning [90.2590325441068]
We propose a principled, tri-level optimization to formulate the robust dataset learning problem.
Under an abstraction model that characterizes robust vs. non-robust features, the proposed method provably learns a robust dataset.
arXiv Detail & Related papers (2022-11-19T17:06:10Z) - A Novel Plug-and-Play Approach for Adversarially Robust Generalization [26.29269757430314]
We propose a robust framework that employs adversarially robust training to safeguard the machine learning models against perturbed testing data.
We achieve this by incorporating the worst-case additive adversarial error within a fixed budget for each sample during model estimation.
arXiv Detail & Related papers (2022-08-19T17:02:55Z) - What Makes Good Contrastive Learning on Small-Scale Wearable-based
Tasks? [59.51457877578138]
We study contrastive learning on the wearable-based activity recognition task.
This paper presents an open-source PyTorch library, CL-HAR, which can serve as a practical tool for researchers.
arXiv Detail & Related papers (2022-02-12T06:10:15Z) - Simple Stochastic and Online Gradient Descent Algorithms for Pairwise Learning [65.54757265434465]
Pairwise learning refers to learning tasks where the loss function depends on a pair of instances.
Online gradient descent (OGD) is a popular approach to handle streaming data in pairwise learning.
In this paper, we propose simple stochastic and online gradient descent methods for pairwise learning.
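As a purely illustrative sketch of the pairwise-learning setting (not the paper's algorithms), the following applies online gradient descent to a linear scorer with a pairwise hinge loss, pairing each incoming instance with a buffered instance of the opposite label; the function names and the buffering scheme are assumptions:

```python
import random

def pairwise_ogd(stream, dim, lr=0.1, seed=0):
    """OGD sketch for pairwise learning: score s(x) = <w, x>, and for each
    new instance take a hinge-loss gradient step on the margin between it
    and one previously seen instance of the opposite label."""
    rng = random.Random(seed)
    w = [0.0] * dim
    seen = []  # buffer of past (x, y) pairs
    for x, y in stream:
        opposite = [(xo, yo) for xo, yo in seen if yo != y]
        if opposite:
            xo, yo = rng.choice(opposite)
            pos, neg = (x, xo) if y > yo else (xo, x)
            # pairwise hinge loss: max(0, 1 - (s(pos) - s(neg)))
            margin = sum(wi * (p - q) for wi, p, q in zip(w, pos, neg))
            if margin < 1.0:  # hinge active: push pos above neg
                w = [wi + lr * (p - q) for wi, p, q in zip(w, pos, neg)]
        seen.append((x, y))
    return w

# toy stream: label 1 iff the first coordinate is positive
stream = [((1.0, 0.0), 1), ((-1.0, 0.0), 0)] * 20
w = pairwise_ogd(stream, dim=2)
# w[0] ends positive, so positives score above negatives
```

The per-step cost here grows with the buffer, which is why the paper's simple stochastic and online variants matter for streaming data.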
arXiv Detail & Related papers (2021-11-23T18:10:48Z) - Fast Multi-label Learning [19.104773591885884]
The goal of this paper is to provide a simple method, yet with provable guarantees, which can achieve competitive performance without a complex training process.
arXiv Detail & Related papers (2021-08-31T01:07:42Z) - Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
arXiv Detail & Related papers (2020-12-15T16:29:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.