A Novel Framework for Improving the Breakdown Point of Robust Regression Algorithms
- URL: http://arxiv.org/abs/2305.12220v1
- Date: Sat, 20 May 2023 15:59:33 GMT
- Title: A Novel Framework for Improving the Breakdown Point of Robust Regression Algorithms
- Authors: Zheyi Fan, Szu Hui Ng, Qingpei Hu
- Abstract summary: We present an effective framework for improving the breakdown point of robust regression algorithms.
We derive the consistent robust regression algorithm with iterative local search (CORALS).
- Score: 1.9594639581421422
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present an effective framework for improving the breakdown point of robust
regression algorithms. Robust regression has attracted widespread attention due
to the ubiquity of outliers, which significantly affect the estimation results.
However, many existing robust least-squares regression algorithms suffer from a
low breakdown point, as they become stuck around local optima when facing
severe attacks. By expanding on previous work, we propose a novel framework
that enhances the breakdown point of these algorithms by inserting a prior
distribution in each iteration step, and adjusting the prior distribution
according to historical information. We apply this framework to a specific
algorithm and derive the consistent robust regression algorithm with iterative
local search (CORALS). The relationship between CORALS and momentum gradient
descent is described, and a detailed proof of the theoretical convergence of
CORALS is presented. Finally, we demonstrate that the breakdown point of CORALS
is indeed higher than that of the algorithm from which it is derived. We apply
the proposed framework to other robust algorithms, and show that the improved
algorithms achieve better results than the original algorithms, indicating the
effectiveness of the proposed framework.
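To make the idea concrete, the sketch below shows one way such a framework could look in practice: a hard-thresholding-style robust least-squares step whose iterates are pulled toward a prior built from historical estimates, in the spirit of momentum. This is a minimal illustration under assumed details, not the authors' CORALS algorithm; the function names and the parameters outlier_frac, prior_weight, and decay, as well as the exponential-moving-average prior, are all hypothetical choices.

```python
import numpy as np

def robust_ls_step(X, y, beta, outlier_frac):
    """One robust least-squares step: refit on the samples with the smallest
    residuals under the current estimate (a hard-thresholding-style heuristic,
    used here as a hypothetical stand-in for the paper's inner solver)."""
    resid = np.abs(y - X @ beta)
    keep = np.argsort(resid)[: int((1.0 - outlier_frac) * len(y))]
    beta_new, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
    return beta_new

def prior_guided_robust_regression(X, y, outlier_frac=0.3,
                                   prior_weight=0.5, decay=0.9, iters=50):
    """Illustrative framework: each iterate is blended with a prior built from
    historical estimates (an exponential moving average here), which damps
    abrupt jumps into poor local optima under heavy contamination."""
    beta = np.zeros(X.shape[1])
    prior = beta.copy()
    for _ in range(iters):
        candidate = robust_ls_step(X, y, beta, outlier_frac)
        # Pull the candidate toward the historical prior (momentum-like step).
        beta = (1.0 - prior_weight) * candidate + prior_weight * prior
        # Update the prior using historical information.
        prior = decay * prior + (1.0 - decay) * beta
    return beta

# Toy usage: 200 samples, 5 features, 30% of responses grossly corrupted.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
beta_true = rng.normal(size=5)
y = X @ beta_true + 0.1 * rng.normal(size=200)
y[:60] += 20.0  # gross outliers
print(prior_guided_robust_regression(X, y))
```

Blending each candidate estimate with the historical prior is what the abstract's momentum analogy suggests: the prior resists single-iteration jumps toward corrupted fits, which is the intuition behind the improved breakdown point.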
Related papers
- Robust empirical risk minimization via Newton's method [9.797319790710711]
A new variant of Newton's method for empirical risk minimization is studied.
The gradient and Hessian of the objective function are replaced by robust estimators.
An algorithm for obtaining robust Newton directions based on the conjugate gradient method is also proposed.
arXiv Detail & Related papers (2023-01-30T18:54:54Z) - Best Subset Selection in Reduced Rank Regression [1.4699455652461724]
We show that our algorithm achieves the reduced-rank estimation with significant probability.
Numerical studies and an application to cancer studies demonstrate its effectiveness and scalability.
arXiv Detail & Related papers (2022-11-29T02:51:15Z) - HARRIS: Hybrid Ranking and Regression Forests for Algorithm Selection [75.84584400866254]
We propose a new algorithm selector leveraging special forests, combining the strengths of both approaches while alleviating their weaknesses.
HARRIS' decisions are based on a forest model whose trees are built by optimizing a hybrid ranking and regression loss function.
arXiv Detail & Related papers (2022-10-31T14:06:11Z) - Regret Bounds for Expected Improvement Algorithms in Gaussian Process
Bandit Optimization [63.8557841188626]
The expected improvement (EI) algorithm is one of the most popular strategies for optimization under uncertainty.
We propose a variant of EI with a standard incumbent defined via the GP predictive mean.
We show that our algorithm converges, and achieves a cumulative regret bound of $\mathcal{O}(\gamma_T \sqrt{T})$.
arXiv Detail & Related papers (2022-03-15T13:17:53Z) - Amortized Implicit Differentiation for Stochastic Bilevel Optimization [53.12363770169761]
We study a class of algorithms for solving bilevel optimization problems in both deterministic and stochastic settings.
We exploit a warm-start strategy to amortize the estimation of the exact gradient.
By using this framework, our analysis shows these algorithms to match the computational complexity of methods that have access to an unbiased estimate of the gradient.
arXiv Detail & Related papers (2021-11-29T15:10:09Z) - Robust recovery for stochastic block models [16.74630355427558]
We develop an efficient algorithm for weak recovery in a robust version of the block model.
Our results show that there is no price of robustness in the block model.
arXiv Detail & Related papers (2021-11-16T15:43:00Z) - Robust Regression Revisited: Acceleration and Improved Estimation Rates [25.54653340884806]
We study fast algorithms for statistical regression problems under the strong contamination model.
The goal is to approximately optimize a generalized linear model (GLM) given adversarially corrupted samples.
We present nearly-linear time algorithms for robust regression problems with improved runtime or estimation guarantees.
arXiv Detail & Related papers (2021-06-22T17:21:56Z) - Correlated Bandits for Dynamic Pricing via the ARC algorithm [2.7564955518050693]
Asymptotic Randomised Control provides a rigorous approximation to the optimal strategy for a wide class of Bayesian bandits.
It allows a decision maker to observe signals in addition to their rewards, to incorporate correlations between the outcomes of different choices, and to have nontrivial dynamics for their estimates.
arXiv Detail & Related papers (2021-02-08T14:54:26Z) - An Asymptotically Optimal Primal-Dual Incremental Algorithm for
Contextual Linear Bandits [129.1029690825929]
We introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.
We establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits.
arXiv Detail & Related papers (2020-10-23T09:12:47Z) - Variance-Reduced Off-Policy Memory-Efficient Policy Search [61.23789485979057]
Off-policy policy optimization is a challenging problem in reinforcement learning.
Off-policy algorithms are memory-efficient and capable of learning from off-policy samples.
arXiv Detail & Related papers (2020-09-14T16:22:46Z) - Second-Order Guarantees in Centralized, Federated and Decentralized
Nonconvex Optimization [64.26238893241322]
Simple algorithms have been shown to lead to good empirical results in many contexts.
Several works have pursued rigorous analytical justification for studying nonconvex optimization problems.
A key insight in these analyses is that perturbations play a critical role in allowing local descent algorithms to escape saddle points.
arXiv Detail & Related papers (2020-03-31T16:54:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.