On the Correctness of the Generalized Isotonic Recursive Partitioning
Algorithm
- URL: http://arxiv.org/abs/2401.04847v2
- Date: Thu, 11 Jan 2024 01:40:56 GMT
- Title: On the Correctness of the Generalized Isotonic Recursive Partitioning
Algorithm
- Authors: Joong-Ho Won and Jihan Jung
- Abstract summary: The paper presents an in-depth analysis of the generalized isotonic recursive partitioning (GIRP) algorithm for fitting isotonic models under separable convex losses.
The GIRP algorithm possesses the attractive feature that, at each step of the algorithm, the intermediate solution satisfies the isotonicity constraint.
A small modification of the GIRP suffices to obtain a correct solution and preserve the desired property that all the intermediate solutions are isotonic.
- Score: 9.482025609011123
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper presents an in-depth analysis of the generalized isotonic
recursive partitioning (GIRP) algorithm for fitting isotonic models under
separable convex losses, proposed by Luss and Rosset [J. Comput. Graph.
Statist., 23 (2014), pp. 192--201] for differentiable losses and extended by
Painsky and Rosset [IEEE Trans. Pattern Anal. Mach. Intell., 38 (2016), pp.
308--321] for nondifferentiable losses. The GIRP algorithm possesses the
attractive feature that, at each step of the algorithm, the intermediate
solution satisfies the isotonicity constraint. The paper begins with an example
showing that the
GIRP algorithm as described in the literature may fail to produce an isotonic
model, suggesting that the existence and uniqueness of the solution to the
isotonic regression problem must be carefully addressed. It proceeds with
showing that, among possibly many solutions, there indeed exists a solution
that can be found by recursive binary partitioning of the set of observed data.
A small modification of the GIRP algorithm suffices to obtain a correct
solution and preserve the desired property that all the intermediate solutions
are isotonic. This proposed modification includes a proper choice of
intermediate solutions and a simplification of the partitioning step from
ternary to binary.
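The recursive binary partitioning idea can be illustrated with a minimal 1-D sketch under squared loss: fit a block with its mean, then split it at the cut that most reduces the loss, but only if the two resulting block means stay ordered. This is a simplified, hypothetical illustration of the partitioning principle, not the authors' algorithm or code; the helper names are invented.

```python
def _fit_block(y, lo, hi, out):
    """Fit the block y[lo:hi] with its mean, then try to split it in two."""
    block = y[lo:hi]
    mean = sum(block) / len(block)
    sse_whole = sum((v - mean) ** 2 for v in block)

    best_cut, best_sse = None, sse_whole
    for k in range(lo + 1, hi):  # candidate cut between positions k-1 and k
        left, right = y[lo:k], y[k:hi]
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        if ml > mr:              # this split would violate isotonicity; skip
            continue
        sse = (sum((v - ml) ** 2 for v in left)
               + sum((v - mr) ** 2 for v in right))
        if sse < best_sse - 1e-12:
            best_cut, best_sse = k, sse

    if best_cut is None:         # no admissible improving split: keep the mean
        for i in range(lo, hi):
            out[i] = mean
    else:                        # recurse on the two halves
        _fit_block(y, lo, best_cut, out)
        _fit_block(y, best_cut, hi, out)


def isotonic_by_partitioning(y):
    """Isotonic fit of a 1-D sequence by recursive binary partitioning."""
    out = [0.0] * len(y)
    _fit_block(y, 0, len(y), out)
    return out
```

Because each accepted split keeps the left mean below the right mean, every intermediate fit is itself isotonic, which is the property the paper shows must be handled with care for general (possibly nondifferentiable) separable convex losses.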
Related papers
- The Differentiable Feasibility Pump [49.55771920271201]
This paper shows that the traditional feasibility pump and many of its follow-ups can be seen as gradient-descent algorithms with specific parameters.
A central aspect of this reinterpretation is observing that the traditional algorithm differentiates the solution of the linear relaxation with respect to its cost.
arXiv Detail & Related papers (2024-11-05T22:26:51Z) - A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we present a robust phase retrieval problem where the task is to recover an unknown signal.
Our proposed approach avoids the need for computationally expensive spectral initialization, using a simple gradient step that is robust to outliers.
arXiv Detail & Related papers (2024-09-07T06:37:23Z) - Alternating Minimization Schemes for Computing Rate-Distortion-Perception Functions with $f$-Divergence Perception Constraints [10.564071872770146]
We study the computation of the rate-distortion-perception function (RDPF) for discrete memoryless sources.
We characterize the optimal parametric solutions.
We provide sufficient conditions on the distortion and the perception constraints.
arXiv Detail & Related papers (2024-08-27T12:50:12Z) - Bregman-divergence-based Arimoto-Blahut algorithm [53.64687146666141]
We generalize the Arimoto-Blahut algorithm to a general function defined over a Bregman-divergence system.
We propose a convex-optimization-free algorithm that can be applied to classical and quantum rate-distortion theory.
arXiv Detail & Related papers (2024-08-10T06:16:24Z) - ADMM-MM Algorithm for General Tensor Decomposition [7.0326155922512275]
The proposed algorithm supports three basic loss functions ($\ell_1$-loss, $\ell_2$-loss, and KL divergence) and various low-rank tensor decomposition models (CP, Tucker, TT, and TR decompositions).
We show that wide-range applications can be solved by the proposed algorithm, and can be easily extended to any established tensor decomposition models in a plug-and-play manner.
arXiv Detail & Related papers (2023-12-19T00:17:34Z) - On permutation symmetries in Bayesian neural network posteriors: a
variational perspective [8.310462710943971]
We show that there is essentially no loss barrier between the local solutions of gradient descent.
This raises questions for approximate inference in Bayesian neural networks.
We propose a matching algorithm to search for linearly connected solutions.
arXiv Detail & Related papers (2023-10-16T08:26:50Z) - Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling the differentiable and non-differentiable parts separately, linearizing only the smooth components.
arXiv Detail & Related papers (2023-02-24T18:41:48Z) - Decentralized Inexact Proximal Gradient Method With Network-Independent
Stepsizes for Convex Composite Optimization [39.352542703876104]
This paper considers decentralized convex composite optimization over undirected and connected networks.
A novel CTA (Combine-Then-Adapt)-based decentralized algorithm is proposed under uncoordinated network-independent constant stepsizes.
arXiv Detail & Related papers (2023-02-07T03:50:38Z) - Adaptive Stochastic Optimisation of Nonconvex Composite Objectives [2.1700203922407493]
We propose and analyse a family of generalised composite mirror descent algorithms.
With adaptive step sizes, the proposed algorithms converge without requiring prior knowledge of the problem.
We exploit the low-dimensional structure of the decision sets for high-dimensional problems.
arXiv Detail & Related papers (2022-11-21T18:31:43Z) - Lattice partition recovery with dyadic CART [79.96359947166592]
We study piecewise-constant signals corrupted by additive Gaussian noise over a $d$-dimensional lattice.
Data of this form naturally arise in a host of applications, and the tasks of signal detection or testing, de-noising and estimation have been studied extensively in the statistical and signal processing literature.
In this paper we consider instead the problem of partition recovery, i.e., of estimating the partition of the lattice induced by the constancy regions of the unknown signal.
We prove that a DCART-based procedure consistently estimates the underlying partition at a rate of order $\sigma^2 k^*$.
arXiv Detail & Related papers (2021-05-27T23:41:01Z) - Partition-based formulations for mixed-integer optimization of trained
ReLU neural networks [66.88252321870085]
This paper introduces a class of mixed-integer formulations for trained ReLU neural networks.
At one extreme, one partition per input recovers the convex hull of a node, i.e., the tightest possible formulation for each node.
arXiv Detail & Related papers (2021-02-08T17:27:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.