Leximax Approximations and Representative Cohort Selection
- URL: http://arxiv.org/abs/2205.01157v2
- Date: Tue, 17 May 2022 18:32:04 GMT
- Title: Leximax Approximations and Representative Cohort Selection
- Authors: Monika Henzinger, Charlotte Peale, Omer Reingold, Judy Hanwen Shen
- Abstract summary: Finding a representative cohort from a broad pool of candidates is a goal that arises in many contexts.
Finding a leximax solution can be highly dependent on small variations in the utility of certain groups.
We show that finding an integer solution to leximax cohort selection with linear utilities is NP-Hard.
- Score: 10.55182802721649
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Finding a representative cohort from a broad pool of candidates is a goal
that arises in many contexts such as choosing governing committees and consumer
panels. While there are many ways to define the degree to which a cohort
represents a population, a very appealing solution concept is lexicographic
maximality (leximax), which offers a natural (Pareto-optimal-like)
interpretation that the utility of no population can be increased without
decreasing the utility of a population that is already worse off. However,
finding a leximax solution can be highly dependent on small variations in the
utility of certain groups. In this work, we explore new notions of approximate
leximax solutions with three distinct motivations: better algorithmic
efficiency, exploiting significant utility improvements, and robustness to
noise. Among other definitional contributions, we give a new notion of an
approximate leximax that satisfies a similarly appealing semantic
interpretation and relate it to algorithmically-feasible approximate leximax
notions. When group utilities are linear over cohort candidates, we give an
efficient polynomial-time algorithm for finding a leximax distribution over
cohort candidates in the exact as well as in the approximate setting.
Furthermore, we show that finding an integer solution to leximax cohort
selection with linear utilities is NP-Hard.
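For intuition about the polynomial-time claim in the linear-utility case, the sketch below shows the standard "iterated max-min" sequence of linear programs for computing a leximax distribution over cohort candidates. This is a generic illustration under the assumption that each group's utility is linear in the distribution p over candidate cohorts; it is not the paper's exact algorithm, and the names `U`, `leximax_distribution`, and the SciPy-based saturation test are illustrative choices rather than anything taken from the paper.

```python
# Hedged sketch: leximax distribution over cohort candidates with linear group
# utilities, via the classic iterated max-min sequence of LPs.
# NOTE: a generic illustration, not the paper's algorithm; U and the helper
# names below are assumptions made for this example.
import numpy as np
from scipy.optimize import linprog


def leximax_distribution(U, tol=1e-9):
    """U[g, j] = utility of group g for candidate cohort j (linear utilities).
    Returns a distribution p over cohorts that is (approximately) leximax."""
    G, M = U.shape
    frozen = {}                 # group index -> utility level fixed so far
    active = set(range(G))
    p = np.full(M, 1.0 / M)     # fallback distribution

    while active:
        # Variables x = [p_1 ... p_M, t]; maximize t, the smallest utility
        # among still-active groups, subject to already-frozen levels.
        c = np.zeros(M + 1)
        c[-1] = -1.0                               # linprog minimizes, so -t
        A_ub, b_ub = [], []
        for g in active:                           # t - U_g(p) <= 0
            A_ub.append(np.append(-U[g], 1.0)); b_ub.append(0.0)
        for g, level in frozen.items():            # U_g(p) >= level
            A_ub.append(np.append(-U[g], 0.0)); b_ub.append(-level)
        A_eq = [np.append(np.ones(M), 0.0)]        # probabilities sum to 1
        b_eq = [1.0]
        bounds = [(0, None)] * M + [(None, None)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=bounds, method="highs")
        t = -res.fun
        p = res.x[:M]

        # Freeze every active group whose utility cannot be pushed above t
        # while all other constraints (including t itself) are maintained.
        bounds_fixed_t = [(0, None)] * M + [(t, t)]
        newly_frozen = []
        for g in list(active):
            c_g = np.append(-U[g], 0.0)            # maximize U_g(p)
            best = linprog(c_g, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                           bounds=bounds_fixed_t, method="highs")
            if -best.fun <= t + tol:
                frozen[g] = t
                newly_frozen.append(g)
        if not newly_frozen:                       # numerical safeguard
            g = min(active, key=lambda g: float(U[g] @ p))
            frozen[g] = t
            newly_frozen.append(g)
        active -= set(newly_frozen)

    return p


# Toy usage: three groups, four candidate cohorts.
U = np.array([[1.0, 0.0, 0.5, 0.2],
              [0.0, 1.0, 0.5, 0.2],
              [0.2, 0.2, 0.2, 1.0]])
print(leximax_distribution(U))
```

Each round solves at most G + 1 linear programs and freezes at least one group, so this sketch needs O(G^2) LP solves for G groups, which matches the flavor of the polynomial-time claim for distributions; the NP-hardness result in the abstract concerns only the integer (single deterministic cohort) version.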
Related papers
- DALex: Lexicase-like Selection via Diverse Aggregation [6.394522608656896]
We show that DALex (for Diversely Aggregated Lexicase) achieves significant speedups over lexicase selection and its relaxed variants.
Results on program synthesis, deep learning, symbolic regression, and learning systems confirm these speedups.
arXiv Detail & Related papers (2024-01-23T01:20:15Z) - Calculating lexicase selection probabilities is NP-Hard [0.0]
I prove that this problem, which I name lex-prob, is NP-Hard.
I achieve this proof by reducing SAT, a well-known NP-Complete problem, to lex-prob in polynomial time.
arXiv Detail & Related papers (2023-01-17T06:51:44Z) - Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of non-convex problems.
We show that for weakly-convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z) - Optimal Gradient-based Algorithms for Non-concave Bandit Optimization [76.57464214864756]
This work considers a large family of bandit problems where the unknown underlying reward function is non-concave.
Our algorithms are based on a unified zeroth-order optimization paradigm that applies in great generality.
We show that the standard optimistic algorithms are sub-optimal by dimension factors.
arXiv Detail & Related papers (2021-07-09T16:04:24Z) - Preference learning along multiple criteria: A game-theoretic perspective [97.94912276610002]
We generalize the notion of a von Neumann winner to the multi-criteria setting by taking inspiration from Blackwell's approachability.
Our framework allows for non-linear aggregation of preferences across criteria, and generalizes the linearization-based approach from multi-objective optimization.
We show that the Blackwell winner of a multi-criteria problem instance can be computed as the solution to a convex optimization problem.
arXiv Detail & Related papers (2021-05-05T03:23:11Z) - Certifiably Polynomial Algorithm for Best Group Subset Selection [0.9667631210393929]
Best group subset selection aims to choose a small part of non-overlapping groups to achieve the best interpretability on the response variable.
We propose a group-splicing algorithm that iteratively detects effective groups and excludes the ineffective ones.
We demonstrate the efficiency and accuracy of our proposal by comparing it against state-of-the-art algorithms on both synthetic and real-world datasets.
arXiv Detail & Related papers (2021-04-23T03:05:11Z) - Online Model Selection for Reinforcement Learning with Function Approximation [50.008542459050155]
We present a meta-algorithm that adapts to the optimal complexity with $\tilde{O}(L^{5/6} T^{2/3})$ regret.
We also show that the meta-algorithm automatically admits significantly improved instance-dependent regret bounds.
arXiv Detail & Related papers (2020-11-19T10:00:54Z) - Lazy Greedy Hypervolume Subset Selection from Large Candidate Solution Sets [5.222705629027499]
We propose a new lazy greedy algorithm exploiting the submodular property of the hypervolume indicator.
Experimental results show that the proposed algorithm is hundreds of times faster than the original greedy inclusion algorithm.
arXiv Detail & Related papers (2020-07-04T09:19:45Z) - Extreme Algorithm Selection With Dyadic Feature Representation [78.13985819417974]
We propose the setting of extreme algorithm selection (XAS) where we consider fixed sets of thousands of candidate algorithms.
We assess the applicability of state-of-the-art AS techniques to the XAS setting and propose approaches leveraging a dyadic feature representation.
arXiv Detail & Related papers (2020-01-29T09:40:58Z) - Optimal Clustering from Noisy Binary Feedback [75.17453757892152]
We study the problem of clustering a set of items from binary user feedback.
We devise an algorithm with a minimal cluster recovery error rate.
For adaptive selection, we develop an algorithm inspired by the derivation of the information-theoretical error lower bounds.
arXiv Detail & Related papers (2019-10-14T09:18:26Z)