Group-wise oracle-efficient algorithms for online multi-group learning
- URL: http://arxiv.org/abs/2406.05287v1
- Date: Fri, 7 Jun 2024 23:00:02 GMT
- Title: Group-wise oracle-efficient algorithms for online multi-group learning
- Authors: Samuel Deng, Daniel Hsu, Jingwen Liu
- Abstract summary: We study the problem of online multi-group learning, a learning model in which an online learner must simultaneously achieve small prediction regret on a large collection of subsequences corresponding to a family of groups.
In this paper, we design such oracle-efficient algorithms with sublinear regret under a variety of settings, including the i.i.d. setting, the adversarial setting with smoothed context distributions, and the adversarial transductive setting.
- Score: 12.664869982542895
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the problem of online multi-group learning, a learning model in which an online learner must simultaneously achieve small prediction regret on a large collection of (possibly overlapping) subsequences corresponding to a family of groups. Groups are subsets of the context space, and in fairness applications, they may correspond to subpopulations defined by expressive functions of demographic attributes. In contrast to previous work on this learning model, we consider scenarios in which the family of groups is too large to explicitly enumerate, and hence we seek algorithms that only access groups via an optimization oracle. In this paper, we design such oracle-efficient algorithms with sublinear regret under a variety of settings, including: (i) the i.i.d. setting, (ii) the adversarial setting with smoothed context distributions, and (iii) the adversarial transductive setting.
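To make the learning model concrete, below is a minimal sketch of how group-wise regret can be measured on a toy problem. It assumes a small, explicitly enumerable family of groups and a finite benchmark class, so it only illustrates the objective; the paper's setting involves group families too large to enumerate, accessed only through an optimization oracle, and this is not the paper's algorithm.

```python
import numpy as np

def groupwise_regret(contexts, learner_preds, labels, groups, hypotheses, loss):
    """Regret of an online learner on each group's subsequence.

    contexts:      sequence of contexts x_1..x_T
    learner_preds: the learner's predictions on those rounds
    labels:        the observed labels y_1..y_T
    groups:        dict name -> indicator function g(x) (a subset of the context space)
    hypotheses:    finite benchmark class of predictors h(x)
    loss:          per-round loss function loss(prediction, label)
    """
    regrets = {}
    for name, g in groups.items():
        idx = [t for t, x in enumerate(contexts) if g(x)]  # rounds on which the group is active
        if not idx:
            continue
        learner_loss = sum(loss(learner_preds[t], labels[t]) for t in idx)
        best_in_class = min(
            sum(loss(h(contexts[t]), labels[t]) for t in idx) for h in hypotheses
        )
        regrets[name] = learner_loss - best_in_class  # sublinear growth in |idx| is the goal
    return regrets

# Toy usage: two overlapping groups over 1-d contexts and a class of threshold predictors.
rng = np.random.default_rng(0)
T = 200
xs = rng.uniform(-1, 1, size=T)
ys = (xs > 0).astype(int)
preds = rng.integers(0, 2, size=T)  # stand-in for an online learner's predictions
groups = {"x >= -0.5": lambda x: x >= -0.5, "x <= 0.5": lambda x: x <= 0.5}
hyps = [lambda x, c=c: int(x > c) for c in np.linspace(-1, 1, 11)]
zero_one = lambda p, y: int(p != y)
print(groupwise_regret(xs, preds, ys, groups, hyps, zero_one))
```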
Related papers
- Enhancing Neural Subset Selection: Integrating Background Information into Set Representations [53.15923939406772]
We show that when the target value is conditioned on both the input set and subset, it is essential to incorporate an invariant sufficient statistic of the superset into the subset of interest.
This ensures that the output value remains invariant to permutations of the subset and its corresponding superset, enabling identification of the specific superset from which the subset originated.
arXiv Detail & Related papers (2024-02-05T16:09:35Z) - Multi-group Learning for Hierarchical Groups [12.473780585666768]
We extend the study of multi-group learning to the natural case where the groups are hierarchically structured.
We design an algorithm for this setting that outputs an interpretable and deterministic decision tree predictor with near-optimal sample complexity.
arXiv Detail & Related papers (2024-02-01T01:06:32Z) - A structured regression approach for evaluating model performance across intersectional subgroups [53.91682617836498]
Disaggregated evaluation is a central task in AI fairness assessment, where the goal is to measure an AI system's performance across different subgroups.
We introduce a structured regression approach to disaggregated evaluation that we demonstrate can yield reliable system performance estimates even for very small subgroups.
arXiv Detail & Related papers (2024-01-26T14:21:45Z) - Learning by Grouping: A Multilevel Optimization Framework for Improving Fairness in Classification without Losing Accuracy [19.84719054826755]
In some cases, AI systems can be unfair by exhibiting bias or discrimination against certain social groups.
We propose a novel machine learning framework where the ML model learns to group a diverse set of problems into distinct subgroups and to solve each subgroup separately.
Our proposed framework involves three stages of learning, which are formulated as a three-level optimization problem.
arXiv Detail & Related papers (2023-04-02T08:45:08Z) - Group conditional validity via multi-group learning [5.797821810358083]
We consider the problem of distribution-free conformal prediction and the criterion of group conditional validity.
Existing methods achieve such guarantees under either restrictive grouping structure or distributional assumptions.
We propose a simple reduction to the problem of achieving validity guarantees for individual populations by leveraging algorithms for a problem called multi-group learning.
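As a small illustration of the group conditional validity criterion mentioned above, here is a hypothetical split-conformal variant that computes a separate threshold within each group; this sketches the criterion only, not the reduction proposed in that paper.

```python
import numpy as np

def group_conditional_thresholds(residuals, group_masks, alpha=0.1):
    """Split-conformal thresholds computed separately within each group, so that
    coverage holds per group (under exchangeability within each group)."""
    thresholds = {}
    for name, mask in group_masks.items():
        r = np.sort(residuals[mask])
        n = len(r)
        k = min(n - 1, int(np.ceil((1 - alpha) * (n + 1))) - 1)  # conformal rank
        thresholds[name] = r[k]
    return thresholds

# Toy calibration set with two overlapping groups that have different noise levels.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=500)
residuals = np.abs(rng.normal(0, np.where(x > 0.5, 2.0, 0.5)))
groups = {"x > 0.5": x > 0.5, "x < 0.8": x < 0.8}
print(group_conditional_thresholds(residuals, groups, alpha=0.1))
```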
arXiv Detail & Related papers (2023-03-07T15:51:03Z) - Fair and skill-diverse student group formation via constrained k-way graph partitioning [65.29889537564455]
This work introduces an unsupervised algorithm for fair and skill-diverse student group formation.
The skill sets of students are determined using unsupervised dimensionality reduction of course mark data via the Laplacian eigenmap.
The problem is formulated as a constrained graph partitioning problem, whereby the diversity of skill sets in each group is maximised.
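A rough illustration of that pipeline, assuming toy course-mark data, scikit-learn's SpectralEmbedding as the Laplacian eigenmap step, and a greedy round-robin stand-in for the constrained k-way partitioning:

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding

# Toy course-mark matrix: rows are students, columns are marks (0-100) in six courses.
rng = np.random.default_rng(0)
marks = np.clip(rng.normal(70, 15, size=(40, 6)), 0, 100)

# Laplacian-eigenmap-style reduction of the marks to a low-dimensional "skill" space.
skills = SpectralEmbedding(n_components=2, affinity="rbf").fit_transform(marks)

# Greedy stand-in for the constrained partitioning step: round-robin assignment of
# students sorted along the first skill axis, so each group mixes skill levels.
n_groups = 5
order = np.argsort(skills[:, 0])
assignment = np.empty(len(order), dtype=int)
assignment[order] = np.arange(len(order)) % n_groups
print([int(np.sum(assignment == g)) for g in range(n_groups)])
```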
arXiv Detail & Related papers (2023-01-12T14:02:49Z) - Outlier-Robust Group Inference via Gradient Space Clustering [50.87474101594732]
Existing methods can improve the worst-group performance, but they require group annotations, which are often expensive and sometimes infeasible to obtain.
We address the problem of learning group annotations in the presence of outliers by clustering the data in the space of gradients of the model parameters.
We show that data in the gradient space has a simpler structure while preserving information about minority groups and outliers, making it suitable for standard clustering methods like DBSCAN.
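A small sketch of the gradient-space clustering idea described above, assuming a toy 2-d logistic model with analytically computed per-sample gradients and hand-picked DBSCAN parameters; the paper's actual pipeline differs.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def per_sample_gradients(w, X, y):
    """Per-example gradient of the logistic loss w.r.t. the weights w:
    grad_i = (sigmoid(w . x_i) - y_i) * x_i."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return (p - y)[:, None] * X

# Toy data: a majority group, a minority group, and a few outliers, none labeled as such.
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal([2.0, 0.0], 0.3, size=(200, 2)),   # majority group
    rng.normal([-2.0, 0.0], 0.3, size=(30, 2)),   # minority group
    rng.normal(0.0, 4.0, size=(5, 2)),            # outliers
])
y = np.concatenate([np.ones(200), np.zeros(30), rng.integers(0, 2, size=5).astype(float)])

w = rng.normal(size=2)                            # a (possibly biased) reference model
grads = per_sample_gradients(w, X, y)
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(grads)
print(np.unique(labels, return_counts=True))      # label -1 marks points treated as outliers
```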
arXiv Detail & Related papers (2022-10-13T06:04:43Z) - Addressing Missing Sources with Adversarial Support-Matching [8.53946780558779]
We investigate a scenario in which the absence of certain data is linked to the second level of a two-level hierarchy in the data.
Inspired by the idea of protected groups from algorithmic fairness, we refer to the partitions carved by this second level as "subgroups".
We make use of an additional, diverse but unlabeled dataset, called the "deployment set", to learn a representation that is invariant to subgroup membership.
arXiv Detail & Related papers (2022-03-24T16:19:19Z) - Towards Group Robustness in the presence of Partial Group Labels [61.33713547766866]
Spurious correlations between input samples and the target labels can wrongly direct neural network predictions.
We propose an algorithm that optimizes for the worst-off group assignments from a constraint set.
We show improvements in the minority group's performance while preserving overall aggregate accuracy across groups.
arXiv Detail & Related papers (2022-01-10T22:04:48Z) - Simple and near-optimal algorithms for hidden stratification and multi-group learning [13.337579367787253]
This paper studies the structure of solutions to the multi-group learning problem.
It provides simple and near-optimal algorithms for the learning problem.
arXiv Detail & Related papers (2021-12-22T19:16:24Z) - Focus on the Common Good: Group Distributional Robustness Follows [47.62596240492509]
This paper proposes a new and simple algorithm that explicitly encourages learning of features that are shared across various groups.
While Group-DRO focuses on the groups with the worst regularized loss, focusing instead on groups that enable better performance even on other groups can lead to learning of shared/common features.
arXiv Detail & Related papers (2021-10-06T09:47:41Z)
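A rough, hypothetical sketch of the contrast drawn in the entry above: Group-DRO-style weights upweight the worst-loss group, while a gradient-alignment heuristic (an illustration only, not that paper's algorithm) upweights groups whose updates also help the other groups.

```python
import numpy as np

def group_dro_weights(group_losses, eta=1.0):
    """Group-DRO-style weights: exponentially upweight the groups with the worst loss."""
    w = np.exp(eta * np.asarray(group_losses))
    return w / w.sum()

def common_good_weights(group_grads, eps=1e-8):
    """Hypothetical 'common good' weights: upweight groups whose average gradient
    is aligned with the other groups' gradients (cosine similarity)."""
    G = np.asarray(group_grads, dtype=float)
    G = G / (np.linalg.norm(G, axis=1, keepdims=True) + eps)
    alignment = G @ G.T                    # pairwise cosine similarities
    benefit = alignment.sum(axis=1) - 1.0  # alignment with the *other* groups
    w = np.clip(benefit, 0.0, None) + eps
    return w / w.sum()

# Three groups: two whose gradients agree, and one with the highest loss whose
# gradient conflicts with the rest.
losses = [0.4, 0.5, 1.2]
grads = [[1.0, 0.1], [0.9, 0.2], [-1.0, 0.3]]
print("Group-DRO weights  :", np.round(group_dro_weights(losses), 3))
print("Common-good weights:", np.round(common_good_weights(grads), 3))
```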
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.