Fairness in Submodular Maximization over a Matroid Constraint
- URL: http://arxiv.org/abs/2312.14299v1
- Date: Thu, 21 Dec 2023 21:12:39 GMT
- Title: Fairness in Submodular Maximization over a Matroid Constraint
- Authors: Marwa El Halabi, Jakub Tarnawski, Ashkan Norouzi-Fard, Thuy-Duong
Vuong
- Abstract summary: Submodular maximization over a matroid constraint is a fundamental problem with various applications in machine learning.
We propose various algorithms and impossibility results offering different trade-offs between quality, fairness, and generality.
- Score: 14.402156575758559
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Submodular maximization over a matroid constraint is a fundamental problem
with various applications in machine learning. Some of these applications
involve decision-making over datapoints with sensitive attributes such as
gender or race. In such settings, it is crucial to guarantee that the selected
solution is fairly distributed with respect to this attribute. Recently,
fairness has been investigated in submodular maximization under a cardinality
constraint in both the streaming and offline settings; however, the more general
problem with a matroid constraint has only been considered in the streaming
setting and only for monotone objectives. This work fills this gap. We propose
various algorithms and impossibility results offering different trade-offs
between quality, fairness, and generality.
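The fairness notion studied in this line of work constrains how the selected set is distributed across the values of the sensitive attribute, typically via lower and upper bounds on the number of selected elements from each group. As a point of reference only, the sketch below is a plain greedy baseline for the monotone case, assuming fairness is given solely as per-group upper bounds and the matroid is available through an independence oracle; it is not the algorithm proposed in the paper, and fair_greedy, its arguments, and the toy data are hypothetical names introduced for illustration.

```python
# Illustrative sketch only -- not the paper's algorithm.
# Greedy selection maximizing a monotone submodular function f while respecting
# (i) a matroid given by an independence oracle and
# (ii) fairness upper bounds on each sensitive-attribute group.
from collections import Counter

def fair_greedy(ground_set, f, is_independent, group, upper_bound):
    """
    ground_set     : iterable of elements
    f              : set function, f(frozenset) -> float (assumed monotone submodular)
    is_independent : callable(frozenset) -> bool, matroid independence oracle
    group          : dict mapping element -> sensitive-attribute value
    upper_bound    : dict mapping attribute value -> max number of selected elements
    """
    selected = frozenset()
    counts = Counter()            # selected elements per group
    remaining = set(ground_set)
    while True:
        best, best_gain = None, 0.0
        for e in remaining:
            if counts[group[e]] >= upper_bound[group[e]]:
                continue          # would violate the fairness upper bound
            if not is_independent(selected | {e}):
                continue          # would violate the matroid constraint
            gain = f(selected | {e}) - f(selected)
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:          # no feasible element with positive marginal gain
            return selected
        selected = selected | {best}
        counts[group[best]] += 1
        remaining.discard(best)

# Toy usage: coverage objective, rank-2 uniform matroid, two groups.
if __name__ == "__main__":
    cover = {"a": {1, 2}, "b": {2, 3}, "c": {3}, "d": {1, 4}}
    f = lambda S: len(set().union(*(cover[e] for e in S)))
    solution = fair_greedy(
        ground_set=cover,
        f=f,
        is_independent=lambda S: len(S) <= 2,   # uniform matroid of rank 2
        group={"a": "g1", "b": "g1", "c": "g2", "d": "g2"},
        upper_bound={"g1": 1, "g2": 1},         # at most one element per group
    )
    print(sorted(solution))                     # selected elements (ties broken by iteration order)
```

Since per-group upper bounds themselves form a partition matroid, this baseline is simply greedy over the intersection of two matroids; enforcing fairness lower bounds and handling non-monotone objectives, which the paper also addresses, require different techniques.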
Related papers
- MultiMax: Sparse and Multi-Modal Attention Learning [60.49318008131978]
SoftMax is a ubiquitous ingredient of modern machine learning algorithms.
We show that sparsity can be achieved by a family of SoftMax variants, but they often require an alternative loss function and do not preserve multi-modality.
We propose MultiMax, which adaptively modulates the output distribution according to the input entry range.
arXiv Detail & Related papers (2024-06-03T10:51:43Z) - Fairness in Streaming Submodular Maximization over a Matroid Constraint [19.27202441717429]
We study the natural generalization of this problem to a matroid constraint.
We give streaming algorithms as well as impossibility results that provide trade-offs between efficiency, quality and fairness.
arXiv Detail & Related papers (2023-05-24T13:10:46Z) - Offline Minimax Soft-Q-learning Under Realizability and Partial Coverage [100.8180383245813]
We propose value-based algorithms for offline reinforcement learning (RL).
We show an analogous result for vanilla Q-functions under a soft margin condition.
Our algorithms' loss functions arise from casting the estimation problems as nonlinear convex optimization problems and Lagrangifying.
arXiv Detail & Related papers (2023-02-05T14:22:41Z) - A Unifying Framework for Online Optimization with Long-Term Constraints [62.35194099438855]
We study online learning problems in which a decision maker has to take a sequence of decisions subject to $m$ long-term constraints.
The goal is to maximize their total reward, while at the same time achieving small cumulative violation across the $T$ rounds.
We present the first best-of-both-worlds type algorithm for this general class of problems, with no-regret guarantees both in the case in which rewards and constraints are selected according to an unknown model, and in the case in which they are selected at each round by an adversary.
arXiv Detail & Related papers (2022-09-15T16:59:19Z) - Group Equality in Adaptive Submodular Maximization [13.619980548779687]
We study the classic submodular maximization problem subject to a group equality constraint under both non-adaptive and adaptive settings.
We develop the first constant-factor approximation algorithm for this problem.
arXiv Detail & Related papers (2022-07-07T15:12:02Z) - Beyond Pointwise Submodularity: Non-Monotone Adaptive Submodular
Maximization subject to a Knapsack Constraint [26.171841086317574]
We study the non-monotone adaptive submodular maximization problem subject to a knapsack constraint.
The state of an item is initially unknown; one must select an item in order to reveal its state.
Our main contribution is to develop a sampling-based randomized algorithm that achieves a constant-factor approximation for maximizing an adaptive submodular function.
arXiv Detail & Related papers (2021-04-10T20:11:11Z) - Frequency-aware Discriminative Feature Learning Supervised by
Single-Center Loss for Face Forgery Detection [89.43987367139724]
Face forgery detection is raising ever-increasing interest in computer vision.
Recent works have achieved sound results, but notable problems remain.
A novel frequency-aware discriminative feature learning framework is proposed in this paper.
arXiv Detail & Related papers (2021-03-16T14:17:17Z) - A Unified Joint Maximum Mean Discrepancy for Domain Adaptation [73.44809425486767]
This paper theoretically derives a unified form of JMMD that is easy to optimize.
From the revealed unified JMMD, we illustrate that JMMD degrades the feature-label dependence that benefits classification.
We propose a novel MMD matrix to promote the dependence, and devise a novel label kernel that is robust to label distribution shift.
arXiv Detail & Related papers (2021-01-25T09:46:14Z) - Fast and Private Submodular and $k$-Submodular Functions Maximization
with Matroid Constraints [27.070004659301162]
We study the problem of maximizing monotone submodular functions subject to matroid constraints in the framework of differential privacy.
We give the first $\frac{1}{2}$-approximation algorithm that preserves differential privacy for maximizing $k$-submodular functions subject to matroid constraints.
arXiv Detail & Related papers (2020-06-28T23:18:58Z) - Non-convex Min-Max Optimization: Applications, Challenges, and Recent
Theoretical Advances [58.54078318403909]
The min-max problem, also known as the saddle point problem, is a class of adversarial problems which is also studied in the context of zero-sum games.
arXiv Detail & Related papers (2020-06-15T05:33:42Z) - Submodular Bandit Problem Under Multiple Constraints [8.100450025624443]
We introduce a submodular bandit problem under the intersection of $l$ knapsacks and a $k$-system constraint.
To solve this problem, we propose a non-greedy algorithm that adaptively focuses on a standard or modified upper-confidence bound.
We provide a high-probability upper bound of an approximation regret, where the approximation ratio matches that of a fast algorithm.
arXiv Detail & Related papers (2020-06-01T01:28:44Z)