Distributionally Robust Survival Analysis: A Novel Fairness Loss Without Demographics
- URL: http://arxiv.org/abs/2211.10508v1
- Date: Fri, 18 Nov 2022 20:54:34 GMT
- Title: Distributionally Robust Survival Analysis: A Novel Fairness Loss Without Demographics
- Authors: Shu Hu, George H. Chen
- Abstract summary: We propose a general approach for training survival analysis models that minimizes a worst-case error across all subpopulations.
This approach uses a training loss function that is not told which demographic information to treat as sensitive.
- Score: 17.945141391585487
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a general approach for training survival analysis models that
minimizes a worst-case error across all subpopulations that are large enough
(occurring with at least a user-specified minimum probability). This approach
uses a training loss function that does not know any demographic information to
treat as sensitive. Despite this, we demonstrate that our proposed approach
often scores better on recently established fairness metrics (without a
significant drop in prediction accuracy) compared to various baselines,
including ones which directly use sensitive demographic information in their
training loss. Our code is available at: https://github.com/discovershu/DRO_COX
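The worst-case error over all subpopulations occurring with probability at least a user-specified level alpha can be written as the conditional value-at-risk (CVaR) of the per-example loss at level alpha. Below is a minimal sketch of such a loss, assuming per-example losses from any survival model (e.g., per-example Cox losses); the function name and the top-k formulation are illustrative, not taken from the DRO_COX repository.

```python
# Minimal CVaR-style worst-case training loss; illustrative, not DRO_COX code.
import torch

def worst_case_loss(per_sample_losses: torch.Tensor, alpha: float) -> torch.Tensor:
    """Average loss over the worst-off alpha-fraction of the batch.

    This is the (empirical) CVaR of the loss at level alpha: it upper-bounds
    the average loss of any subpopulation occurring with probability >= alpha,
    so no demographic labels are needed during training.
    """
    k = max(1, int(alpha * per_sample_losses.numel()))  # worst-off group size
    worst, _ = torch.topk(per_sample_losses, k)         # k largest losses
    return worst.mean()
```

In a standard training loop this would be used as `worst_case_loss(per_example_losses, alpha=0.2).backward()`, with alpha playing the role of the user-specified minimum subpopulation probability from the abstract.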
Related papers
- Alpha and Prejudice: Improving $α$-sized Worst-case Fairness via Intrinsic Reweighting [34.954141077528334]
Worst-case fairness with off-the-shelf demographic groups achieves parity by maximizing the utility of the worst-off group.
Recent advances reframe this learning problem by introducing a lower bound on the minimal partition ratio.
arXiv Detail & Related papers (2024-11-05T13:04:05Z)
- Towards Harmless Rawlsian Fairness Regardless of Demographic Prior [57.30787578956235]
We explore achieving fairness without compromising utility when no demographic information is provided with the training set.
We propose a simple but effective method, VFair, which minimizes the variance of training losses within the optimal set of empirical losses (a sketch of this variance penalty appears after this list).
arXiv Detail & Related papers (2024-11-04T12:40:34Z)
- Fairness in Survival Analysis with Distributionally Robust Optimization [13.159777131162965]
We propose a general approach for encouraging fairness in survival analysis models based on minimizing a worst-case error across all subpopulations.
This approach can be used to convert many existing survival analysis models into ones that simultaneously encourage fairness.
arXiv Detail & Related papers (2024-08-31T15:03:20Z)
- Rejection via Learning Density Ratios [50.91522897152437]
Classification with rejection emerges as a learning paradigm which allows models to abstain from making predictions.
We propose a different distributional perspective, where we seek to find an idealized data distribution which maximizes a pretrained model's performance.
Our framework is tested empirically over clean and noisy datasets.
arXiv Detail & Related papers (2024-05-29T01:32:17Z)
- Fairness Without Harm: An Influence-Guided Active Sampling Approach [32.173195437797766]
We aim to train models that mitigate group fairness disparity without causing harm to model accuracy.
Current data acquisition methods, such as fair active learning approaches, typically require annotating sensitive attributes.
We propose a tractable active data sampling algorithm that does not rely on training group annotations.
arXiv Detail & Related papers (2024-02-20T07:57:38Z)
- Canary in a Coalmine: Better Membership Inference with Ensembled Adversarial Queries [53.222218035435006]
We use adversarial tools to optimize for queries that are discriminative and diverse.
Our improvements achieve significantly more accurate membership inference than existing methods.
arXiv Detail & Related papers (2022-10-19T17:46:50Z)
- RelaxLoss: Defending Membership Inference Attacks without Losing Utility [68.48117818874155]
We propose a novel training framework based on a relaxed loss with a more achievable learning target.
RelaxLoss is applicable to any classification model with added benefits of easy implementation and negligible overhead.
Our approach consistently outperforms state-of-the-art defense mechanisms in terms of resilience against MIAs.
arXiv Detail & Related papers (2022-07-12T19:34:47Z)
- KL Guided Domain Adaptation [88.19298405363452]
Domain adaptation is an important problem and often needed for real-world applications.
A common approach in the domain adaptation literature is to learn a representation of the input that has the same distribution across the source and target domains.
We show that with a probabilistic representation network, the KL term can be estimated efficiently from minibatch samples (see the minibatch KL sketch after this list).
arXiv Detail & Related papers (2021-06-14T22:24:23Z)
- Learning Diverse Representations for Fast Adaptation to Distribution Shift [78.83747601814669]
We present a method for learning multiple models, incorporating an objective that pressures each to learn a distinct way to solve the task (a diversity-penalty sketch in this spirit follows the list).
We demonstrate our framework's ability to facilitate rapid adaptation to distribution shift.
arXiv Detail & Related papers (2020-06-12T12:23:50Z)
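For the VFair entry above, a hedged sketch of the variance-minimization idea: keep the usual empirical risk low while penalizing the spread of per-example losses, so no subgroup's loss drifts far from the mean. The function name and the fixed weight `lam` are illustrative assumptions; the paper's exact update within the optimal set differs.

```python
# Illustrative variance-penalized objective in the spirit of VFair.
# Assumes PyTorch; names are hypothetical, not the paper's API.
import torch

def variance_penalized_loss(per_sample_losses: torch.Tensor, lam: float = 1.0) -> torch.Tensor:
    """Empirical risk plus a penalty on the variance of per-example losses."""
    mean_loss = per_sample_losses.mean()
    spread = per_sample_losses.var(unbiased=False)  # how unevenly examples are served
    return mean_loss + lam * spread
```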
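For the KL Guided Domain Adaptation entry, a rough sketch of estimating the KL alignment term from minibatches, assuming an encoder that outputs a diagonal-Gaussian q(z|x) per input. Each domain's marginal is approximated by a uniform mixture of the batch's conditionals; all names are illustrative, not the paper's code.

```python
# Minibatch KL estimate between source and target representation marginals.
# Assumes PyTorch and a probabilistic (Gaussian) encoder; illustrative only.
import math
import torch

def log_mixture_density(z, mu, logvar):
    """log of a uniform mixture of diagonal Gaussians N(mu_j, exp(logvar_j))
    evaluated at each z_i; shapes: z (n, d), mu and logvar (m, d)."""
    diff = z.unsqueeze(1) - mu.unsqueeze(0)                      # (n, m, d)
    log_p = -0.5 * ((diff ** 2) / logvar.exp().unsqueeze(0)
                    + logvar.unsqueeze(0)
                    + math.log(2 * math.pi)).sum(-1)             # (n, m)
    return torch.logsumexp(log_p, dim=1) - math.log(mu.shape[0])

def kl_alignment(z_src, mu_src, logvar_src, mu_tgt, logvar_tgt):
    """Monte Carlo estimate of KL(q_source(z) || q_target(z)) from samples
    z_src drawn from the source encoder's conditionals."""
    return (log_mixture_density(z_src, mu_src, logvar_src)
            - log_mixture_density(z_src, mu_tgt, logvar_tgt)).mean()
```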
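For the Learning Diverse Representations entry, an illustrative diversity penalty: train several models on the same task while penalizing agreement between their features, so each finds a distinct solution. The squared-cosine-similarity choice is an assumption, not the paper's exact objective.

```python
# Pairwise feature-diversity penalty for an ensemble of models.
# Assumes PyTorch; the specific similarity measure is an assumption.
import torch
import torch.nn.functional as F

def diversity_penalty(features: list) -> torch.Tensor:
    """Mean squared cosine similarity between all pairs of models' features;
    minimizing it pressures each model toward a distinct representation."""
    total, pairs = torch.zeros(()), 0
    for i in range(len(features)):
        for j in range(i + 1, len(features)):
            cos = F.cosine_similarity(features[i], features[j], dim=-1)
            total = total + (cos ** 2).mean()
            pairs += 1
    return total / max(pairs, 1)

# Per-model objective: task_loss_i + beta * diversity_penalty(all_features)
```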
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.