Towards Stable Preferences for Stakeholder-aligned Machine Learning
- URL: http://arxiv.org/abs/2401.15268v2
- Date: Fri, 2 Feb 2024 20:39:57 GMT
- Title: Towards Stable Preferences for Stakeholder-aligned Machine Learning
- Authors: Haleema Sheraz, Stefan C. Kremer, Joshua August Skorburg, Graham
Taylor, Walter Sinnott-Armstrong, Kyle Boerstler
- Abstract summary: The primary objective of this study is to create a method for learning both individual and group-level preferences pertaining to kidney allocations.
By incorporating stakeholder preferences into the kidney allocation process, we aspire to advance the ethical dimensions of organ transplantation.
- Score: 0.48533995158972176
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In response to the pressing challenge of kidney allocation, characterized by
growing demands for organs, this research sets out to develop a data-driven
solution to this problem, which also incorporates stakeholder values. The
primary objective of this study is to create a method for learning both
individual and group-level preferences pertaining to kidney allocations.
Drawing upon data from the 'Pairwise Kidney Patient Online Survey', we leverage
two distinct datasets and evaluate across three levels - Individual, Group,
and Stability - employing machine learning classifiers assessed through several
metrics. The Individual level model predicts individual participant
preferences, the Group level model aggregates preferences across participants,
and the Stability level model, an extension of the Group level, evaluates the
stability of these preferences over time. By incorporating stakeholder
preferences into the kidney allocation process, we aspire to advance the
ethical dimensions of organ transplantation, contributing to more transparent
and equitable practices while promoting the integration of moral values into
algorithmic decision-making.
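The abstract gives no implementation details, so the following is only a minimal sketch of the three evaluation levels it describes, under illustrative assumptions: each survey item is encoded as the concatenated features of two hypothetical patient profiles, an off-the-shelf scikit-learn LogisticRegression stands in for the unspecified classifiers, group preferences are aggregated by simple majority vote, and stability is approximated by training on earlier items and testing on later ones. None of these choices are taken from the paper itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy pairwise data: each row concatenates the features of two hypothetical
# kidney-patient profiles (A and B); y = 1 means the participant prefers A.
n_participants, n_pairs, n_features = 5, 200, 6
X = rng.normal(size=(n_participants, n_pairs, 2 * n_features))
y = rng.integers(0, 2, size=(n_participants, n_pairs))

# Individual level: one classifier per participant, predicting that
# participant's own pairwise choices.
individual_models = [
    LogisticRegression(max_iter=1000).fit(X[p], y[p])
    for p in range(n_participants)
]

# Group level: aggregate preferences across participants (here, a simple
# majority vote per pair) and fit a single classifier on the aggregate labels.
X_group = X[0]  # assumes every participant saw the same pairs
y_group = (y.mean(axis=0) >= 0.5).astype(int)
group_model = LogisticRegression(max_iter=1000).fit(X_group, y_group)

# Stability level: an extension of the group level that asks whether the
# aggregated preferences persist over time, simulated here by training on an
# earlier half of the pairs and evaluating on a later half.
half = n_pairs // 2
stability_model = LogisticRegression(max_iter=1000).fit(X_group[:half], y_group[:half])
print("toy stability accuracy:", stability_model.score(X_group[half:], y_group[half:]))
```

On real survey data, the same three-level structure would swap the random arrays for encoded responses from the 'Pairwise Kidney Patient Online Survey' and report the several metrics the abstract mentions (for example, per-participant accuracy, agreement with the majority label, and performance across time points).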
Related papers
- A decision-theoretic model for a principal-agent collaborative learning problem [0.0]
We consider a collaborative learning framework with a principal-agent setting, in which the principal determines a set of appropriate aggregation coefficients.
The proposed framework offers advantages in terms of stability and generalization, even though neither the principal nor the agents necessarily need any knowledge of the sample distributions or the quality of each other's datasets.
arXiv Detail & Related papers (2024-09-24T13:08:51Z)
- Evaluating Fair Feature Selection in Machine Learning for Healthcare [0.9222623206734782]
We explore algorithmic fairness from the perspective of feature selection.
We evaluate a fair feature selection method that gives equal importance to all demographic groups.
We tested our approach on three publicly available healthcare datasets.
arXiv Detail & Related papers (2024-03-28T06:24:04Z)
- Federated Two Stage Decoupling With Adaptive Personalization Layers [5.69361786082969]
Federated learning has gained significant attention due to its ability to enable distributed learning while maintaining privacy constraints.
However, it inherently suffers from significant learning degradation and slow convergence.
It is therefore natural to cluster homogeneous clients into the same group, aggregating model weights only within each group.
arXiv Detail & Related papers (2023-08-30T07:46:32Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- Estimating and Improving Fairness with Adversarial Learning [65.99330614802388]
We propose an adversarial multi-task training strategy to simultaneously mitigate and detect bias in deep learning-based medical image analysis systems.
Specifically, we propose to add a discrimination module against bias and a critical module that predicts unfairness within the base classification model.
We evaluate our framework on a large-scale, publicly available skin lesion dataset.
arXiv Detail & Related papers (2021-03-07T03:10:32Z)
- Recommendations for Bayesian hierarchical model specifications for case-control studies in mental health [0.0]
Researchers must choose whether to assume all subjects are drawn from a common population, or to model them as deriving from separate populations.
We ran systematic simulations on synthetic multi-group behavioural data from a commonly used bandit task.
We found that fitting groups separately provided the most accurate and robust inference across all conditions.
arXiv Detail & Related papers (2020-11-03T14:19:59Z)
- Aligning with Heterogeneous Preferences for Kidney Exchange [7.858296711223292]
We propose a methodology for prioritizing patients based on heterogeneous moral preferences.
We find that this methodology increases the average rank of matched patients in the sampled preference ordering, indicating better satisfaction of group preferences.
arXiv Detail & Related papers (2020-06-16T21:16:53Z)
- Temporal Phenotyping using Deep Predictive Clustering of Disease Progression [97.88605060346455]
We develop a deep learning approach for clustering time-series data, where each cluster comprises patients who share similar future outcomes of interest.
Experiments on two real-world datasets show that our model achieves superior clustering performance over state-of-the-art benchmarks.
arXiv Detail & Related papers (2020-06-15T20:48:43Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns about whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing ranking fairness and algorithm utility in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
- Causal Feature Selection for Algorithmic Fairness [61.767399505764736]
We consider fairness in the integration component of data management.
We propose an approach to identify a sub-collection of features that ensures the fairness of the dataset.
arXiv Detail & Related papers (2020-06-10T20:20:10Z) - Predictive Modeling of ICU Healthcare-Associated Infections from
Imbalanced Data. Using Ensembles and a Clustering-Based Undersampling
Approach [55.41644538483948]
This work is focused on both the identification of risk factors and the prediction of healthcare-associated infections in intensive-care units.
The aim is to support decision making directed at reducing the incidence rate of these infections.
arXiv Detail & Related papers (2020-05-07T16:13:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.