Fair CCA for Fair Representation Learning: An ADNI Study
- URL: http://arxiv.org/abs/2507.09382v1
- Date: Sat, 12 Jul 2025 19:36:10 GMT
- Title: Fair CCA for Fair Representation Learning: An ADNI Study
- Authors: Bojian Hou, Zhanliang Wang, Zhuoping Zhou, Boning Tong, Zexuan Wang, Jingxuan Bao, Duy Duong-Tran, Qi Long, Li Shen
- Abstract summary: We propose a novel fair canonical correlation analysis (CCA) method for fair representation learning. We validate our method on synthetic data and real-world data from the Alzheimer's Disease Neuroimaging Initiative (ADNI). Our work enables fair machine learning in neuroimaging studies where unbiased analysis is essential.
- Score: 9.49522932877178
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Canonical correlation analysis (CCA) is a technique for finding correlations between different data modalities and learning low-dimensional representations. As fairness becomes crucial in machine learning, fair CCA has gained attention. However, previous approaches often overlook the impact on downstream classification tasks, limiting applicability. We propose a novel fair CCA method for fair representation learning, ensuring the projected features are independent of sensitive attributes, thus enhancing fairness without compromising accuracy. We validate our method on synthetic data and real-world data from the Alzheimer's Disease Neuroimaging Initiative (ADNI), demonstrating its ability to maintain high correlation analysis performance while improving fairness in classification tasks. Our work enables fair machine learning in neuroimaging studies where unbiased analysis is essential.
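The abstract states the goal, projections independent of sensitive attributes, but not the construction. Below is a minimal, illustrative sketch of one common way to approximate that goal: residualize both data views against the sensitive attribute, then run plain linear CCA on the residuals. All names are ours and this is an assumed baseline, not the paper's algorithm.

```python
# Illustrative sketch (not the paper's method): residualize both views
# against the sensitive attribute, then run standard linear CCA, so the
# learned projections carry no linear trace of the attribute.
import numpy as np

def residualize(X, s):
    # Remove the component of X linearly explained by s (shape (n,)).
    S = np.column_stack([np.ones(len(s)), s])
    beta, *_ = np.linalg.lstsq(S, X, rcond=None)
    return X - S @ beta

def _inv_sqrt(C):
    # Inverse matrix square root of a symmetric positive-definite matrix.
    w, V = np.linalg.eigh(C)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def cca(X, Y, k=2, eps=1e-6):
    # Plain linear CCA via SVD of the whitened cross-covariance matrix.
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    n = len(X)
    Cxx = Xc.T @ Xc / n + eps * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / n + eps * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / n
    U, sv, Vt = np.linalg.svd(_inv_sqrt(Cxx) @ Cxy @ _inv_sqrt(Cyy))
    Wx, Wy = _inv_sqrt(Cxx) @ U[:, :k], _inv_sqrt(Cyy) @ Vt[:k].T
    return Wx, Wy, sv[:k]      # canonical weights and correlations

rng = np.random.default_rng(0)
s = rng.integers(0, 2, 500).astype(float)    # binary sensitive attribute
X = rng.normal(size=(500, 10)) + s[:, None]  # both views leak s
Y = rng.normal(size=(500, 8)) + s[:, None]
Wx, Wy, corrs = cca(residualize(X, s), residualize(Y, s), k=2)
```

Residualization removes only linear dependence on the attribute; the paper likely enforces a stronger independence criterion, which the abstract does not detail.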
Related papers
- GFLC: Graph-based Fairness-aware Label Correction for Fair Classification [0.03683202928838613]
Graph-based Fairness-aware Label Correction (GFLC) is an efficient method for correcting label noise while preserving demographic parity in datasets. Our approach combines three key components: a prediction confidence measure, graph-based regularization through Ricci-flow-optimized graph Laplacians, and explicit demographic parity incentives (a minimal demographic-parity metric is sketched after this list).
arXiv Detail & Related papers (2025-06-18T16:51:26Z) - Fair Sufficient Representation Learning [2.308168896770315]
We introduce a Fair Sufficient Representation Learning (FSRL) method that balances sufficiency and fairness. FSRL is based on a convex combination of an objective function for learning a sufficient representation and an objective function that ensures fairness (see the convex-combination sketch after this list). Our approach manages fairness and sufficiency at the representation level, offering a novel perspective on fair representation learning.
arXiv Detail & Related papers (2025-03-29T10:37:49Z) - Fairness meets Cross-Domain Learning: a new perspective on Models and Metrics [80.07271410743806]
We study the relationship between cross-domain learning (CD) and model fairness.
We introduce a benchmark on face and medical images spanning several demographic groups as well as classification and localization tasks.
Our study covers 14 CD approaches alongside three state-of-the-art fairness algorithms and shows how the former can outperform the latter.
arXiv Detail & Related papers (2023-03-25T09:34:05Z) - Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert spaces (a related kernel dependence measure is sketched after this list).
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
arXiv Detail & Related papers (2022-11-11T11:28:46Z) - Detecting Shortcut Learning for Fair Medical AI using Shortcut Testing [62.9062883851246]
Machine learning holds great promise for improving healthcare, but it is critical to ensure that its use will not propagate or amplify health disparities.
One potential driver of algorithmic unfairness, shortcut learning, arises when ML models base predictions on improper correlations in the training data.
Using multi-task learning, we propose the first method to assess and mitigate shortcut learning as a part of the fairness assessment of clinical ML systems.
arXiv Detail & Related papers (2022-07-21T09:35:38Z) - You Can Still Achieve Fairness Without Sensitive Attributes: Exploring Biases in Non-Sensitive Features [29.94644351343916]
We propose a novel framework that simultaneously uses related non-sensitive features for accurate prediction and regularizes the model to be fair.
Experimental results on real-world datasets demonstrate the effectiveness of the proposed model.
arXiv Detail & Related papers (2021-04-29T17:52:11Z) - Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z) - Through the Data Management Lens: Experimental Analysis and Evaluation of Fair Classification [75.49600684537117]
Data management research is showing an increasing presence and interest in topics related to data and algorithmic fairness.
We contribute a broad analysis of 13 fair classification approaches and additional variants, over their correctness, fairness, efficiency, scalability, and stability.
Our analysis highlights novel insights on the impact of different metrics and high-level approach characteristics on different aspects of performance.
arXiv Detail & Related papers (2021-01-18T22:55:40Z) - FAIR: Fair Adversarial Instance Re-weighting [0.7829352305480285]
We propose a Fair Adversarial Instance Re-weighting (FAIR) method, which uses adversarial training to learn an instance weighting function that ensures fair predictions.
To the best of our knowledge, this is the first model that merges reweighting and adversarial approaches by means of a weighting function that can provide interpretable information about the fairness of individual instances.
arXiv Detail & Related papers (2020-11-15T10:48:56Z) - Causal Feature Selection for Algorithmic Fairness [61.767399505764736]
We consider fairness in the integration component of data management.
We propose an approach to identify a sub-collection of features that ensure the fairness of the dataset.
arXiv Detail & Related papers (2020-06-10T20:20:10Z)
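As referenced in the GFLC entry above, here is a minimal sketch of the standard demographic-parity gap for a binary sensitive attribute; the metric is standard, the helper name is ours:

```python
# Demographic-parity gap: absolute difference in positive-prediction
# rates between the two groups defined by a binary sensitive attribute.
# A gap of 0 means both groups receive positive predictions equally often.
import numpy as np

def demographic_parity_gap(y_pred, s):
    y_pred, s = np.asarray(y_pred), np.asarray(s)
    return abs(y_pred[s == 0].mean() - y_pred[s == 1].mean())

# Positive rates 0.8 (group 0) vs 0.4 (group 1) -> gap of 0.4.
print(demographic_parity_gap([1, 1, 1, 0, 1, 0, 1, 0, 0, 1],
                             [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]))
```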
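The FSRL entry describes a convex combination of a sufficiency objective and a fairness objective. A hedged sketch of that combination follows; the component losses (logistic cross-entropy and a soft parity gap) are our placeholders, not the losses used in the paper:

```python
# Convex combination of a sufficiency (task) loss and a fairness loss,
# in the spirit of the FSRL abstract. alpha in [0, 1] trades them off;
# the concrete component losses below are illustrative only.
import numpy as np

def fsrl_style_objective(theta, X, y, s, alpha=0.7):
    p = 1.0 / (1.0 + np.exp(-X @ theta))             # model scores in (0, 1)
    task = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    fair = abs(p[s == 0].mean() - p[s == 1].mean())  # soft parity gap
    return alpha * task + (1.0 - alpha) * fair
```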
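FairCOCCO itself is not reproduced here. As a simpler member of the same family of RKHS cross-covariance dependence measures referenced in that entry, below is the classic biased HSIC estimator (the squared Hilbert-Schmidt norm of the cross-covariance operator), which is near zero when predictions and the sensitive attribute are independent:

```python
# Biased HSIC estimator: trace(K H L H) / (n - 1)^2 with RBF Gram
# matrices K, L and centering matrix H. Larger values indicate stronger
# (kernel) dependence between predictions and the sensitive attribute.
import numpy as np

def rbf_gram(x, gamma=1.0):
    # RBF kernel Gram matrix for row-wise samples x of shape (n, d).
    sq = np.sum(x**2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * x @ x.T))

def hsic(x, z, gamma=1.0):
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n              # centering matrix
    return np.trace(rbf_gram(x, gamma) @ H @ rbf_gram(z, gamma) @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
s = rng.integers(0, 2, 200).astype(float).reshape(-1, 1)
leaky = 0.8 * s + 0.2 * rng.normal(size=(200, 1))    # predictions leak s
clean = rng.normal(size=(200, 1))                    # independent of s
print(hsic(leaky, s), ">", hsic(clean, s))
```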
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.