FairLRF: Achieving Fairness through Sparse Low Rank Factorization
- URL: http://arxiv.org/abs/2511.16549v1
- Date: Thu, 20 Nov 2025 17:01:52 GMT
- Title: FairLRF: Achieving Fairness through Sparse Low Rank Factorization
- Authors: Yuanbo Guo, Jun Xia, Yiyu Shi
- Abstract summary: We propose a fairness-oriented low rank factorization (LRF) framework that leverages singular value decomposition (SVD) to improve model fairness. We show that our method outperforms conventional LRF methods as well as state-of-the-art fairness-enhancing techniques.
- Score: 7.588768993519323
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As deep learning (DL) techniques become integral to various applications, ensuring model fairness while maintaining high performance has become increasingly critical, particularly in sensitive fields such as medical diagnosis. Although a variety of bias-mitigation methods have been proposed, many rely on computationally expensive debiasing strategies or suffer substantial drops in model accuracy, which limits their practicality in real-world, resource-constrained settings. To address this issue, we propose a fairness-oriented low rank factorization (LRF) framework that leverages singular value decomposition (SVD) to improve DL model fairness. Unlike traditional SVD, which is mainly used for model compression by decomposing and reducing weight matrices, our work shows that SVD can also serve as an effective tool for fairness enhancement. Specifically, we observed that elements in the unitary matrices obtained from SVD contribute unequally to model bias across groups defined by sensitive attributes. Motivated by this observation, we propose a method, named FairLRF, that selectively removes bias-inducing elements from unitary matrices to reduce group disparities, thus enhancing model fairness. Extensive experiments show that our method outperforms conventional LRF methods as well as state-of-the-art fairness-enhancing techniques. Additionally, an ablation study examines how major hyper-parameters may influence the performance of processed models. To the best of our knowledge, this is the first work utilizing SVD not primarily for compression but for fairness enhancement.
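To make the abstract's idea concrete, here is a minimal NumPy sketch of the two factorization modes it contrasts: conventional rank-k truncation, and a FairLRF-style variant that additionally zeroes the entries of the unitary factors carrying the highest bias scores. The score arrays and the `drop_frac` parameter are illustrative assumptions, not the paper's exact procedure; the paper derives its element-level bias scores from group disparities measured over a sensitive attribute.

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Conventional LRF: keep only the top-`rank` singular triplets of W."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :rank], S[:rank], Vt[:rank, :]

def sparse_fair_factorize(W, rank, bias_scores_U, bias_scores_V, drop_frac=0.05):
    """Sketch of the FairLRF idea: after rank truncation, zero out the
    fraction `drop_frac` of entries in the unitary factors with the
    highest bias scores (here, stand-in arrays of the same shapes)."""
    U, S, Vt = low_rank_factorize(W, rank)
    for M, scores in ((U, bias_scores_U), (Vt, bias_scores_V)):
        k = int(drop_frac * M.size)
        if k > 0:
            # Indices of the k highest-scoring (most bias-inducing) entries.
            idx = np.unravel_index(np.argsort(scores, axis=None)[-k:], M.shape)
            M[idx] = 0.0
    return U @ np.diag(S) @ Vt  # reconstructed, debiased weight matrix

# Usage with random weights and random stand-in bias scores.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32))
sU = rng.random((64, 16))
sV = rng.random((16, 32))
W_fair = sparse_fair_factorize(W, rank=16, bias_scores_U=sU, bias_scores_V=sV)
```

Note the contrast with plain truncation: conventional LRF drops whole singular directions, whereas this variant keeps the rank but sparsifies individual entries of the factors.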
Related papers
- Benchmarking Bias Mitigation Toward Fairness Without Harm from Vision to LVLMs [14.88523903012028]
Machine learning models trained on real-world data often inherit and amplify biases against certain social groups. We introduce NH-Fair, a unified benchmark for fairness without harm under standardized data, metrics, and training protocols.
arXiv Detail & Related papers (2026-02-03T08:37:37Z) - Data-regularized Reinforcement Learning for Diffusion Models at Scale [99.01056178660538]
We introduce Data-regularized Diffusion Reinforcement Learning (DDRL), a novel framework that uses the forward KL divergence to anchor the policy to an off-policy data distribution. With over a million GPU hours of experiments and ten thousand double-blind evaluations, we demonstrate that DDRL significantly improves rewards while alleviating the reward hacking seen in RL.
arXiv Detail & Related papers (2025-12-03T23:45:07Z) - Decoupled DMD: CFG Augmentation as the Spear, Distribution Matching as the Shield [54.328202401611264]
Diffusion model distillation has emerged as a powerful technique for creating efficient few-step and single-step generators. We show that the primary driver of few-step distillation is not distribution matching, but a previously overlooked component we identify as CFG Augmentation (CA). We propose principled modifications to the distillation process, such as decoupling the noise schedules for the engine and the regularizer, leading to further performance gains.
arXiv Detail & Related papers (2025-11-27T18:24:28Z) - Did Models Sufficient Learn? Attribution-Guided Training via Subset-Selected Counterfactual Augmentation [61.248535801314375]
We propose Subset-Selected Counterfactual Augmentation (SS-CA) and develop Counterfactual LIMA to identify minimal spatial region sets whose removal can selectively alter model predictions. Experiments show that SS-CA improves generalization on in-distribution (ID) test data and achieves superior performance on out-of-distribution (OOD) benchmarks.
arXiv Detail & Related papers (2025-11-15T08:39:22Z) - Denoising Score Distillation: From Noisy Diffusion Pretraining to One-Step High-Quality Generation [82.39763984380625]
We introduce denoising score distillation (DSD), a surprisingly effective and novel approach for training high-quality generative models from low-quality data. DSD pretrains a diffusion model exclusively on noisy, corrupted samples and then distills it into a one-step generator capable of producing refined, clean outputs.
arXiv Detail & Related papers (2025-03-10T17:44:46Z) - Fairness-Aware Low-Rank Adaptation Under Demographic Privacy Constraints [4.647881572951815]
Pre-trained foundation models can be adapted for specific tasks using Low-Rank Adaptation (LoRA). Existing fairness-aware fine-tuning methods rely on direct access to sensitive attributes or their predictors. We introduce a set of LoRA-based fine-tuning methods that can be trained in a distributed fashion.
arXiv Detail & Related papers (2025-03-07T18:49:57Z) - BMFT: Achieving Fairness via Bias-based Weight Masking Fine-tuning [17.857930204697983]
Bias-based Weight Masking Fine-Tuning (BMFT) is a novel post-processing method that enhances the fairness of a trained model in significantly fewer epochs.
BMFT produces a mask over model parameters, which efficiently identifies the weights contributing the most towards biased predictions.
Experiments across four dermatological datasets and two sensitive attributes demonstrate that BMFT outperforms existing state-of-the-art (SOTA) techniques in both diagnostic accuracy and fairness metrics.
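The weight-masking idea in BMFT can be sketched as follows. This is a simplified stand-in, not the paper's method: the `bias_score` array and the fixed masking fraction are assumptions; BMFT computes its mask from the weights' contribution to biased predictions and then fine-tunes.

```python
import numpy as np

def mask_biased_weights(W, bias_score, frac=0.1):
    """Zero out the `frac` of weights with the largest (hypothetical)
    bias scores; all remaining weights are kept unchanged."""
    thresh = np.quantile(bias_score, 1.0 - frac)
    mask = bias_score < thresh
    return W * mask

# Usage with random weights and random stand-in scores.
rng = np.random.default_rng(2)
W = rng.standard_normal((10, 10))
scores = rng.random((10, 10))
W_masked = mask_biased_weights(W, scores)
```

In the actual method, fine-tuning would then resume with the masked weights so the network can recover accuracy while the bias-inducing parameters stay suppressed.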
arXiv Detail & Related papers (2024-08-13T13:36:48Z) - Spectrum-Aware Parameter Efficient Fine-Tuning for Diffusion Models [73.88009808326387]
We propose a novel spectrum-aware adaptation framework for generative models.
Our method adjusts both singular values and their basis vectors of pretrained weights.
We introduce Spectral Ortho Decomposition Adaptation (SODA), which balances computational efficiency and representation capacity.
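The spectrum-aware adaptation described above can be sketched minimally: factorize a pretrained weight with SVD and fine-tune only a correction to its singular values. This is a simplified stand-in for SODA; the function name `spectral_update` and the additive parameterization are assumptions (SODA also adapts the basis vectors).

```python
import numpy as np

def spectral_update(W, delta_s):
    """Perturb only the spectrum of W: W' = U diag(S + delta_s) V^T."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(S + delta_s) @ Vt

# Usage: shift every singular value of a random weight by 0.1.
rng = np.random.default_rng(1)
W = rng.standard_normal((8, 8))
W_new = spectral_update(W, delta_s=np.full(8, 0.1))
```

Because only the diagonal spectrum is trainable here, the number of adapted parameters is linear in min(m, n) rather than quadratic, which is the efficiency argument behind spectrum-aware methods.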
arXiv Detail & Related papers (2024-05-31T17:43:35Z) - Low-rank finetuning for LLMs: A fairness perspective [54.13240282850982]
Low-rank approximation techniques have become the de facto standard for fine-tuning Large Language Models.
This paper investigates the effectiveness of these methods in capturing the shift of fine-tuning datasets from the initial pre-trained data distribution.
We show that low-rank fine-tuning inadvertently preserves undesirable biases and toxic behaviors.
arXiv Detail & Related papers (2024-05-28T20:43:53Z) - Toward Fair Facial Expression Recognition with Improved Distribution Alignment [19.442685015494316]
We present a novel approach to mitigate bias in facial expression recognition (FER) models.
Our method aims to reduce sensitive attribute information such as gender, age, or race, in the embeddings produced by FER models.
For the first time, we analyze the notion of attractiveness as an important sensitive attribute in FER models and demonstrate that FER models can indeed exhibit biases towards more attractive faces.
arXiv Detail & Related papers (2023-06-11T14:59:20Z) - FairIF: Boosting Fairness in Deep Learning via Influence Functions with Validation Set Sensitive Attributes [51.02407217197623]
We propose a two-stage training algorithm named FAIRIF.
It minimizes the loss over the reweighted data set, where the sample weights are computed using influence functions on a validation set with sensitive attributes.
We show that FAIRIF yields models with better fairness-utility trade-offs against various types of bias.
arXiv Detail & Related papers (2022-01-15T05:14:48Z) - Counterfactual fairness: removing direct effects through regularization [0.0]
We propose a new definition of fairness that incorporates causality through the Controlled Direct Effect (CDE).
We develop regularizations to tackle classical fairness measures and present a causal regularization that satisfies our new fairness definition.
Our approach was found to mitigate unfairness in the predictions with only small reductions in model performance.
arXiv Detail & Related papers (2020-02-25T10:13:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all content) and is not responsible for any consequences of its use.