When Collaborative Filtering is not Collaborative: Unfairness of PCA for Recommendations
- URL: http://arxiv.org/abs/2310.09687v2
- Date: Wed, 28 May 2025 20:01:13 GMT
- Title: When Collaborative Filtering is not Collaborative: Unfairness of PCA for Recommendations
- Authors: David Liu, Jackie Baek, Tina Eliassi-Rad
- Abstract summary: We focus on the fundamental method of principal component analysis (PCA). We identify two underlying popularity mechanisms that induce item unfairness in PCA. We develop an algorithm, Item-Weighted PCA, that flexibly up-weights less popular items when optimizing for leading principal components.
- Score: 6.41560541899312
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the fairness of dimensionality reduction methods for recommendations. We focus on the fundamental method of principal component analysis (PCA), which identifies latent components and produces a low-rank approximation via the leading components while discarding the trailing components. Prior works have defined notions of "fair PCA"; however, these definitions do not answer the following question: why is PCA unfair? We identify two underlying popularity mechanisms that induce item unfairness in PCA. The first negatively impacts less popular items because less popular items rely on trailing latent components to recover their values. The second negatively impacts highly popular items, since the leading PCA components specialize in individual popular items instead of capturing similarities between items. To address these issues, we develop a polynomial-time algorithm, Item-Weighted PCA, that flexibly up-weights less popular items when optimizing for leading principal components. We theoretically show that PCA, in all cases, and Normalized PCA, in cases of block-diagonal matrices, are instances of Item-Weighted PCA. We empirically show that there exist datasets for which Item-Weighted PCA yields the optimal solution while the baselines do not. In contrast to past dimensionality reduction re-weighting techniques, Item-Weighted PCA solves a convex optimization problem and enforces a hard rank constraint. Our evaluations on real-world datasets show that Item-Weighted PCA not only mitigates both unfairness mechanisms, but also produces recommendations that outperform those of PCA baselines.
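To make the up-weighting idea concrete, here is a minimal sketch in Python. It is not the paper's algorithm: Item-Weighted PCA as described solves a convex program with a hard rank constraint, whereas this heuristic simply rescales item columns by an assumed inverse-popularity weight before a truncated eigendecomposition. All names and the weighting scheme are illustrative assumptions.

```python
# A minimal sketch of item up-weighting before PCA; the weighting rule
# (inverse popularity) is an assumption, not the paper's convex program.
import numpy as np

def item_weighted_pca(X, k):
    """X: (num_users, num_items) interaction matrix; returns a rank-k approximation."""
    popularity = (X != 0).sum(axis=0).astype(float)   # interactions per item
    w = 1.0 / np.sqrt(popularity + 1.0)               # up-weight less popular items
    Xw = X * w                                        # rescale item columns
    cov = Xw.T @ Xw                                   # weighted item-item covariance
    _, eigvecs = np.linalg.eigh(cov)                  # eigenvalues in ascending order
    V = eigvecs[:, -k:]                               # top-k principal components
    return (Xw @ V @ V.T) / w                         # reconstruct, then undo the weights

rng = np.random.default_rng(0)
X = rng.poisson(0.3, size=(200, 50)).astype(float)    # synthetic sparse interactions
print(item_weighted_pca(X, k=5).shape)                # (200, 50)
```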
Related papers
- Fair PCA, One Component at a Time [2.5056643038238504]
The Min-Max Fair PCA problem seeks a low-rank representation of multi-group data that is as balanced as possible across groups. Existing approaches to this problem return a rank-$d$ fair subspace, but lack the fundamental containment property of standard PCA. We define fair principal components as directions that minimize the maximum group-wise reconstruction error. This approach preserves the containment property of standard PCA and reduces to standard PCA for data with a single group.
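Below is a hedged sketch of the min-max criterion only; the one-direction-at-a-time search is the paper's contribution and is not reproduced. The snippet evaluates the maximum group-wise reconstruction error that fair components are defined to minimize, here for a standard PCA basis.

```python
# Evaluate the min-max fair PCA objective for a candidate basis V.
import numpy as np

def max_group_error(groups, V):
    """groups: list of (n_g, d) centered arrays; V: (d, k) orthonormal basis."""
    errs = []
    for Xg in groups:
        residual = Xg - Xg @ V @ V.T              # variance the projection discards
        errs.append((residual ** 2).sum() / len(Xg))
    return max(errs)

rng = np.random.default_rng(1)
g1 = rng.normal(size=(100, 6))
g2 = rng.normal(size=(40, 6)) * 2.0               # a higher-variance minority group
pooled = np.vstack([g1, g2])
_, vecs = np.linalg.eigh(pooled.T @ pooled)
V_pca = vecs[:, -2:]                              # standard top-2 PCA basis
print(max_group_error([g1, g2], V_pca))           # the value fair PCA would reduce
```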
arXiv Detail & Related papers (2025-03-27T14:47:27Z)
- Project-Probe-Aggregate: Efficient Fine-Tuning for Group Robustness [53.96714099151378]
We propose a three-step approach for parameter-efficient fine-tuning of image-text foundation models.
Our method improves two key components of group-robust fine-tuning: minority sample identification and the robust training algorithm.
Our theoretical analysis shows that our PPA enhances minority group identification and is Bayes optimal for minimizing the balanced group error.
arXiv Detail & Related papers (2025-03-12T15:46:12Z)
- PIPA: Preference Alignment as Prior-Informed Statistical Estimation [57.24096291517857]
We introduce Prior-Informed Preference Alignment (PIPA), a unified, RL-free probabilistic framework. PIPA accommodates both paired and unpaired data, as well as answer- and step-level annotations. By integrating different types of prior information, we develop two variations of PIPA: PIPA-M and PIPA-N.
arXiv Detail & Related papers (2025-02-09T04:31:30Z)
- Sparse PCA with Oracle Property [115.72363972222622]
We propose a family of estimators based on the semidefinite relaxation of sparse PCA with novel regularizations.
We prove that another estimator within the family achieves a sharper statistical rate of convergence than the standard semidefinite relaxation of sparse PCA.
arXiv Detail & Related papers (2023-12-28T02:52:54Z)
- Machine Learning for the Multi-Dimensional Bin Packing Problem: Literature Review and Empirical Evaluation [52.560375022430236]
The Bin Packing Problem (BPP) is a well-established combinatorial optimization (CO) problem.
In this article, we first formulate BPP, introducing its variants and practical constraints.
Then, a comprehensive survey on machine learning for multi-dimensional BPP is provided.
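As a concrete reference point for readers new to the problem, here is a textbook first-fit-decreasing heuristic for the classical one-dimensional BPP; it is a standard baseline, not a method from the survey itself.

```python
# First-fit decreasing: sort items by size, place each into the first bin
# with enough remaining capacity, opening a new bin when none fits.
def first_fit_decreasing(items, capacity):
    """Pack item sizes into as few bins of the given capacity as possible."""
    free = []      # remaining capacity of each open bin
    packing = []   # items assigned to each bin
    for size in sorted(items, reverse=True):
        for i, slack in enumerate(free):
            if size <= slack:
                free[i] -= size
                packing[i].append(size)
                break
        else:
            free.append(capacity - size)   # open a new bin
            packing.append([size])
    return packing

print(first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10))  # [[8, 2], [4, 4, 1, 1]]
```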
arXiv Detail & Related papers (2023-12-13T12:39:25Z)
- Off-Policy Evaluation for Large Action Spaces via Policy Convolution [60.6953713877886]
The Policy Convolution (PC) family of estimators uses latent structure within actions to strategically convolve the logging and target policies.
Experiments on synthetic and benchmark datasets demonstrate remarkable mean squared error (MSE) improvements when using PC.
arXiv Detail & Related papers (2023-10-24T01:00:01Z)
- Efficient fair PCA for fair representation learning [21.990310743597174]
We propose a conceptually simple approach that allows for an analytic solution similar to standard PCA and can be kernelized.
Our methods have the same complexity as standard PCA, or kernel PCA, and run much faster than existing methods for fair PCA based on semidefinite programming or manifold optimization.
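One analytic construction consistent with this description, stated as an assumption rather than the paper's exact method: force the components to be orthogonal to the group-mean difference direction, which reduces the problem to standard PCA in a projected space and costs no more than one extra matrix product.

```python
# Fair PCA sketch for two groups via null-space projection (an assumed
# construction): components are orthogonal to the group-mean difference.
import numpy as np

def fair_pca_two_groups(X, group_mask, k):
    """X: (n, d) centered data; group_mask: boolean (n,); returns (d, k) components."""
    diff = X[group_mask].mean(axis=0) - X[~group_mask].mean(axis=0)
    u = diff / np.linalg.norm(diff)
    P = np.eye(X.shape[1]) - np.outer(u, u)   # project out the group direction
    Xp = X @ P
    _, vecs = np.linalg.eigh(Xp.T @ Xp)
    return vecs[:, -k:]                       # top-k components, orthogonal to u
```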
arXiv Detail & Related papers (2023-02-26T13:34:43Z)
- A novel approach for Fair Principal Component Analysis based on eigendecomposition [10.203602318836444]
We propose a novel PCA algorithm which tackles fairness issues by means of a simple strategy comprising a one-dimensional search.
Our findings are consistent across several real-world settings, including scenarios with both unbalanced and balanced datasets.
arXiv Detail & Related papers (2022-08-24T08:20:16Z)
- Sparse PCA on fixed-rank matrices [0.05076419064097732]
We show that, if the rank of the covariance matrix is a fixed value, then there is an algorithm that solves sparse PCA to global optimality.
We also prove a similar result for the version of sparse PCA which requires the principal components to have disjoint supports.
arXiv Detail & Related papers (2022-01-07T15:05:32Z)
- AgFlow: Fast Model Selection of Penalized PCA via Implicit Regularization Effects of Gradient Flow [64.81110234990888]
Principal component analysis (PCA) has been widely used as an effective technique for feature extraction and dimension reduction.
In the High Dimension Low Sample Size (HDLSS) setting, one may prefer modified principal components with penalized loadings.
We propose Approximated Gradient Flow (AgFlow) as a fast model selection method for penalized PCA.
arXiv Detail & Related papers (2021-10-07T08:57:46Z)
- FAST-PCA: A Fast and Exact Algorithm for Distributed Principal Component Analysis [12.91948651812873]
Principal Component Analysis (PCA) is a fundamental data preprocessing tool in the world of machine learning.
This paper proposes a distributed PCA algorithm called FAST-PCA (Fast and exAct diSTributed PCA).
arXiv Detail & Related papers (2021-08-27T16:10:59Z)
- Rule-based Shielding for Partially Observable Monte-Carlo Planning [78.05638156687343]
We propose two contributions to Partially Observable Monte-Carlo Planning (POMCP).
The first is a method for identifying unexpected actions selected by POMCP with respect to expert prior knowledge of the task.
The second is a shielding approach that prevents POMCP from selecting unexpected actions.
We evaluate our approach on Tiger, a standard benchmark for POMDPs, and a real-world problem related to velocity regulation in mobile robot navigation.
arXiv Detail & Related papers (2021-04-28T14:23:38Z)
- Enhanced Principal Component Analysis under A Collaborative-Robust Framework [89.28334359066258]
We introduce a general collaborative-robust weight learning framework that combines weight learning and robust loss in a non-trivial way.
Under the proposed framework, only a subset of well-fitting samples is activated and given greater importance during training, while the remaining samples, whose errors are large, are not simply discarded.
In particular, the negative effects of inactivated samples are alleviated by the robust loss function.
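A loose sketch of the kind of reweighting loop this suggests, with a Huber-style weight rule assumed purely for illustration; the paper's specific weight-learning scheme and robust loss are not reproduced here.

```python
# Iteratively reweighted PCA: well-fitting samples keep full weight, while
# large-error samples are down-weighted rather than discarded.
import numpy as np

def robust_weighted_pca(X, k, n_iter=10, delta=1.0):
    n, _ = X.shape
    w = np.ones(n)
    for _ in range(n_iter):
        Xw = X * w[:, None]                        # emphasize well-fitting samples
        _, _, Vt = np.linalg.svd(Xw, full_matrices=False)
        V = Vt[:k].T                               # current top-k components
        err = np.linalg.norm(X - X @ V @ V.T, axis=1)
        # Huber-style weights: 1 for small errors, delta/err for outliers.
        w = np.minimum(1.0, delta / np.maximum(err, 1e-12))
    return V
```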
arXiv Detail & Related papers (2021-03-22T15:17:37Z)
- Evaluating Large-Vocabulary Object Detectors: The Devil is in the Details [107.2722027807328]
We find that the default implementation of average precision (AP) is neither category-independent, nor does it directly reward properly calibrated detectors.
We show that the default implementation produces a gameable metric, where a simple, nonsensical re-ranking policy can improve AP by a large margin.
We benchmark recent advances in large-vocabulary detection and find that many reported gains do not translate to improvements under our new per-class independent evaluation.
arXiv Detail & Related papers (2021-02-01T18:56:02Z)
- Improved Dimensionality Reduction of various Datasets using Novel Multiplicative Factoring Principal Component Analysis (MPCA) [0.0]
We present an improvement to traditional PCA called Multiplicative Factoring Principal Component Analysis (MPCA).
The advantage of MPCA over traditional PCA is that a penalty is imposed on the occurrence space through a multiplier, rendering the effect of outliers negligible when seeking projections.
arXiv Detail & Related papers (2020-09-25T12:30:15Z)
- Approximation Algorithms for Sparse Principal Component Analysis [57.5357874512594]
Principal component analysis (PCA) is a widely used dimension reduction technique in machine learning and statistics.
Various approaches to obtain sparse principal direction loadings have been proposed, which are termed Sparse Principal Component Analysis (SPCA).
We present thresholding as a provably accurate, polynomial-time approximation algorithm for the SPCA problem.
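The simplest form of such a thresholding scheme, sketched under the assumption that one keeps the s largest-magnitude loadings of the leading eigenvector; the threshold choice and accuracy guarantees are the paper's contribution and are omitted.

```python
# Thresholding sketch for sparse PCA: take the leading eigenvector, zero
# out all but its s largest-magnitude entries, and renormalize.
import numpy as np

def thresholded_pc(A, s):
    """A: (d, d) symmetric covariance matrix; returns an s-sparse unit vector."""
    _, vecs = np.linalg.eigh(A)
    v = vecs[:, -1]                          # leading principal component
    keep = np.argsort(np.abs(v))[-s:]        # indices of the s largest loadings
    v_sparse = np.zeros_like(v)
    v_sparse[keep] = v[keep]
    return v_sparse / np.linalg.norm(v_sparse)

A = np.cov(np.random.default_rng(2).normal(size=(500, 8)), rowvar=False)
print(thresholded_pc(A, s=3))
```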
arXiv Detail & Related papers (2020-06-23T04:25:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.