Structured Conformal Inference for Matrix Completion with Applications to Group Recommender Systems
- URL: http://arxiv.org/abs/2404.17561v2
- Date: Mon, 10 Feb 2025 07:55:42 GMT
- Title: Structured Conformal Inference for Matrix Completion with Applications to Group Recommender Systems
- Authors: Ziyi Liang, Tianmin Xie, Xin Tong, Matteo Sesia
- Abstract summary: We develop a conformal inference method to construct a joint confidence region for a given group of missing entries within a sparsely observed matrix.
Our method is model-agnostic and can be combined with any ``black-box'' matrix completion algorithm to provide reliable uncertainty estimation for group-level recommendations.
- Score: 16.519348575982004
- Abstract: We develop a conformal inference method to construct a joint confidence region for a given group of missing entries within a sparsely observed matrix, focusing primarily on entries from the same column. Our method is model-agnostic and can be combined with any ``black-box'' matrix completion algorithm to provide reliable uncertainty estimation for group-level recommendations. For example, in the context of movie recommendations, it is useful to quantify the uncertainty in the ratings assigned by all members of a group to the same movie, enabling more informed decision-making when individual preferences may conflict. Unlike existing conformal techniques, which estimate uncertainty for one individual at a time, our method provides stronger group-level guarantees by assembling a structured calibration dataset that mimics the dependencies expected in the test group. To achieve this, we introduce a generalized weighted conformalization framework that addresses the lack of exchangeability arising from structured calibration, introducing several innovations to overcome associated computational challenges. We demonstrate the practicality and effectiveness of our approach through extensive numerical experiments and an analysis of the MovieLens 100K dataset.
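The sketch below illustrates the general recipe the abstract builds on: a ``black-box'' completion model, a calibration set of held-out observed entries, and a group-level (max-residual) score that yields a simultaneous region for a group of entries in the same column. It is a minimal illustration under simplifying assumptions (a toy rank-3 SVD completer, exchangeable calibration groups); it does not implement the paper's structured weighted conformalization.

```python
# Minimal sketch (not the paper's weighted method): split-conformal prediction
# on top of a black-box matrix completion model, with a max-residual score
# over a group of entries to obtain a simultaneous confidence region.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic setup: low-rank matrix with entries missing at random.
n_rows, n_cols, rank = 200, 50, 3
M = rng.normal(size=(n_rows, rank)) @ rng.normal(size=(rank, n_cols))
observed = rng.random((n_rows, n_cols)) < 0.5  # observation mask

def black_box_complete(M, mask):
    """Placeholder 'black-box' completer: rank-3 SVD of the zero-filled matrix."""
    U, s, Vt = np.linalg.svd(np.where(mask, M, 0.0), full_matrices=False)
    return (U[:, :3] * s[:3]) @ Vt[:3, :]

M_hat = black_box_complete(M, observed)

# Hold out half of the observed entries for calibration.
obs_idx = np.argwhere(observed)
rng.shuffle(obs_idx)
calib_idx = obs_idx[: len(obs_idx) // 2]

# Group-level score: max absolute residual over groups of size g from the same
# column, mimicking "a group of users rating the same movie".
g, alpha = 5, 0.1
scores = []
for j in np.unique(calib_idx[:, 1]):
    rows = calib_idx[calib_idx[:, 1] == j, 0]
    rng.shuffle(rows)
    for k in range(0, len(rows) - g + 1, g):
        grp = rows[k:k + g]
        scores.append(np.max(np.abs(M[grp, j] - M_hat[grp, j])))
scores = np.sort(np.array(scores))

# Finite-sample conformal quantile of the group scores.
k_idx = min(len(scores) - 1, int(np.ceil((1 - alpha) * (len(scores) + 1))) - 1)
q = scores[k_idx]

# Joint region for a test group of g missing entries in one column.
test_col = 0
test_rows = np.where(~observed[:, test_col])[0][:g]
lower = M_hat[test_rows, test_col] - q
upper = M_hat[test_rows, test_col] + q
print("joint interval half-width:", q)
```

Because the score is the maximum residual within a group, the single quantile q gives a band covering all g entries simultaneously; the paper's contribution lies in replacing the exchangeability assumption used here with a structured, weighted calibration scheme.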
Related papers
- Optimal Baseline Corrections for Off-Policy Contextual Bandits [61.740094604552475]
We aim to learn decision policies that optimize an unbiased offline estimate of an online reward metric.
We propose a single framework built on their equivalence in learning scenarios.
Our framework enables us to characterize the variance-optimal unbiased estimator and provide a closed-form solution for it.
arXiv Detail & Related papers (2024-05-09T12:52:22Z) - A structured regression approach for evaluating model performance across intersectional subgroups [53.91682617836498]
Disaggregated evaluation is a central task in AI fairness assessment, where the goal is to measure an AI system's performance across different subgroups.
We introduce a structured regression approach to disaggregated evaluation that we demonstrate can yield reliable system performance estimates even for very small subgroups.
arXiv Detail & Related papers (2024-01-26T14:21:45Z) - Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
arXiv Detail & Related papers (2023-11-08T00:10:21Z) - Robust Consensus Clustering and its Applications for Advertising Forecasting [18.242055675730253]
We propose a novel algorithm -- robust consensus clustering that can find common ground truth among experts' opinions.
We apply the proposed method to the real-world advertising campaign segmentation and forecasting tasks.
arXiv Detail & Related papers (2022-12-27T21:49:04Z) - Optimal Clustering with Bandit Feedback [57.672609011609886]
This paper considers the problem of online clustering with bandit feedback.
It includes a novel stopping rule for sequential testing that circumvents the need to solve any NP-hard weighted clustering problem as its subroutines.
We show through extensive simulations on synthetic and real-world datasets that the proposed algorithm, BOC, matches the lower bound asymptotically and significantly outperforms a non-adaptive baseline algorithm.
arXiv Detail & Related papers (2022-02-09T06:05:05Z) - Personalized Federated Learning via Convex Clustering [72.15857783681658]
We propose a family of algorithms for personalized federated learning with locally convex user costs.
The proposed framework is based on a generalization of convex clustering in which the differences between different users' models are penalized.
arXiv Detail & Related papers (2022-02-01T19:25:31Z) - Mixed-Integer Optimization with Constraint Learning [4.462264781248437]
We establish a broad methodological foundation for mixed-integer optimization with learned constraints.
We exploit the mixed-integer optimization-representability of many machine learning methods.
We demonstrate the method in both World Food Programme planning and chemotherapy optimization.
arXiv Detail & Related papers (2021-11-04T20:19:55Z) - Exclusive Group Lasso for Structured Variable Selection [10.86544864007391]
A structured variable selection problem is considered.
A composite norm can be properly designed to promote such exclusive group sparsity patterns.
An active set algorithm is proposed that builds the solution by including structure atoms into the estimated support.
arXiv Detail & Related papers (2021-08-23T16:55:13Z) - Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z) - Robust Grouped Variable Selection Using Distributionally Robust Optimization [11.383869751239166]
We propose a Distributionally Robust Optimization (DRO) formulation with a Wasserstein-based uncertainty set for selecting grouped variables under perturbations.
We prove probabilistic bounds on the out-of-sample loss and the estimation bias, and establish the grouping effect of our estimator.
We show that our formulation produces an interpretable and parsimonious model that encourages sparsity at a group level.
arXiv Detail & Related papers (2020-06-10T22:32:52Z)