Contextual Preference Collaborative Measure Framework Based on Belief System
- URL: http://arxiv.org/abs/2503.24328v1
- Date: Mon, 31 Mar 2025 17:17:45 GMT
- Title: Contextual Preference Collaborative Measure Framework Based on Belief System
- Authors: Hang Yu, Wei Wei, Zheng Tan, Jing-lei Liu
- Abstract summary: This article proposes a preference collaborative measure framework based on an updated belief system. It is also capable of improving the accuracy and efficiency of preference measure algorithms.
- Score: 15.67367955162946
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: To reduce human intervention in the preference measure process, this article proposes a preference collaborative measure framework based on an updated belief system, which is also capable of improving the accuracy and efficiency of preference measure algorithms. Firstly, the distance of rules and the average internal distance of rulesets are proposed for specifying the relationship between rules. For discovering the most representative preferences that are common to all users, namely common preferences, an algorithm based on the average internal distance of a ruleset, the PRA algorithm, is proposed, which aims to finish the discovery process with a minimum information loss rate. Furthermore, the concept of common belief is proposed to update the belief system, with the common preferences serving as the evidence of the updated belief system. Then, under the belief system, the proposed belief degree and deviation degree are used to determine whether a rule confirms the belief system, to classify preference rules into two kinds (generalized or personalized), and eventually to filter out the Top-K interesting rules based on belief degree and deviation degree. On this basis, a scalable interestingness calculation framework that can apply various formulas is proposed for accurately calculating interestingness under different conditions. Finally, the IMCos and IMCov algorithms are proposed as exemplars to verify the accuracy and efficiency of the framework, using weighted cosine similarity and correlation coefficients, respectively, as belief degree. In experiments, the proposed algorithms are compared to two state-of-the-art algorithms, and the results show that IMCos and IMCov outperform the other two in most aspects.
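The abstract's two key quantities, a belief degree (IMCos uses weighted cosine similarity) and a deviation degree used to split rules into generalized and personalized, can be sketched as follows. The rule feature vectors, weights, threshold, and the choice of deviation as `1 - belief` are illustrative assumptions; the abstract does not specify the paper's actual encodings.

```python
import math

def weighted_cosine(u, v, w):
    """Weighted cosine similarity between two rule feature vectors.

    Stand-in for the paper's belief degree; how a preference rule is
    encoded as a vector is an assumption here.
    """
    num = sum(wi * ui * vi for wi, ui, vi in zip(w, u, v))
    nu = math.sqrt(sum(wi * ui * ui for wi, ui in zip(w, u)))
    nv = math.sqrt(sum(wi * vi * vi for wi, vi in zip(w, v)))
    return num / (nu * nv) if nu and nv else 0.0

def classify_rule(rule_vec, belief_vec, weights, threshold=0.5):
    """Label a rule 'generalized' if it confirms the belief system
    (belief degree above threshold), else 'personalized'."""
    belief = weighted_cosine(rule_vec, belief_vec, weights)
    deviation = 1.0 - belief  # one simple definition of deviation degree
    kind = "generalized" if belief >= threshold else "personalized"
    return kind, belief, deviation
```

Ranking all rules by belief degree (or deviation degree) and keeping the Top-K would then give the interesting-rule filtering step the abstract describes.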
Related papers
- Preference-centric Bandits: Optimality of Mixtures and Regret-efficient Algorithms [34.876652087068734]
This paper introduces a framework for shifting from expectation-based evaluation to an alternative reward formulation, termed a preference metric (PM)
The PMs can place the desired emphasis on different reward realizations and can encode a richer modeling of preferences that incorporates risk aversion, robustness, or other desired attitudes toward uncertainty.
The paper formalizes the PM-centric framework and presents two algorithm classes that learn and track mixtures in a regret-efficient fashion.
arXiv Detail & Related papers (2025-04-29T15:46:59Z) - Quantifying User Coherence: A Unified Framework for Cross-Domain Recommendation Analysis [69.37718774071793]
This paper introduces novel information-theoretic measures for understanding recommender systems.
We evaluate 7 recommendation algorithms across 9 datasets, revealing the relationships between our measures and standard performance metrics.
arXiv Detail & Related papers (2024-10-03T13:02:07Z) - Revisiting Reciprocal Recommender Systems: Metrics, Formulation, and Method [60.364834418531366]
We propose five new evaluation metrics that comprehensively and accurately assess the performance of RRS.
We formulate the RRS from a causal perspective, modeling recommendations as bilateral interventions.
We introduce a reranking strategy to maximize matching outcomes, as measured by the proposed metrics.
arXiv Detail & Related papers (2024-08-19T07:21:02Z) - On the Maximal Local Disparity of Fairness-Aware Classifiers [35.98015221840018]
We propose a novel fairness metric called Maximal Cumulative ratio Disparity along varying Predictions' neighborhood (MCDP)
To accurately and efficiently calculate the MCDP, we develop a provably exact and an approximate calculation algorithm that greatly reduces the computational complexity with low estimation error.
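As a simplified reading of a threshold-varying disparity metric like MCDP (whose exact definition is not given in the summary above), one can compare the empirical score distributions of two protected groups across all prediction thresholds and take the maximum gap. This is a hedged sketch, not the paper's formula.

```python
def max_local_disparity(scores_a, scores_b):
    """Max over thresholds t of |F_a(t) - F_b(t)|, where F_g is the
    empirical CDF of predicted scores in group g. A simplified
    stand-in for a cumulative disparity metric such as MCDP."""
    thresholds = sorted(set(scores_a) | set(scores_b))

    def cdf(scores, t):
        return sum(s <= t for s in scores) / len(scores)

    return max(abs(cdf(scores_a, t) - cdf(scores_b, t)) for t in thresholds)
```

Scanning every observed score as a threshold makes the computation exact for the empirical distributions, at O(n log n) cost after sorting.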
arXiv Detail & Related papers (2024-06-05T13:35:48Z) - Optimal Baseline Corrections for Off-Policy Contextual Bandits [61.740094604552475]
We aim to learn decision policies that optimize an unbiased offline estimate of an online reward metric.
We propose a single framework built on their equivalence in learning scenarios.
Our framework enables us to characterize the variance-optimal unbiased estimator and provide a closed-form solution for it.
arXiv Detail & Related papers (2024-05-09T12:52:22Z) - Rethinking Algorithmic Fairness for Human-AI Collaboration [29.334511328067777]
Existing approaches to algorithmic fairness aim to ensure equitable outcomes if human decision-makers comply perfectly with algorithms.
We show that it may be infeasible to design algorithmic recommendations that are simultaneously fair in isolation, compliance-robustly fair, and more accurate than the human policy.
arXiv Detail & Related papers (2023-10-05T16:21:42Z) - On Linear Convergence of PI Consensus Algorithm under the Restricted Secant Inequality [5.35599092568615]
This paper considers solving distributed optimization problems in peer-to-peer multi-agent networks.
By using the proportional-integral (PI) control strategy, various algorithms with fixed stepsize have been developed.
arXiv Detail & Related papers (2023-09-30T15:54:52Z) - Normalized mutual information is a biased measure for classification and community detection [0.4779196219827508]
We argue that results returned by the normalized mutual information are biased for two reasons.
We show that one's conclusions about which algorithm is best are significantly affected by the biases in the traditional mutual information.
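For reference, the quantity whose bias is analyzed above, normalized mutual information between two labelings, can be computed from empirical frequencies; this sketch uses the common geometric-mean normalization (other normalizations exist).

```python
import math
from collections import Counter

def nmi(labels_a, labels_b):
    """Normalized mutual information I(A;B) / sqrt(H(A) * H(B))
    between two labelings of the same items."""
    n = len(labels_a)
    pa = Counter(labels_a)               # marginal counts of A
    pb = Counter(labels_b)               # marginal counts of B
    pab = Counter(zip(labels_a, labels_b))  # joint counts
    mi = sum((c / n) * math.log((c / n) / ((pa[a] / n) * (pb[b] / n)))
             for (a, b), c in pab.items())
    ha = -sum((c / n) * math.log(c / n) for c in pa.values())
    hb = -sum((c / n) * math.log(c / n) for c in pb.values())
    return mi / math.sqrt(ha * hb) if ha and hb else 0.0
```

Identical partitions (up to label permutation) score 1, independent ones score 0; the bias discussed in the paper concerns how this score behaves under chance agreement and varying numbers of clusters.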
arXiv Detail & Related papers (2023-07-03T18:12:32Z) - Large-Scale Sequential Learning for Recommender and Engineering Systems [91.3755431537592]
In this thesis, we focus on the design of automatic algorithms that provide personalized ranking by adapting to the current conditions.
For the former, we propose a novel algorithm called SAROS that takes into account both kinds of feedback for learning over the sequence of interactions.
The proposed idea of taking into account neighbouring lines shows statistically significant results in comparison with the initial approach for fault detection in power grids.
arXiv Detail & Related papers (2022-05-13T21:09:41Z) - Learning Optimal Fair Classification Trees: Trade-offs Between Interpretability, Fairness, and Accuracy [7.215903549622416]
We propose a mixed integer optimization framework for learning optimal classification trees.
We benchmark our method against state-of-the-art approaches for fair classification on popular datasets.
Our method consistently finds decisions with almost full parity, while other methods rarely do.
arXiv Detail & Related papers (2022-01-24T19:47:10Z) - Average-Reward Off-Policy Policy Evaluation with Function Approximation [66.67075551933438]
We consider off-policy policy evaluation with function approximation in average-reward MDPs.
Bootstrapping is necessary and, along with off-policy learning and function approximation (FA), results in the deadly triad.
We propose two novel algorithms, reproducing the celebrated success of Gradient TD algorithms in the average-reward setting.
arXiv Detail & Related papers (2021-01-08T00:43:04Z) - Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.