Causal Learning for Trustworthy Recommender Systems: A Survey
- URL: http://arxiv.org/abs/2402.08241v2
- Date: Tue, 18 Feb 2025 09:36:34 GMT
- Title: Causal Learning for Trustworthy Recommender Systems: A Survey
- Authors: Jin Li, Shoujin Wang, Qi Zhang, Longbing Cao, Fang Chen, Xiuzhen Zhang, Dietmar Jannach, Charu C. Aggarwal
- Abstract summary: This paper presents an overview of Trustworthy RS (TRS) from the perspective of causal learning.
We identify potential trustworthiness challenges at each stage and link them to viable causal solutions.
- Score: 46.38233864851038
- Abstract: Recommender Systems (RS) have significantly advanced online content filtering and personalized decision-making. However, emerging vulnerabilities in RS have catalyzed a paradigm shift towards Trustworthy RS (TRS). Despite substantial progress on TRS, most efforts focus on data correlations while overlooking the fundamental causal nature of recommendations. This drawback hinders TRS from identifying the root cause of trustworthiness issues, leading to limited fairness, robustness, and explainability. To bridge this gap, causal learning emerges as a class of promising methods to augment TRS. These methods, grounded in reliable causality, excel in mitigating various biases and noise while offering insightful explanations for TRS. However, there is a lack of timely and dedicated surveys in this vibrant area. This paper presents an overview of TRS from the perspective of causal learning. We begin by presenting the advantages and common procedures of Causality-oriented TRS (CTRS). Then, we identify potential trustworthiness challenges at each stage and link them to viable causal solutions, followed by a classification of CTRS methods. Finally, we discuss several future directions for advancing this field.
Related papers
- Towards Trustworthy Retrieval Augmented Generation for Large Language Models: A Survey [92.36487127683053]
Retrieval-Augmented Generation (RAG) is an advanced technique designed to address the challenges of Artificial Intelligence-Generated Content (AIGC).
RAG provides reliable and up-to-date external knowledge, reduces hallucinations, and ensures relevant context across a wide range of tasks.
Despite RAG's success and potential, recent studies have shown that the RAG paradigm also introduces new risks, including privacy concerns, adversarial attacks, and accountability issues.
arXiv Detail & Related papers (2025-02-08T06:50:47Z) - RbFT: Robust Fine-tuning for Retrieval-Augmented Generation against Retrieval Defects [12.5122702720856]
We propose Robust Fine-Tuning (RbFT) to enhance the resilience of large language models against retrieval defects.
Experimental results demonstrate that RbFT significantly improves the robustness of RAG systems across diverse retrieval conditions.
arXiv Detail & Related papers (2025-01-30T14:15:09Z) - ReARTeR: Retrieval-Augmented Reasoning with Trustworthy Process Rewarding [25.329712997545794]
We propose Retrieval-Augmented Reasoning through Trustworthy Process Rewarding (ReARTeR)
ReARTeR enhances RAG systems' reasoning capabilities through post-training and test-time scaling.
Experimental results on multi-step reasoning benchmarks demonstrate significant improvements.
arXiv Detail & Related papers (2025-01-14T05:56:26Z) - Controlling Risk of Retrieval-augmented Generation: A Counterfactual Prompting Framework [77.45983464131977]
We focus on how likely it is that a RAG model's prediction is incorrect, resulting in uncontrollable risks in real-world applications.
Our research identifies two critical latent factors affecting RAG's confidence in its predictions.
We develop a counterfactual prompting framework that induces the models to alter these factors and analyzes the effect on their answers.
arXiv Detail & Related papers (2024-09-24T14:52:14Z) - Towards Robust Uncertainty-Aware Incomplete Multi-View Classification [11.617211995206018]
We propose the Alternating Progressive Learning Network (APLN) to enhance EDL-based methods in incomplete MVC scenarios.
APLN mitigates bias from corrupted observed data by first applying coarse imputation, followed by mapping the data to a latent space.
We also introduce a conflict-aware Dempster-Shafer combination rule (DSCR) to better handle conflicting evidence.
arXiv Detail & Related papers (2024-09-10T07:18:57Z) - CoT Rerailer: Enhancing the Reliability of Large Language Models in Complex Reasoning Tasks through Error Detection and Correction [9.44858963874474]
Chain-of-Thought (CoT) prompting enhances the complex reasoning abilities of Large Language Models (LLMs).
We propose the CoT Rerailer to address these challenges, employing self-consistency and multi-agent debate systems.
We demonstrate the effectiveness of our approach across diverse question-answering datasets in various knowledge domains.
arXiv Detail & Related papers (2024-08-25T21:20:17Z) - A Thorough Performance Benchmarking on Lightweight Embedding-based Recommender Systems [67.52782366565658]
State-of-the-art recommender systems (RSs) depend on categorical features, which are encoded by embedding vectors, resulting in excessively large embedding tables.
Despite the prosperity of lightweight embedding-based RSs, a wide diversity is seen in evaluation protocols.
This study investigates the performance, efficiency, and cross-task transferability of various lightweight embedding-based RSs (LERSs) via a thorough benchmarking process.
arXiv Detail & Related papers (2024-06-25T07:45:00Z) - Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive analysis of the characterization of adversarial inputs, through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how adversarial inputs can affect the safety of a given DRL system with respect to such perturbations.
arXiv Detail & Related papers (2024-02-07T21:58:40Z) - Trustworthy Recommender Systems [27.27498627500375]
Recommender systems (RSs) aim to help users effectively retrieve items of interest from a large catalogue.
Recent years have witnessed an increasing number of threats to RSs, including attacks, system- and user-generated noise, and system bias.
For end users, a trustworthy RS (TRS) should not only be accurate, but also transparent, unbiased, and fair, as well as robust to noise and attacks.
arXiv Detail & Related papers (2022-08-10T08:28:46Z) - Open Problem: Tight Online Confidence Intervals for RKHS Elements [57.363123214464764]
We formalize the question of online confidence intervals in the RKHS setting and overview the existing results.
It is unclear whether the suboptimal regret bound is a fundamental shortcoming of these algorithms or an artifact of the proof.
arXiv Detail & Related papers (2021-10-28T22:36:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.