Privacy Risks of LLM-Empowered Recommender Systems: An Inversion Attack Perspective
- URL: http://arxiv.org/abs/2508.03703v2
- Date: Fri, 12 Sep 2025 02:59:56 GMT
- Title: Privacy Risks of LLM-Empowered Recommender Systems: An Inversion Attack Perspective
- Authors: Yubo Wang, Min Tang, Nuo Shen, Shujie Cui, Weiqing Wang
- Abstract summary: The large language model (LLM)-powered recommendation paradigm has been proposed to address the limitations of traditional recommender systems. This study uncovers that LLM-empowered recommender systems are vulnerable to reconstruction attacks that can expose both system and user privacy.
- Score: 8.243745783644359
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The large language model (LLM)-powered recommendation paradigm has been proposed to address the limitations of traditional recommender systems, which often struggle with cold-start users or items with new IDs. Despite its effectiveness, this study uncovers that LLM-empowered recommender systems are vulnerable to reconstruction attacks that can expose both system and user privacy. To examine this threat, we present the first systematic study of inversion attacks targeting LLM-empowered recommender systems, in which adversaries attempt to reconstruct original prompts containing personal preferences, interaction histories, and demographic attributes by exploiting the output logits of recommendation models. We reproduce the vec2text framework and optimize it with our proposed method, Similarity Guided Refinement, enabling more accurate reconstruction of textual prompts from model-generated logits. Extensive experiments across two domains (movies and books) and two representative LLM-based recommendation models demonstrate that our method achieves high-fidelity reconstructions. Specifically, we recover nearly 65 percent of user-interacted items and correctly infer age and gender in 87 percent of cases. The experiments also reveal that privacy leakage is largely insensitive to the victim model's performance but highly dependent on domain consistency and prompt complexity. These findings expose critical privacy vulnerabilities in LLM-empowered recommender systems.
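The attack pipeline described in the abstract can be illustrated with a short, hypothetical loop. The encoder choice, the `propose` corrector, the `target_vec` (standing in for an embedding derived from the victim's output logits), and the accept-if-closer rule below are all assumptions for illustration, not the paper's actual Similarity Guided Refinement implementation:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Assumed attacker-side encoder; the paper's embedding space may differ.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def similarity_guided_refinement(target_vec, propose, steps=10):
    """vec2text-style inversion loop: keep a candidate prompt only when
    its embedding moves closer to the target vector (the similarity-
    guided step). `propose(text) -> text` is an assumed corrector model."""
    best_text, best_sim = "", -1.0
    for _ in range(steps):
        candidate = propose(best_text)                 # refine current hypothesis
        sim = cosine(encoder.encode(candidate), target_vec)
        if sim > best_sim:                             # accept only improvements
            best_text, best_sim = candidate, sim
    return best_text, best_sim
```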
Related papers
- LLM4MEA: Data-free Model Extraction Attacks on Sequential Recommenders via Large Language Models [50.794651919028965]
Recent studies have demonstrated the vulnerability of sequential recommender systems to Model Extraction Attacks (MEAs). Black-box attacks in prior MEAs are ineffective at exposing recommender system vulnerabilities due to random sampling in data selection. We propose LLM4MEA, a novel model extraction method that leverages Large Language Models (LLMs) as human-like rankers to generate data.
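A minimal sketch of that data-generation idea, under assumed interfaces (`victim_recommend` and `llm_pick_next` are hypothetical): the LLM chooses the next "human-like" interaction from the victim's recommendations, so harvested sequences stay coherent instead of randomly sampled, and the (history, recommendation) pairs later supervise a surrogate model:

```python
def harvest_sequences(victim_recommend, llm_pick_next, seed_history, rounds=20):
    """Collect (input history, top-k output) pairs from a black-box
    recommender, letting an LLM act as a human-like ranker that picks
    which recommended item the synthetic user consumes next."""
    history, dataset = list(seed_history), []
    for _ in range(rounds):
        recs = victim_recommend(history)        # black-box top-k query
        dataset.append((list(history), recs))   # training pair for the surrogate
        history.append(llm_pick_next(history, recs))  # LLM's human-like choice
    return dataset
```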
arXiv Detail & Related papers (2025-07-22T19:20:23Z)
- Retrieval-Augmented Purifier for Robust LLM-Empowered Recommendation [15.098844020816552]
Large Language Model (LLM)-empowered recommender systems have revolutionized personalized recommendation frameworks. Existing LLM-empowered RecSys have been demonstrated to be highly vulnerable to minor perturbations. We propose a novel framework (RETURN) that retrieves external collaborative signals to purify poisoned user profiles.
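As a toy illustration of retrieval-augmented purification (the blending rule and vector representation are assumptions, not RETURN's actual mechanism), a poisoned profile can be pulled toward the consensus of retrieved collaborative neighbors:

```python
import numpy as np

def purify_profile(profile_vec, neighbor_vecs, alpha=0.5):
    """Blend a possibly poisoned user-profile vector toward the mean of
    profiles retrieved from an external collaborative database, damping
    injected perturbations. alpha controls how much trust is shifted
    from the local profile to the retrieved signal."""
    collaborative_signal = np.mean(neighbor_vecs, axis=0)
    return (1.0 - alpha) * profile_vec + alpha * collaborative_signal
```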
arXiv Detail & Related papers (2025-04-03T10:22:30Z)
- Mitigating Propensity Bias of Large Language Models for Recommender Systems [20.823461673845756]
We introduce a novel framework named Counterfactual LLM Recommendation (CLLMR). We propose a spectrum-based side information encoder that implicitly embeds structural information from historical interactions into the side information representation. Our CLLMR approach explores the causal relationships inherent in LLM-based recommender systems.
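One plausible reading of a spectrum-based encoder, shown purely as a sketch (the Laplacian construction and concatenation fusion are assumptions): take low-frequency eigenvectors of the interaction graph as structural features and fuse them with the side information:

```python
import numpy as np

def spectral_side_encoder(adj, side_info, k=16):
    """Encode graph structure spectrally: the k eigenvectors of the
    normalized Laplacian with the smallest eigenvalues capture coarse
    interaction structure; concatenating them with side-information
    features yields a structure-aware representation."""
    deg = np.clip(adj.sum(axis=1), 1, None)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    laplacian = np.eye(adj.shape[0]) - d_inv_sqrt @ adj @ d_inv_sqrt
    _, eigvecs = np.linalg.eigh(laplacian)      # eigenvalues in ascending order
    structural = eigvecs[:, :k]                 # low-frequency components
    return np.concatenate([structural, side_info], axis=1)
```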
arXiv Detail & Related papers (2024-09-30T07:57:13Z)
- LLMEmb: Large Language Model Can Be a Good Embedding Generator for Sequential Recommendation [57.49045064294086]
Large Language Models (LLMs) have the ability to capture semantic relationships between items, independent of their popularity. We introduce LLMEmb, a novel method that leverages an LLM to generate item embeddings that enhance the performance of Sequential Recommender Systems (SRS).
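The general recipe is easy to sketch (the model choice and usage below are assumptions, not LLMEmb's training procedure): derive each item's embedding from its text, so that cold or unpopular items still get meaningful vectors:

```python
from sentence_transformers import SentenceTransformer

# Assumed text encoder; LLMEmb itself further adapts an LLM for recommendation.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

def build_item_embeddings(item_texts):
    """Map item_id -> text-derived vector, usable as the item-embedding
    table (or its initialization) of a sequential recommender, so
    semantic similarity no longer depends on interaction popularity."""
    return {item_id: encoder.encode(text) for item_id, text in item_texts.items()}
```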
arXiv Detail & Related papers (2024-09-30T03:59:06Z)
- Generative Explore-Exploit: Training-free Optimization of Generative Recommender Systems using LLM Optimizers [29.739736497044664]
We present a training-free approach for optimizing generative recommenders.
We propose a generative explore-exploit method that can not only exploit generated items with high engagement, but also actively explore and discover hidden population preferences.
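A bandit-flavored sketch of the idea (the epsilon-greedy rule and the `generate`/`engagement` interfaces are assumptions): mostly serve known high-engagement items, occasionally ask the generator for a fresh candidate to surface hidden preferences:

```python
import random

def generative_explore_exploit(generate, engagement, pool, epsilon=0.2):
    """Training-free recommendation loop: exploit the generated item
    with the best observed engagement, but with probability epsilon
    explore by asking the LLM generator for a new candidate."""
    if pool and random.random() > epsilon:
        return max(pool, key=engagement)       # exploit the current best item
    candidate = generate(pool)                 # explore: novel generated item
    pool.append(candidate)
    return candidate
```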
arXiv Detail & Related papers (2024-06-07T20:41:59Z)
- Stealthy Attack on Large Language Model based Recommendation [24.51398285321322]
Large language models (LLMs) have been instrumental in propelling the progress of recommender systems (RS).
In this work, we reveal that the introduction of LLMs into recommendation models presents new security vulnerabilities due to their emphasis on the textual content of items.
We demonstrate that attackers can significantly boost an item's exposure by merely altering its textual content during the testing phase.
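The threat can be probed with a simple exposure measurement; the `recommend` interface below is hypothetical (a black box that accepts an overridden item description), and comparing exposure before and after rewriting the text is the attacker's success criterion:

```python
def exposure(recommend, users, item_id, item_text, k=10):
    """Count how often `item_id` reaches the top-k across users when its
    description is replaced by `item_text`. An attacker compares
    exposure with the original text against exposure with the rewritten
    text to confirm a test-time textual edit alone boosts the ranking."""
    return sum(item_id in recommend(user, item_id, item_text)[:k]
               for user in users)
```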
arXiv Detail & Related papers (2024-02-18T16:51:02Z)
- Mirror Gradient: Towards Robust Multimodal Recommender Systems via Exploring Flat Local Minima [54.06000767038741]
We analyze multimodal recommender systems from the novel perspective of flat local minima.
We propose a concise yet effective gradient strategy called Mirror Gradient (MG).
We find that the proposed MG can complement existing robust training methods and be easily extended to diverse advanced recommendation models.
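The paper's exact Mirror Gradient update is not reproduced here; as a reference point for flat-minima-seeking training, the sketch below implements a sharpness-aware minimization (SAM) style step, a different but related strategy:

```python
import torch

def sam_style_step(model, loss_fn, batch, optimizer, rho=0.05):
    """One SAM-style update: perturb weights toward the locally worst
    direction, take the gradient there, then descend from the original
    weights. Shown as a related flat-minima strategy, NOT the Mirror
    Gradient rule from the paper."""
    optimizer.zero_grad()
    loss_fn(model, batch).backward()
    with torch.no_grad():
        grads = [p.grad.detach().clone() for p in model.parameters()]
        norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12
        for p, g in zip(model.parameters(), grads):
            p.add_(g * (rho / norm))           # ascend to a nearby sharp point
    optimizer.zero_grad()
    loss_fn(model, batch).backward()           # gradient at perturbed weights
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p.sub_(g * (rho / norm))           # restore the original weights
    optimizer.step()                           # descend with the sharp gradient
```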
arXiv Detail & Related papers (2024-02-17T12:27:30Z)
- LoRec: Large Language Model for Robust Sequential Recommendation against Poisoning Attacks [60.719158008403376]
Our research focuses on the capabilities of Large Language Models (LLMs) in detecting unknown fraudulent activities within recommender systems. We propose LoRec, an advanced framework that employs LLM-enhanced calibration to strengthen the robustness of sequential recommender systems. Our comprehensive experiments validate that LoRec, as a general framework, significantly strengthens the robustness of sequential recommender systems.
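A minimal sketch of using an LLM judgment to harden training (the scoring interface and the linear down-weighting are assumptions, not LoRec's calibration mechanism):

```python
def reweight_users(sequences, llm_fraud_score, tau=0.5):
    """Down-weight user sequences that an LLM rates as likely fabricated
    (score in [0, 1]; 0 = genuine). Sequences scoring at or above tau
    are dropped from training; the rest are linearly discounted."""
    weighted = []
    for seq in sequences:
        score = llm_fraud_score(seq)   # e.g. prompt an LLM to rate the sequence
        weight = max(0.0, 1.0 - score / tau)
        if weight > 0.0:
            weighted.append((seq, weight))
    return weighted
```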
arXiv Detail & Related papers (2024-01-31T10:35:53Z)
- Empowering Few-Shot Recommender Systems with Large Language Models -- Enhanced Representations [0.0]
Large language models (LLMs) offer novel insights into tackling the few-shot scenarios encountered by explicit feedback-based recommender systems.
Our study can inspire researchers to delve deeper into the multifaceted dimensions of LLMs' involvement in recommender systems.
arXiv Detail & Related papers (2023-12-21T03:50:09Z)
- ReEval: Automatic Hallucination Evaluation for Retrieval-Augmented Large Language Models via Transferable Adversarial Attacks [91.55895047448249]
This paper presents ReEval, an LLM-based framework using prompt chaining to perturb the original evidence for generating new test cases.
We implement ReEval using ChatGPT and evaluate the resulting variants of two popular open-domain QA datasets.
Our generated data is human-readable and useful for triggering hallucinations in large language models.
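A two-step prompt chain in this spirit might look as follows; the prompts and the `llm(prompt) -> str` interface are assumptions, not ReEval's actual chain:

```python
def perturb_evidence(llm, question, evidence, answer):
    """Chain two prompts: first minimally rewrite the evidence so it no
    longer supports the original answer, then verify the rewrite is
    fluent and self-consistent before using it as a test case."""
    rewrite = llm(
        f"Rewrite the passage so it no longer supports the answer "
        f"'{answer}' to the question '{question}', changing as little "
        f"as possible:\n{evidence}"
    )
    verdict = llm(
        "Is the following passage fluent and internally consistent? "
        f"Answer yes or no:\n{rewrite}"
    )
    return rewrite if verdict.strip().lower().startswith("yes") else evidence
```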
arXiv Detail & Related papers (2023-10-19T06:37:32Z)
- A Survey on Large Language Models for Recommendation [77.91673633328148]
Large Language Models (LLMs) have emerged as powerful tools in the field of Natural Language Processing (NLP).
This survey presents a taxonomy that categorizes these models into two major paradigms: Discriminative LLMs for Recommendation (DLLM4Rec) and Generative LLMs for Recommendation (GLLM4Rec).
arXiv Detail & Related papers (2023-05-31T13:51:26Z)
- Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation [52.62492168507781]
We propose a novel benchmark called Fairness of Recommendation via LLM (FaiRLLM).
This benchmark comprises carefully crafted metrics and a dataset that accounts for eight sensitive attributes.
By utilizing our FaiRLLM benchmark, we conducted an evaluation of ChatGPT and discovered that it still exhibits unfairness to some sensitive attributes when generating recommendations.
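The benchmark's core measurement can be approximated by comparing recommendation lists generated with and without a sensitive attribute in the prompt; the Jaccard similarity and prompt template below are assumptions, a sketch rather than FaiRLLM's exact metrics:

```python
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

def sensitivity_gap(recommend, base_prompt, attribute_values):
    """Probe fairness: get recommendations for a neutral prompt, then
    for prompts disclosing each value of a sensitive attribute; a low
    minimum similarity means the attribute alone shifts what the model
    recommends. `recommend(prompt) -> list[str]` is an assumed interface."""
    neutral = recommend(base_prompt)
    similarities = [jaccard(neutral, recommend(f"{base_prompt} I am {value}."))
                    for value in attribute_values]
    return min(similarities)
```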
arXiv Detail & Related papers (2023-05-12T16:54:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.