Honey Trap or Romantic Utopia: A Case Study of Final Fantasy XIV Players PII Disclosure in Intimate Partner-Seeking Posts
- URL: http://arxiv.org/abs/2503.09832v1
- Date: Wed, 12 Mar 2025 20:53:06 GMT
- Title: Honey Trap or Romantic Utopia: A Case Study of Final Fantasy XIV Players PII Disclosure in Intimate Partner-Seeking Posts
- Authors: Yihao Zhou, Tanusree Sharma
- Abstract summary: We conducted a case study on Final Fantasy XIV (FFXIV) players' intimate partner-seeking posts on social media. Our findings reveal that players disclose sensitive personal information and share vulnerabilities to establish trust. We propose design implications for reducing privacy and safety risks and fostering healthier social interactions in virtual worlds.
- Score: 2.7624021966289605
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Massively multiplayer online games (MMOGs) can foster social interaction and relationship formation, but they pose specific privacy and safety challenges, especially when mediating intimate interpersonal connections. To explore the potential risks, we conducted a case study of Final Fantasy XIV (FFXIV) players' intimate partner-seeking posts on social media. We analyzed 1,288 posts from a public Weibo account using Latent Dirichlet Allocation (LDA) topic modeling and thematic analysis. Our findings reveal that players disclose sensitive personal information and share vulnerabilities to establish trust, but face difficulties in managing identity and privacy across multiple platforms. We also found that players' expectations regarding intimate partners are diverse, and mismatched expectations may lead to issues such as privacy leakage or emotional exploitation. Based on our findings, we propose design implications for reducing privacy and safety risks and fostering healthier social interactions in virtual worlds.
Related papers
- Enhancing Privacy in the Early Detection of Sexual Predators Through Federated Learning and Differential Privacy [10.420270891113566]
The COVID-19 pandemic has led to a significant surge in cases of online grooming. Previous efforts to detect grooming in industry and academia have involved accessing and monitoring private conversations. We implement a privacy-preserving pipeline for the early detection of sexual predators.
arXiv Detail & Related papers (2025-01-21T23:01:21Z) - Multi-P$^2$A: A Multi-perspective Benchmark on Privacy Assessment for Large Vision-Language Models [65.2761254581209]
Based on Multi-P$^2$A, we evaluate the privacy preservation capabilities of 21 open-source and 2 closed-source LVLMs.
Our results reveal that current LVLMs generally pose a high risk of facilitating privacy breaches.
arXiv Detail & Related papers (2024-12-27T07:33:39Z) - Inside Out or Not: Privacy Implications of Emotional Disclosure [6.667345087444936]
We investigate the role of emotions in driving individuals' information sharing behaviour, particularly in relation to urban locations and social ties.
We adopt a novel methodology that integrates location and time, emotion, and personal information sharing behaviour.
Our findings reveal that self-reported emotions influence personal information-sharing behaviour with distant social groups, while neutral emotions lead individuals to share less precise information with close social circles.
arXiv Detail & Related papers (2024-09-18T08:42:45Z) - PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action [54.11479432110771]
PrivacyLens is a novel framework designed to extend privacy-sensitive seeds into expressive vignettes and further into agent trajectories.
We instantiate PrivacyLens with a collection of privacy norms grounded in privacy literature and crowdsourced seeds.
State-of-the-art LMs, like GPT-4 and Llama-3-70B, leak sensitive information in 25.68% and 38.69% of cases, even when prompted with privacy-enhancing instructions.
arXiv Detail & Related papers (2024-08-29T17:58:38Z) - Linkage on Security, Privacy and Fairness in Federated Learning: New Balances and New Perspectives [48.48294460952039]
This survey offers comprehensive descriptions of the privacy, security, and fairness issues in federated learning.
We contend that there exists a trade-off between privacy and fairness and between security and sharing.
arXiv Detail & Related papers (2024-06-16T10:31:45Z) - The Illusion of Anonymity: Uncovering the Impact of User Actions on Privacy in Web3 Social Ecosystems [11.501563549824466]
We investigate the nuanced dynamics between user engagement on Web3 social platforms and the consequent privacy concerns.
We scrutinize the widespread phenomenon of fabricated activities, which encompasses the establishment of bogus accounts aimed at mimicking popularity.
We highlight the urgent need for more stringent privacy measures and ethical protocols to navigate the complex web of social exchanges.
arXiv Detail & Related papers (2024-05-22T06:26:15Z) - Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory [82.7042006247124]
We show that even the most capable AI models reveal private information in contexts that humans would not, 39% and 57% of the time, respectively.
Our work underscores the immediate need to explore novel inference-time privacy-preserving approaches, based on reasoning and theory of mind.
arXiv Detail & Related papers (2023-10-27T04:15:30Z) - Privacy Preservation in Artificial Intelligence and Extended Reality (AI-XR) Metaverses: A Survey [3.0151762748441624]
The metaverse envisions a virtual universe where individuals can interact, create, and participate in a wide range of activities.
Privacy in the metaverse is a critical concern as the concept evolves and immersive virtual experiences become more prevalent.
We explore various privacy challenges that future metaverses are expected to face, given their reliance on AI for tracking users.
arXiv Detail & Related papers (2023-09-19T11:56:12Z) - How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS) which allows one to apportion the subject's privacy loss to their input attributes.
arXiv Detail & Related papers (2022-11-18T11:39:03Z) - Privacy threats in intimate relationships [0.720851507101878]
This article provides an overview of intimate threats: a class of privacy threats that can arise within our families, romantic partnerships, close friendships, and caregiving relationships.
We survey a range of intimate relationships and describe their common features.
Based on these features, we explore implications for both technical privacy design and policy, and offer design recommendations for ameliorating intimate privacy risks.
arXiv Detail & Related papers (2020-06-06T16:21:14Z) - InfoScrub: Towards Attribute Privacy by Targeted Obfuscation [77.49428268918703]
We study techniques that allow individuals to limit the private information leaked in visual data.
We tackle this problem in a novel image obfuscation framework.
We find our approach generates obfuscated images faithful to the original input images, and additionally increases uncertainty by 6.2$\times$ (or up to 0.85 bits) over the non-obfuscated counterparts.
arXiv Detail & Related papers (2020-05-20T19:48:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.