User Perceptions and Attitudes Toward Untraceability in Messaging Platforms
- URL: http://arxiv.org/abs/2506.11212v2
- Date: Sun, 14 Sep 2025 14:14:32 GMT
- Title: User Perceptions and Attitudes Toward Untraceability in Messaging Platforms
- Authors: Carla F. Griggio, Boel Nelson, Zefan Sramek, Aslan Askarov,
- Abstract summary: "Untraceability" means preventing third parties from tracing who communicates with whom. This paper explores user perceptions of and attitudes toward untraceability. We identify a diverse set of features that users perceive to be useful for untraceable messaging.
- Score: 3.87707864695882
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Mainstream messaging platforms offer a variety of features designed to enhance user privacy, such as password-protected chats and end-to-end encryption, which primarily protect message contents. Beyond contents, a lot can be inferred about people simply by tracing who sends and receives messages, when, and how often. This paper explores user perceptions of and attitudes toward "untraceability", defined as preventing third parties from tracing who communicates with whom, to inform the design of privacy-enhancing technologies and untraceable communication protocols. Through a vignette-based qualitative study with 189 participants, we identify a diverse set of features that users perceive to be useful for untraceable messaging, ranging from using aliases instead of real names to VPNs. Through a reflexive thematic analysis, we uncover three overarching attitudes that influence the support or rejection of untraceability in messaging platforms and that can serve as a set of new privacy personas: privacy fundamentalists, who advocate for privacy as a universal right; safety fundamentalists, who support surveillance for the sake of accountability; and optimists, who advocate for privacy in principle but also endorse exceptions in idealistic ways, such as encryption backdoors. We highlight a critical gap between the threat models assumed by users and those addressed by untraceable communication protocols. Many participants understood untraceability as a form of anonymity, but interpret it as senders and receivers hiding their identities from each other, rather than from external network observers. We discuss implications for design of strategic communication and user interfaces of untraceable messaging protocols, and propose framing untraceability as a form of "altruistic privacy", i.e., adopting privacy-enhancing technologies to protect others, as a promising strategy to foster broad adoption.
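The abstract's point that "a lot can be inferred about people simply by tracing who sends and receives messages, when, and how often" can be made concrete with a small sketch. This is an illustrative toy, not anything from the paper: a network observer who logs only (sender, receiver, timestamp) metadata, never message contents, can still reconstruct the social graph and interaction frequency. All names and records below are invented.

```python
# Toy illustration: metadata alone reveals who talks to whom, and how often,
# even when end-to-end encryption hides every message body.
from collections import Counter

# Metadata an on-path observer might capture; contents are never needed.
log = [
    ("alice", "bob", "2025-09-14T08:01"),
    ("alice", "bob", "2025-09-14T08:03"),
    ("carol", "bob", "2025-09-14T09:10"),
    ("alice", "bob", "2025-09-15T08:02"),
]

# Reconstruct the communication graph with edge weights = message counts.
edges = Counter((sender, receiver) for sender, receiver, _ in log)
print(edges.most_common(1))  # [(('alice', 'bob'), 3)]
```

Untraceable messaging protocols aim to deny an observer exactly this kind of edge list, which is a different guarantee from hiding senders and receivers from each other.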
Related papers
- Your Privacy Depends on Others: Collusion Vulnerabilities in Individual Differential Privacy [50.66105844449181]
Individual Differential Privacy (iDP) promises users control over their privacy, but this promise can be broken in practice. We reveal a previously overlooked vulnerability in sampling-based iDP mechanisms. We propose an iDP privacy contract that uses divergence-based measures to provide users with a hard upper bound on their excess vulnerability.
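The defining feature of individual DP is that each user holds their own privacy budget. As a generic sketch (not this paper's mechanism or contract, whose details are not given here), one conservative way to honor per-user budgets in a count query is to calibrate noise to the smallest budget, so every user's individual guarantee holds. All names and parameters are invented for illustration.

```python
# Generic per-user-budget sketch in the spirit of iDP; not the paper's method.
import math
import random

def laplace_noise(scale):
    # Inverse-CDF sampling of a Laplace(0, scale) variate.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def idp_count(values, epsilons, sensitivity=1.0):
    """Return (noisy_count, noise_scale) for one binary value per user.

    Noise is calibrated to the *smallest* per-user epsilon, since the
    weakest budget dominates the guarantee for everyone.
    """
    eps = min(epsilons)
    scale = sensitivity / eps
    return sum(values) + laplace_noise(scale), scale
```

For example, users with budgets 0.5, 1.0, and 2.0 force a noise scale of 1/0.5 = 2.0. Sampling-based iDP mechanisms try to do better than this worst-case calibration, which is where the collusion vulnerability the abstract describes comes in.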
arXiv Detail & Related papers (2026-01-19T10:26:12Z) - A Provably Secure Network Protocol for Private Communication with Analysis and Tracing Resistance [24.74468505942983]
This paper proposes a novel decentralized anonymous routing protocol resistant to tracing and traffic analysis. It rigorously proves indistinguishable identity privacy for users even in highly adversarial environments. The proposed protocol offers a provably secure solution for privacy-preserving communication in digital environments.
arXiv Detail & Related papers (2025-08-03T10:50:04Z) - Synopsis: Secure and private trend inference from encrypted semantic embeddings [2.7998963147546148]
We introduce Synopsis, a secure architecture for analyzing messaging trends in consensually donated E2EE messages using message embeddings. Since the goal of this system is investigative journalism, Synopsis must facilitate both exploratory and targeted analyses. Evaluations on a dataset of Hindi-language WhatsApp messages demonstrate the efficiency and accuracy of our approach.
arXiv Detail & Related papers (2025-05-29T17:34:10Z) - Understanding Users' Security and Privacy Concerns and Attitudes Towards Conversational AI Platforms [3.789219860006095]
We conduct a large-scale analysis of over 2.5M user posts from the r/ChatGPT Reddit community to understand users' security and privacy concerns. We find that users are concerned about each stage of the data lifecycle (i.e., collection, usage, and retention). We provide recommendations for users, platforms, enterprises, and policymakers to enhance transparency, improve data controls, and increase user trust and adoption.
arXiv Detail & Related papers (2025-04-09T03:22:48Z) - Protecting Users From Themselves: Safeguarding Contextual Privacy in Interactions with Conversational Agents [33.26308626066122]
We characterize the notion of contextual privacy for user interactions with Conversational Agents (LCAs). It aims to minimize privacy risks by ensuring that users (senders) disclose only information that is both relevant and necessary for achieving their intended goals. We propose a locally deployable framework that operates between users and LCAs, identifying and reformulating out-of-context information in user prompts.
arXiv Detail & Related papers (2025-02-22T09:05:39Z) - Collaborative Inference over Wireless Channels with Feature Differential Privacy [57.68286389879283]
Collaborative inference among multiple wireless edge devices has the potential to significantly enhance Artificial Intelligence (AI) applications.
However, transmitting the extracted features poses a significant privacy risk, as sensitive personal data can be exposed during the process.
We propose a novel privacy-preserving collaborative inference mechanism, wherein each edge device in the network secures the privacy of extracted features before transmitting them to a central server for inference.
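The general pattern behind privatizing features on-device can be sketched in a few lines. This is a hedged, generic construction (clip each feature vector to bound its sensitivity, then add Gaussian noise before transmission), not the paper's specific mechanism; all parameter names and values are invented.

```python
# Generic feature-privatization sketch: bound sensitivity by L2 clipping,
# then perturb with Gaussian noise before sending features to the server.
import math
import random

def privatize_features(features, clip_norm=1.0, noise_std=0.5):
    # Clip the feature vector into an L2 ball of radius clip_norm.
    norm = math.sqrt(sum(x * x for x in features)) or 1.0
    factor = min(1.0, clip_norm / norm)
    clipped = [x * factor for x in features]
    # Add independent Gaussian noise to each coordinate.
    return [x + random.gauss(0.0, noise_std) for x in clipped]
```

Clipping is what makes the noise scale meaningful: without a bound on the vector's norm, no fixed noise level yields a privacy guarantee.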
arXiv Detail & Related papers (2024-10-25T18:11:02Z) - PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action [54.11479432110771]
PrivacyLens is a novel framework designed to extend privacy-sensitive seeds into expressive vignettes and further into agent trajectories. We instantiate PrivacyLens with a collection of privacy norms grounded in privacy literature and crowdsourced seeds. State-of-the-art LMs, like GPT-4 and Llama-3-70B, leak sensitive information in 25.68% and 38.69% of cases, even when prompted with privacy-enhancing instructions.
arXiv Detail & Related papers (2024-08-29T17:58:38Z) - NAP^2: A Benchmark for Naturalness and Privacy-Preserving Text Rewriting by Learning from Human [56.46355425175232]
We suggest sanitizing sensitive text using two common strategies used by humans. We curate the first corpus, coined NAP^2, through both crowdsourcing and the use of large language models. Compared to prior works on anonymization, the human-inspired approaches result in more natural rewrites.
arXiv Detail & Related papers (2024-06-06T05:07:44Z) - Pudding: Private User Discovery in Anonymity Networks [9.474649136535705]
Pudding is a novel private user discovery protocol.
It hides contact relationships between users, prevents impersonation, and conceals which usernames are registered on the network.
Pudding can be deployed on Loopix and Nym without changes to the underlying anonymity network protocol.
arXiv Detail & Related papers (2023-11-17T19:06:08Z) - Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z) - One Protocol to Rule Them All? On Securing Interoperable Messaging [3.2213245974344673]
European lawmakers have ruled that users of different messaging platforms should be able to exchange messages with each other. However, messaging interoperability opens up a Pandora's box of security and privacy challenges.
arXiv Detail & Related papers (2023-03-24T17:40:52Z) - User-Centered Security in Natural Language Processing [0.7106986689736825]
This dissertation proposes a framework of user-centered security in Natural Language Processing (NLP). It focuses on two security domains within NLP of great public interest.
arXiv Detail &amp; Related papers (2023-01-10T22:34:19Z) - Cross-Network Social User Embedding with Hybrid Differential Privacy Guarantees [81.6471440778355]
We propose a Cross-network Social User Embedding framework, namely DP-CroSUE, to learn the comprehensive representations of users in a privacy-preserving way.
In particular, for each heterogeneous social network, we first introduce a hybrid differential privacy notion to capture the variation of privacy expectations for heterogeneous data types.
To further enhance user embeddings, a novel cross-network GCN embedding model is designed to transfer knowledge across networks through those aligned users.
arXiv Detail & Related papers (2022-09-04T06:22:37Z) - OPOM: Customized Invisible Cloak towards Face Privacy Protection [58.07786010689529]
We investigate face privacy protection from a technology standpoint based on a new type of customized cloak.
We propose a new method, named one person one mask (OPOM), to generate person-specific (class-wise) universal masks.
The effectiveness of the proposed method is evaluated on both common and celebrity datasets.
arXiv Detail &amp; Related papers (2022-05-24T11:29:37Z) - Speaker De-identification System using Autoencoders and Adversarial Training [58.720142291102135]
We propose a speaker de-identification system based on adversarial training and autoencoders.
Experimental results show that combining adversarial learning and autoencoders increases the equal error rate of a speaker verification system.
arXiv Detail & Related papers (2020-11-09T19:22:05Z) - InfoScrub: Towards Attribute Privacy by Targeted Obfuscation [77.49428268918703]
We study techniques that allow individuals to limit the private information leaked in visual data.
We tackle this problem in a novel image obfuscation framework.
We find that our approach generates obfuscated images faithful to the original input images, while increasing uncertainty by 6.2× (or up to 0.85 bits) over the non-obfuscated counterparts.
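The "bits" figure above can be read as an entropy gain. As a small, hedged sketch of how such a number might be measured (the actual evaluation protocol is not described here), one can compare the Shannon entropy of an attribute classifier's predictive distribution before and after obfuscation; the distributions below are made up for illustration.

```python
# Measuring an uncertainty increase in bits via Shannon entropy of a
# classifier's predictive distribution, before vs. after obfuscation.
import math

def entropy_bits(probs):
    # Shannon entropy in bits; zero-probability terms contribute nothing.
    return -sum(p * math.log2(p) for p in probs if p > 0)

before = [0.9, 0.05, 0.05]   # confident attribute prediction on the original
after  = [0.4, 0.35, 0.25]   # diffuse prediction on the obfuscated image

gain = entropy_bits(after) - entropy_bits(before)  # uncertainty gained, in bits
```

A gain of 0.85 bits would mean the attacker's predictive distribution carries 0.85 bits more uncertainty about the private attribute than before obfuscation.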
arXiv Detail & Related papers (2020-05-20T19:48:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.