DNS Query Forgery: A Client-Side Defense Against Mobile App Traffic Profiling
- URL: http://arxiv.org/abs/2505.09374v1
- Date: Wed, 14 May 2025 13:25:19 GMT
- Title: DNS Query Forgery: A Client-Side Defense Against Mobile App Traffic Profiling
- Authors: Andrea Jimenez-Berenguel, César Gil, Carlos Garcia-Rubio, Jordi Forné, Celeste Campo
- Abstract summary: Mobile applications generate DNS queries that can reveal user behavioral patterns even when communications are encrypted. This paper presents a privacy enhancement framework based on query forgery to protect users against profiling attempts.
- Score: 0.7619637511583491
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Mobile applications continuously generate DNS queries that can reveal sensitive user behavioral patterns even when communications are encrypted. This paper presents a privacy enhancement framework based on query forgery to protect users against profiling attempts that leverage these background communications. We first mathematically model user profiles as probability distributions over interest categories derived from mobile application traffic. We then evaluate three query forgery strategies -- uniform sampling, TrackMeNot-based generation, and an optimized approach that minimizes Kullback-Leibler divergence -- to quantify their effectiveness in obfuscating user profiles. Next, we construct a synthetic dataset comprising 1,000 user traces built from real mobile application traffic and extract user profiles from the resulting DNS traffic. Our evaluation reveals that a 50% privacy improvement is achievable with less than 20% traffic overhead when using our approach, while achieving 100% privacy protection requires approximately 40-60% additional traffic. We further propose a modular system architecture for practical implementation of our protection mechanisms on mobile devices. This work offers a client-side privacy solution that operates without third-party trust requirements, empowering individual users to defend against traffic analysis without compromising application functionality.
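To make the query-forgery idea concrete, below is a minimal Python sketch of how a client could model its profile as a probability distribution over interest categories and inject forged DNS queries to flatten that profile. The category names, the greedy per-query heuristic, the uniform target distribution, and the placeholder domain are illustrative assumptions only; the paper's optimized strategy minimizes Kullback-Leibler divergence under a traffic-overhead budget and may use a different target profile and solver.

```python
from collections import Counter
import math

# Hypothetical interest categories; the paper derives its own taxonomy
# from mobile application traffic.
CATEGORIES = ["news", "social", "shopping", "health", "finance", "games"]

def profile(queries):
    """Empirical profile: distribution over categories estimated from a
    list of (domain, category) DNS query records."""
    counts = Counter(cat for _, cat in queries)
    total = sum(counts.values())
    return {c: counts.get(c, 0) / total for c in CATEGORIES}

def kl_divergence(p, q):
    """KL(p || q) over the category set; used here as the privacy metric."""
    return sum(p[c] * math.log(p[c] / q[c]) for c in CATEGORIES if p[c] > 0)

def forge_greedy(real_queries, budget):
    """Greedy stand-in for the optimized strategy: add up to `budget`
    forged queries, each time picking the category that most reduces the
    KL divergence between the observed profile and a uniform target."""
    uniform = {c: 1 / len(CATEGORIES) for c in CATEGORIES}
    counts = Counter(cat for _, cat in real_queries)
    forged = []
    for _ in range(budget):
        best_cat, best_kl = None, float("inf")
        for cat in CATEGORIES:
            trial = counts.copy()
            trial[cat] += 1
            total = sum(trial.values())
            p = {c: trial.get(c, 0) / total for c in CATEGORIES}
            kl = kl_divergence(p, uniform)
            if kl < best_kl:
                best_cat, best_kl = cat, kl
        counts[best_cat] += 1
        forged.append(("forged.example", best_cat))  # placeholder forged domain
    return forged

if __name__ == "__main__":
    # Toy trace skewed toward "news" and "social".
    real = ([("news.site", "news")] * 6
            + [("social.app", "social")] * 3
            + [("shop.site", "shopping")])
    uniform = {c: 1 / len(CATEGORIES) for c in CATEGORIES}
    before = kl_divergence(profile(real), uniform)
    forged = forge_greedy(real, budget=4)   # 4 forged / 10 real = 40% overhead
    after = kl_divergence(profile(real + forged), uniform)
    print(f"KL to uniform: {before:.3f} -> {after:.3f}, "
          f"overhead {len(forged) / len(real):.0%}")
```

On this toy trace, four forged queries (40% overhead) noticeably reduce the KL divergence between the observed profile and the uniform target, illustrating the privacy-versus-overhead trade-off the abstract reports; the actual numbers in the paper come from its 1,000-trace synthetic dataset, not from this sketch.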
Related papers
- Automated Profile Inference with Language Model Agents [67.32226960040514]
We study a new threat that LLMs pose to online pseudonymity, called automated profile inference. An adversary can instruct LLMs to automatically scrape and extract sensitive personal attributes from publicly visible user activities on pseudonymous platforms. We introduce an automated profiling framework called AutoProfiler to assess the feasibility of such threats in real-world scenarios.
arXiv Detail & Related papers (2025-05-18T13:05:17Z) - Optimal Transport-Guided Source-Free Adaptation for Face Anti-Spoofing [58.56017169759816]
We introduce a novel method in which the face anti-spoofing model can be adapted by the client itself to a target domain at test time. Specifically, we develop a prototype-based base model and an optimal transport-guided adaptor. In cross-domain and cross-attack settings, compared with recent methods, our method achieves average relative improvements of 19.17% in HTER and 8.58% in AUC.
arXiv Detail & Related papers (2025-03-29T06:10:34Z) - Hiding in Plain Sight: An IoT Traffic Camouflage Framework for Enhanced Privacy [2.0257616108612373]
Existing single-technique obfuscation methods, such as packet padding, often fall short in dynamic environments like smart homes. This paper introduces a multi-technique obfuscation framework designed to enhance privacy by disrupting traffic analysis.
arXiv Detail & Related papers (2025-01-26T04:33:44Z) - Personalized Federated Collaborative Filtering: A Variational AutoEncoder Approach [49.63614966954833]
Federated Collaborative Filtering (FedCF) is an emerging field focused on developing a new recommendation framework while preserving privacy. Existing FedCF methods typically combine distributed Collaborative Filtering (CF) algorithms with privacy-preserving mechanisms and encode personalized information into a user embedding vector. This paper proposes a novel personalized FedCF method that encodes users' personalized information into a latent variable and a neural model simultaneously.
arXiv Detail & Related papers (2024-08-16T05:49:14Z) - Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods rely on directly centralizing training data.
The paper proposes a novel federated face forgery detection learning scheme with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z) - On the Robustness of Topics API to a Re-Identification Attack [6.157783777246449]
Google proposed the Topics API framework as a privacy-friendly alternative for behavioural advertising.
This paper evaluates the robustness of the Topics API to a re-identification attack.
We find that the Topics API mitigates but cannot prevent re-identification from taking place, as there is a sizeable chance that a user's profile is unique within a website's audience.
arXiv Detail & Related papers (2023-06-08T10:53:48Z) - Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
The proposed defense, MESAS, is the first that is robust against strong adaptive adversaries; it is effective in real-world data scenarios with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z) - CrowdGuard: Federated Backdoor Detection in Federated Learning [39.58317527488534]
This paper presents a novel defense mechanism, CrowdGuard, that effectively mitigates backdoor attacks in Federated Learning.
CrowdGuard employs a server-located stacked clustering scheme to enhance its resilience to rogue client feedback.
The evaluation results demonstrate that CrowdGuard achieves a 100% True-Positive-Rate and True-Negative-Rate across various scenarios.
arXiv Detail & Related papers (2022-10-14T11:27:49Z) - mPSAuth: Privacy-Preserving and Scalable Authentication for Mobile Web Applications [0.0]
mPSAuth is an approach for continuously tracking various data sources reflecting user behavior and estimating the likelihood of the current user being legitimate.
We show that mPSAuth can provide high accuracy with low encryption and communication overhead, while the inference effort increases only to a tolerable extent.
arXiv Detail & Related papers (2022-10-07T12:49:34Z) - Cross-Network Social User Embedding with Hybrid Differential Privacy Guarantees [81.6471440778355]
We propose a Cross-network Social User Embedding framework, namely DP-CroSUE, to learn the comprehensive representations of users in a privacy-preserving way.
In particular, for each heterogeneous social network, we first introduce a hybrid differential privacy notion to capture the variation of privacy expectations for heterogeneous data types.
To further enhance user embeddings, a novel cross-network GCN embedding model is designed to transfer knowledge across networks through those aligned users.
arXiv Detail & Related papers (2022-09-04T06:22:37Z) - Collusion Resistant Federated Learning with Oblivious Distributed Differential Privacy [4.951247283741297]
Privacy-preserving federated learning enables a population of distributed clients to jointly learn a shared model.
We present an efficient mechanism based on oblivious distributed differential privacy that is the first to protect against client collusion.
We conclude with empirical analysis of the protocol's execution speed, learning accuracy, and privacy performance on two data sets.
arXiv Detail & Related papers (2022-02-20T19:52:53Z) - Masked LARk: Masked Learning, Aggregation and Reporting worKflow [6.484847460164177]
Many web advertising data flows involve passive cross-site tracking of users.
Most browsers are moving towards removing third-party cookies (3PC) in subsequent browser iterations.
We introduce a new proposal, called Masked LARk, for aggregating user engagement measurements and model training.
arXiv Detail & Related papers (2021-10-27T21:59:37Z) - Adaptive Webpage Fingerprinting from TLS Traces [13.009834690757614]
In webpage fingerprinting, an adversary infers the specific webpage loaded by a victim user by analysing the patterns in the encrypted TLS traffic exchanged between the user's browser and the website's servers.
This work studies modern webpage fingerprinting adversaries targeting traffic protected by the TLS protocol.
We introduce a TLS-specific model that: 1) scales to an unprecedented number of target webpages, 2) can accurately classify thousands of classes it never encountered during training, and 3) has low operational costs even in scenarios of frequent page updates.
arXiv Detail & Related papers (2020-10-19T15:13:07Z) - A Privacy-Preserving-Oriented DNN Pruning and Mobile Acceleration Framework [56.57225686288006]
Weight pruning of deep neural networks (DNNs) has been proposed to satisfy the limited storage and computing capability of mobile edge devices.
Previous pruning methods mainly focus on reducing the model size and/or improving performance without considering the privacy of user data.
We propose a privacy-preserving-oriented pruning and mobile acceleration framework that does not require the private training dataset.
arXiv Detail & Related papers (2020-03-13T23:52:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.