Security and Privacy Analysis of Tile's Location Tracking Protocol
- URL: http://arxiv.org/abs/2510.00350v1
- Date: Tue, 30 Sep 2025 23:25:59 GMT
- Title: Security and Privacy Analysis of Tile's Location Tracking Protocol
- Authors: Akshaya Kumar, Anna Raymaker, Michael Specter
- Abstract summary: We conduct the first comprehensive security analysis of Tile, the second most popular crowd-sourced location-tracking service. We identify several exploitable vulnerabilities and design flaws, disproving many of the platform's claimed security and privacy guarantees. We examine Tile's accountability mechanism, a unique feature of independent interest.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We conduct the first comprehensive security analysis of Tile, the second most popular crowd-sourced location-tracking service behind Apple's AirTags. We identify several exploitable vulnerabilities and design flaws, disproving many of the platform's claimed security and privacy guarantees: Tile's servers can persistently learn the location of all users and tags, unprivileged adversaries can track users through Bluetooth advertisements emitted by Tile's devices, and Tile's anti-theft mode is easily subverted. Despite its wide deployment -- millions of users, devices, and purpose-built hardware tags -- Tile provides no formal description of its protocol or threat model. Worse, Tile intentionally weakens its anti-stalking features to support an anti-theft use case and relies on a novel "accountability" mechanism to punish those abusing the system to stalk victims. We examine Tile's accountability mechanism, a unique feature of independent interest; no other provider attempts to guarantee accountability. While an ideal accountability mechanism may disincentivize abuse in crowd-sourced location tracking protocols, we show that Tile's implementation is subvertible and introduces new exploitable vulnerabilities. We conclude with a discussion on the need for new, formal definitions of accountability in this setting.
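To illustrate the tracking risk the abstract describes, the sketch below models why a stable Bluetooth advertisement identifier lets an unprivileged passive observer link sightings of the same device over time, and why per-epoch identifier rotation (the standard anti-tracking defense) breaks that linkage. This is a simplified illustration, not Tile's actual protocol; the key, epoch scheme, and identifier derivation are all hypothetical.

```python
# Illustrative sketch (NOT Tile's actual protocol): linking BLE sightings.
import hashlib

def static_ad(device_key: bytes, epoch: int) -> bytes:
    # Device always advertises the same identifier, regardless of epoch.
    return hashlib.sha256(device_key).digest()[:6]

def rotating_ad(device_key: bytes, epoch: int) -> bytes:
    # Identifier is re-derived each epoch, so sightings in different
    # epochs cannot be trivially linked without the device key.
    return hashlib.sha256(device_key + epoch.to_bytes(4, "big")).digest()[:6]

def linkable(ad_fn, device_key: bytes, epoch_a: int, epoch_b: int) -> bool:
    # A passive observer links two sightings iff the advertised bytes match.
    return ad_fn(device_key, epoch_a) == ad_fn(device_key, epoch_b)

key = b"example-device-key"  # hypothetical per-device secret
print(linkable(static_ad, key, 1, 2))    # True: sightings link, device trackable
print(linkable(rotating_ad, key, 1, 2))  # False: rotation breaks linkage
```

A static identifier is trackable across epochs; a rotating one is not, which is why the paper's finding that Tile's devices emit linkable advertisements undermines the claimed privacy guarantees.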
Related papers
- Jailbreaking Leaves a Trace: Understanding and Detecting Jailbreak Attacks from Internal Representations of Large Language Models [2.6140509675507384]
We study jailbreaking from both security and interpretability perspectives. We propose a tensor-based latent representation framework that captures structure in hidden activations. Our results provide evidence that jailbreak behavior is rooted in identifiable internal structures.
arXiv Detail & Related papers (2026-02-12T02:43:17Z) - Okara: Detection and Attribution of TLS Man-in-the-Middle Vulnerabilities in Android Apps with Foundation Models [3.9807330903947378]
Transport Layer Security (TLS) is fundamental to secure online communication. Man-in-the-Middle (MitM) attacks remain a pervasive threat in Android apps. We present Okara, a framework that automates the detection and attribution of MitM vulnerabilities.
arXiv Detail & Related papers (2026-01-30T09:49:09Z) - Beyond Model Jailbreak: Systematic Dissection of the "Ten Deadly Sins" in Embodied Intelligence [36.972586142931256]
Embodied AI systems integrate language models with real-world sensing, mobility, and cloud-connected mobile apps. We conduct the first holistic security analysis of the Unitree Go2 platform. We uncover ten cross-layer vulnerabilities, the "Ten Sins of Embodied AI Security".
arXiv Detail & Related papers (2025-12-06T10:38:00Z) - The Trojan Knowledge: Bypassing Commercial LLM Guardrails via Harmless Prompt Weaving and Adaptive Tree Search [58.8834056209347]
Large language models (LLMs) remain vulnerable to jailbreak attacks that bypass safety guardrails to elicit harmful outputs. We introduce the Correlated Knowledge Attack Agent (CKA-Agent), a dynamic framework that reframes jailbreaking as an adaptive, tree-structured exploration of the target model's knowledge base.
arXiv Detail & Related papers (2025-12-01T07:05:23Z) - Bag of Tricks for Subverting Reasoning-based Safety Guardrails [62.139297207938036]
We present a bag of jailbreak methods that subvert reasoning-based guardrails. Our attacks span white-, gray-, and black-box settings and range from effortless template manipulations to fully automated optimization.
arXiv Detail & Related papers (2025-10-13T16:16:44Z) - Malice in Agentland: Down the Rabbit Hole of Backdoors in the AI Supply Chain [82.98626829232899]
Fine-tuning AI agents on data from their own interactions introduces a critical security vulnerability within the AI supply chain. We show that adversaries can easily poison the data collection pipeline to embed hard-to-detect backdoors.
arXiv Detail & Related papers (2025-10-03T12:47:21Z) - ASGuard: Activation-Scaling Guard to Mitigate Targeted Jailbreaking Attack [22.48980625853356]
Large language models (LLMs) exhibit brittle refusal behaviors that can be circumvented by simple linguistic changes. In this work, we introduce Activation-Scaling Guard (ASGuard), an insightful, mechanistically informed framework that surgically mitigates this specific vulnerability.
arXiv Detail & Related papers (2025-09-30T06:33:52Z) - GenBreak: Red Teaming Text-to-Image Generators Using Large Language Models [65.91565607573786]
Text-to-image (T2I) models can be misused to generate harmful content, including nudity or violence. Recent research on red-teaming and adversarial attacks against T2I models has notable limitations. We propose GenBreak, a framework that fine-tunes a red-team large language model (LLM) to systematically explore underlying vulnerabilities.
arXiv Detail & Related papers (2025-06-11T09:09:12Z) - CANTXSec: A Deterministic Intrusion Detection and Prevention System for CAN Bus Monitoring ECU Activations [53.036288487863786]
We propose CANTXSec, the first deterministic Intrusion Detection and Prevention system based on physical ECU activations. It detects and prevents classical attacks in the CAN bus, while detecting advanced attacks that have been less investigated in the literature. We prove the effectiveness of our solution on a physical testbed, where we achieve 100% detection accuracy in both classes of attacks while preventing 100% of FIAs.
arXiv Detail & Related papers (2025-05-14T13:37:07Z) - MASKDROID: Robust Android Malware Detection with Masked Graph Representations [56.09270390096083]
We propose MASKDROID, a powerful detector with a strong discriminative ability to identify malware.
We introduce a masking mechanism into the Graph Neural Network based framework, forcing MASKDROID to recover the whole input graph.
This strategy enables the model to understand the malicious semantics and learn more stable representations, enhancing its robustness against adversarial attacks.
arXiv Detail & Related papers (2024-09-29T07:22:47Z) - PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning [49.916365792036636]
Federated learning (FL) has recently gained significant momentum due to its potential to leverage large-scale distributed user data. The transmitted model updates can potentially leak sensitive user information, and the lack of central control of the local training process leaves the global model susceptible to malicious manipulations on model updates. We develop a general framework PriRoAgg, utilizing Lagrange coded computing and distributed zero-knowledge proof, to execute a wide range of robust aggregation algorithms while satisfying aggregated privacy.
arXiv Detail & Related papers (2024-07-12T03:18:08Z) - Securing the Invisible Thread: A Comprehensive Analysis of BLE Tracker Security in Apple AirTags and Samsung SmartTags [0.0]
This study presents an in-depth analysis of the security landscape in Bluetooth Low Energy (BLE) tracking systems.
Our investigation traverses a wide spectrum of attack vectors such as physical tampering, firmware exploitation, signal spoofing, eavesdropping, jamming, app security flaws, Bluetooth security weaknesses, location spoofing, threats to owner devices, and cloud-related vulnerabilities.
arXiv Detail & Related papers (2024-01-24T16:50:54Z) - FedTracker: Furnishing Ownership Verification and Traceability for Federated Learning Model [33.03362469978148]
Federated learning (FL) is a distributed machine learning paradigm allowing multiple clients to collaboratively train a global model without sharing their local data.
This poses a risk of unauthorized model distribution or resale by the malicious client, compromising the intellectual property rights of the FL group.
We present FedTracker, the first FL model protection framework that provides both ownership verification and traceability.
arXiv Detail & Related papers (2022-11-14T07:40:35Z) - Preserving Privacy and Security in Federated Learning [21.241705771577116]
We develop a principled framework that offers both privacy guarantees for users and detection against poisoning attacks from them.
Our framework enables the central server to identify poisoned model updates without violating the privacy guarantees of secure aggregation.
arXiv Detail & Related papers (2022-02-07T18:40:38Z) - Mind the GAP: Security & Privacy Risks of Contact Tracing Apps [75.7995398006171]
Google and Apple have jointly provided an API for exposure notification in order to implement decentralized contact tracing apps using Bluetooth Low Energy.
We demonstrate that in real-world scenarios the GAP design is vulnerable to (i) profiling and possibly de-anonymizing persons, and (ii) relay-based wormhole attacks that can generate fake contacts.
arXiv Detail & Related papers (2020-06-10T16:05:05Z)
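The relay-based wormhole attack in the entry above can be sketched in a few lines. This is a deliberately simplified model, not the actual Google/Apple Exposure Notification protocol: the point is that ephemeral identifiers are bound to a time window but not to a place, so an attacker who captures a beacon in one location and rebroadcasts it elsewhere within the window forges a contact that never physically happened. The `Beacon` structure, epoch scheme, and acceptance window are illustrative assumptions.

```python
# Simplified model of a BLE relay ("wormhole") attack on proximity tracing.
from dataclasses import dataclass

@dataclass
class Beacon:
    rolling_id: str  # ephemeral identifier broadcast by a nearby phone
    epoch: int       # time window in which it was emitted

def record_contact(received: Beacon, current_epoch: int, window: int = 1) -> bool:
    # Receivers accept any identifier whose epoch is recent enough;
    # nothing in the beacon tells them where it was originally emitted.
    return abs(current_epoch - received.epoch) <= window

genuine = Beacon(rolling_id="abc123", epoch=10)
# Attacker captures the beacon in city A and rebroadcasts it in city B
# during the same epoch: the victim in city B logs a contact that never
# physically occurred.
relayed = Beacon(rolling_id=genuine.rolling_id, epoch=genuine.epoch)
print(record_contact(relayed, current_epoch=10))  # True: fake contact accepted
print(record_contact(relayed, current_epoch=20))  # False: stale beacon rejected
```

The time-window check defeats replay of old beacons but not live relaying, which is exactly the gap the GAP paper exploits.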
This list is automatically generated from the titles and abstracts of the papers on this site.