(Mis-)Informed Consent: Predatory Apps and the Exploitation of Populations with Limited Literacy
- URL: http://arxiv.org/abs/2601.17025v1
- Date: Fri, 16 Jan 2026 20:23:33 GMT
- Title: (Mis-)Informed Consent: Predatory Apps and the Exploitation of Populations with Limited Literacy
- Authors: Muhammad Muneeb Pervez, Muhammad Qasim Atiq Ullah, Ibrahim Ahmed Khan, Roshnik Rahat, Muhammad Fareed Zaffar, Rashid Tahir, Talal Rahwan, Yasir Zaki
- Abstract summary: This paper examines how informed consent is often abused by predatory financial applications. We analyze a dataset of 50 Google Play Store apps to measure how many omit or obfuscate critical privacy disclosures. Our findings show that 85% of study participants did not understand basic app permissions.
- Score: 1.5370108793508594
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Among populations with limited literacy in emerging digital markets, the adoption of mobile phones, combined with comprehension barriers and poor cybersecurity hygiene, has created hidden privacy risks. This paper examines how informed consent is often abused by predatory financial applications, leading to financial scams that disproportionately affect users with low literacy. We focus on predatory loan, gambling, and trading apps, analyzing a dataset of 50 Google Play Store apps to measure how many omit or obfuscate critical privacy disclosures. We also evaluate comprehension gaps among users with low literacy via a targeted user study and assess whether Large Language Model (LLM)-generated summaries, translations, and visual cues can improve consent clarity. Our findings show that 85% of study participants did not understand basic app permissions, underscoring the urgent need for stronger regulatory oversight and scalable LLM-driven privacy-literacy tools.
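The abstract's closing call for "scalable LLM-driven privacy-literacy tools" can be illustrated with a minimal sketch of plain-language permission explanation. The permission identifiers below are real Android constants, but the mapping, the function name, and the wording are illustrative assumptions, not the authors' tool; a production system would generate summaries and translations with an LLM rather than a fixed lookup table.

```python
# Sketch: translate raw Android permission identifiers into plain-language
# explanations a low-literacy user could understand. The permission names
# are genuine Android identifiers; the explanatory text is assumed.

PLAIN_LANGUAGE = {
    "android.permission.READ_CONTACTS": "The app can read your contact list.",
    "android.permission.ACCESS_FINE_LOCATION": "The app can track your exact location.",
    "android.permission.READ_SMS": "The app can read your text messages.",
    "android.permission.CAMERA": "The app can take pictures and record video.",
}

def explain_permissions(requested):
    """Return one plain-language line per requested permission,
    flagging anything not in the lookup table for manual review."""
    explanations = []
    for perm in requested:
        text = PLAIN_LANGUAGE.get(perm)
        if text is None:
            # Unknown permissions are surfaced, never silently dropped:
            # obfuscated disclosures are exactly what the paper measures.
            text = f"Unrecognized permission: {perm} (review manually)."
        explanations.append(text)
    return explanations

if __name__ == "__main__":
    for line in explain_permissions([
        "android.permission.READ_SMS",
        "android.permission.ACCESS_FINE_LOCATION",
    ]):
        print(line)
```

In an LLM-backed variant, the lookup table would be replaced by a model prompt that summarizes and translates each permission's official description, keeping the same surface-unknowns-for-review behavior.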
Related papers
- When Ads Become Profiles: Large-Scale Audit of Algorithmic Biases and LLM Profiling Risks [10.267951162011475]
Automated ad targeting on social media is opaque, creating risks of exploitation and invisibility to external scrutiny. We introduce a multi-stage auditing framework to investigate these risks. A large-scale audit of over 435,000 ad impressions delivered to 891 Australian Facebook users reveals algorithmic biases.
arXiv Detail & Related papers (2025-09-23T10:10:37Z) - Evaluating Language Model Reasoning about Confidential Information [95.64687778185703]
We study whether language models exhibit contextual robustness, or the capability to adhere to context-dependent safety specifications. We develop a benchmark (PasswordEval) that measures whether language models can correctly determine when a user request is authorized. We find that current open- and closed-source models struggle with this seemingly simple task, and that, perhaps surprisingly, reasoning capabilities do not generally improve performance.
arXiv Detail & Related papers (2025-08-27T15:39:46Z) - SoK: The Privacy Paradox of Large Language Models: Advancements, Privacy Risks, and Mitigation [9.414685411687735]
Large language models (LLMs) are sophisticated artificial intelligence systems that enable machines to generate human-like text with remarkable precision. This paper provides a comprehensive analysis of privacy in LLMs, categorizing the challenges into four main areas. We evaluate the effectiveness and limitations of existing mitigation mechanisms targeting these proposed privacy challenges and identify areas for further research.
arXiv Detail & Related papers (2025-06-15T03:14:03Z) - Deep Learning Approaches for Anti-Money Laundering on Mobile Transactions: Review, Framework, and Directions [51.43521977132062]
Money laundering is a financial crime that obscures the origin of illicit funds. The proliferation of mobile payment platforms and smart IoT devices has significantly complicated anti-money laundering investigations. This paper conducts a comprehensive review of deep learning solutions and the challenges associated with their use in AML.
arXiv Detail & Related papers (2025-03-13T05:19:44Z) - Are We There Yet? Revealing the Risks of Utilizing Large Language Models in Scholarly Peer Review [66.73247554182376]
Large language models (LLMs) have led to their integration into peer review. The unchecked adoption of LLMs poses significant risks to the integrity of the peer review system. We show that manipulating 5% of the reviews could potentially cause 12% of the papers to lose their position in the top 30% rankings.
arXiv Detail & Related papers (2024-12-02T16:55:03Z) - Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z) - MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z) - Rescriber: Smaller-LLM-Powered User-Led Data Minimization for LLM-Based Chatbots [2.2447085410328103]
Rescriber is a browser extension that supports user-led data minimization in LLM-based conversational agents. Our studies showed that Rescriber helped users reduce unnecessary disclosure and addressed their privacy concerns. Our findings confirm the viability of smaller-LLM-powered, user-facing, on-device privacy controls.
arXiv Detail & Related papers (2024-10-10T01:23:16Z) - CLAMBER: A Benchmark of Identifying and Clarifying Ambiguous Information Needs in Large Language Models [60.59638232596912]
We introduce CLAMBER, a taxonomy-based benchmark for evaluating how well large language models (LLMs) identify and clarify ambiguous user queries.
Building upon the taxonomy, we construct 12K high-quality data to assess the strengths, weaknesses, and potential risks of various off-the-shelf LLMs.
Our findings indicate the limited practical utility of current LLMs in identifying and clarifying ambiguous user queries.
arXiv Detail & Related papers (2024-05-20T14:34:01Z) - The Adoption and Efficacy of Large Language Models: Evidence From Consumer Complaints in the Financial Industry [2.300664273021602]
This research explores the effect of Large Language Models (LLMs) on consumer complaints submitted to the Consumer Financial Protection Bureau from 2015 to 2024. We find that LLM usage is associated with an increased likelihood of obtaining relief from financial firms.
arXiv Detail & Related papers (2023-11-28T04:07:34Z) - "It's a Fair Game", or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents [27.480959048351973]
The widespread use of Large Language Model (LLM)-based conversational agents (CAs) raises many privacy concerns.
We analyzed sensitive disclosures in real-world ChatGPT conversations and conducted semi-structured interviews with 19 LLM-based CA users.
We found that users are constantly faced with trade-offs between privacy, utility, and convenience when using LLM-based CAs.
arXiv Detail & Related papers (2023-09-20T21:34:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.