Measuring the Effectiveness of Privacy Policies for Voice Assistant
Applications
- URL: http://arxiv.org/abs/2007.14570v1
- Date: Wed, 29 Jul 2020 03:17:51 GMT
- Title: Measuring the Effectiveness of Privacy Policies for Voice Assistant
Applications
- Authors: Song Liao, Christin Wilson, Long Cheng, Hongxin Hu, Huixing Deng
- Abstract summary: We conduct the first large-scale data analytics to systematically measure the effectiveness of privacy policies provided by voice-app developers.
We analyzed 64,720 Amazon Alexa skills and 2,201 Google Assistant actions.
Our findings reveal a worrisome reality of privacy policies in two mainstream voice-app stores.
- Score: 12.150750035659383
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Voice Assistants (VA) such as Amazon Alexa and Google Assistant are quickly
and seamlessly integrating into people's daily lives. The increased reliance on
VA services raises privacy concerns such as the leakage of private
conversations and sensitive information. Privacy policies play an important
role in addressing users' privacy concerns and informing them about the data
collection, storage, and sharing practices. VA platforms (both Amazon Alexa and
Google Assistant) allow third-party developers to build new voice-apps and
publish them to the app store. Voice-app developers are required to provide
privacy policies to disclose their apps' data practices. However, little is
known about whether these privacy policies on emerging VA platforms are
informative and trustworthy. Moreover, because many users invoke voice-apps
through voice, there is a usability challenge in accessing these privacy
policies at all. In this paper, we conduct the first large-scale data
analytics to systematically measure the effectiveness of privacy policies
provided by voice-app developers on two mainstream VA platforms. We seek to
understand the quality and usability issues of privacy policies provided by
developers in the current app stores. We analyzed 64,720 Amazon Alexa skills
and 2,201 Google Assistant actions. Our work also includes a user study to
understand users' perspectives on VA's privacy policies. Our findings reveal a
worrisome reality of privacy policies in two mainstream voice-app stores, where
there exists a substantial number of problematic privacy policies.
Surprisingly, Google and Amazon even have official voice-apps that violate
their own privacy-policy requirements.
Related papers
- A Large-Scale Privacy Assessment of Android Third-Party SDKs [17.245330733308375]
Third-party Software Development Kits (SDKs) are widely adopted in Android app development.
This convenience raises substantial concerns about unauthorized access to users' privacy-sensitive information.
Our study offers a targeted analysis of user privacy protection among Android third-party SDKs.
arXiv Detail & Related papers (2024-09-16T15:44:43Z)
- VPVet: Vetting Privacy Policies of Virtual Reality Apps [27.62581114396347]
Virtual reality (VR) apps can harvest a wider range of user data than web/mobile apps running on personal computers or smartphones.
Existing law and privacy regulations emphasize that VR developers should inform users of what data are collected/used/shared (CUS) through privacy policies.
We propose VPVet to automatically vet privacy policy compliance issues for VR apps.
arXiv Detail & Related papers (2024-09-01T15:07:11Z)
- PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action [54.11479432110771]
PrivacyLens is a novel framework designed to extend privacy-sensitive seeds into expressive vignettes and further into agent trajectories.
We instantiate PrivacyLens with a collection of privacy norms grounded in privacy literature and crowdsourced seeds.
State-of-the-art LMs, like GPT-4 and Llama-3-70B, leak sensitive information in 25.68% and 38.69% of cases, even when prompted with privacy-enhancing instructions.
arXiv Detail & Related papers (2024-08-29T17:58:38Z)
- ATLAS: Automatically Detecting Discrepancies Between Privacy Policies and Privacy Labels [2.457872341625575]
We introduce the Automated Privacy Label Analysis System (ATLAS).
ATLAS identifies possible discrepancies between mobile app privacy policies and their privacy labels.
We find that, on average, apps have 5.32 such potential compliance issues.
arXiv Detail & Related papers (2023-05-24T05:27:22Z)
- PLUE: Language Understanding Evaluation Benchmark for Privacy Policies in English [77.79102359580702]
We introduce the Privacy Policy Language Understanding Evaluation benchmark, a multi-task benchmark for evaluating privacy policy language understanding.
We also collect a large corpus of privacy policies to enable privacy policy domain-specific language model pre-training.
We demonstrate that domain-specific continual pre-training offers performance improvements across all tasks.
arXiv Detail & Related papers (2022-12-20T05:58:32Z)
- Privacy Explanations - A Means to End-User Trust [64.7066037969487]
We investigated how explainability might help address this trust problem.
We created privacy explanations that aim to help to clarify to end users why and for what purposes specific data is required.
Our findings reveal that privacy explanations can be an important step towards increasing trust in software systems.
arXiv Detail & Related papers (2022-10-18T09:30:37Z)
- Privacy Policies Across the Ages: Content and Readability of Privacy Policies 1996--2021 [1.5229257192293197]
We analyze the 25-year history of privacy policies using methods from transparency research, machine learning, and natural language processing.
We collect a large-scale longitudinal corpus of privacy policies from 1996 to 2021.
Our results show that policies are getting longer and harder to read, especially after new regulations take effect.
arXiv Detail & Related papers (2022-01-21T15:13:02Z)
- Analysis of Longitudinal Changes in Privacy Behavior of Android Applications [79.71330613821037]
In this paper, we examine the trends in how Android apps have changed over time with respect to privacy.
We examine the adoption of HTTPS, whether apps scan the device for other installed apps, the use of permissions for privacy-sensitive data, and the use of unique identifiers.
We find that privacy-related behavior has improved over time as apps continue to receive updates, and that the third-party libraries used by apps are responsible for a larger share of privacy issues.
arXiv Detail & Related papers (2021-12-28T16:21:31Z)
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
- The Challenges and Impact of Privacy Policy Comprehension [0.0]
This paper experimentally manipulated the privacy-friendliness of an unavoidable and simple privacy policy.
Half of our participants miscomprehended even this transparent privacy policy.
To mitigate such pitfalls we present design recommendations to improve the quality of informed consent.
arXiv Detail & Related papers (2020-05-18T14:16:48Z)
- PGLP: Customizable and Rigorous Location Privacy through Policy Graph [68.3736286350014]
We propose a new location privacy notion called PGLP, which provides a rich interface to release private locations with customizable and rigorous privacy guarantees.
Specifically, we formalize a user's location privacy requirements using a location policy graph, which is expressive and customizable.
We also design a private location trace release framework that pipelines the detection of location exposure, policy graph repair, and private trajectory release with customizable and rigorous location privacy.
arXiv Detail & Related papers (2020-05-04T04:25:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.