Measuring the Effectiveness of Privacy Policies for Voice Assistant
Applications
- URL: http://arxiv.org/abs/2007.14570v1
- Date: Wed, 29 Jul 2020 03:17:51 GMT
- Authors: Song Liao, Christin Wilson, Long Cheng, Hongxin Hu, Huixing Deng
- Abstract summary: We conduct the first large-scale data analytics to systematically measure the effectiveness of privacy policies provided by voice-app developers.
We analyzed 64,720 Amazon Alexa skills and 2,201 Google Assistant actions.
Our findings reveal a worrisome reality of privacy policies in two mainstream voice-app stores.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Voice Assistants (VA) such as Amazon Alexa and Google Assistant are quickly
and seamlessly integrating into people's daily lives. The increased reliance on
VA services raises privacy concerns such as the leakage of private
conversations and sensitive information. Privacy policies play an important
role in addressing users' privacy concerns and informing them about the data
collection, storage, and sharing practices. VA platforms (both Amazon Alexa and
Google Assistant) allow third-party developers to build new voice-apps and
publish them to the app store. Voice-app developers are required to provide
privacy policies to disclose their apps' data practices. However, little is
known about whether these privacy policies on emerging VA platforms are
informative and trustworthy. Moreover, because many users invoke voice-apps by
voice, there is also a usability challenge in accessing these privacy
policies. In this paper, we conduct the first large-scale data
analytics to systematically measure the effectiveness of privacy policies
provided by voice-app developers on two mainstream VA platforms. We seek to
understand the quality and usability issues of privacy policies provided by
developers in the current app stores. We analyzed 64,720 Amazon Alexa skills
and 2,201 Google Assistant actions. Our work also includes a user study to
understand users' perspectives on VA's privacy policies. Our findings reveal a
worrisome reality of privacy policies in two mainstream voice-app stores, where
there exists a substantial number of problematic privacy policies.
Surprisingly, Google and Amazon even have official voice-apps violating their
own requirements regarding the privacy policy.
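To make the kind of analysis concrete: a large-scale measurement like this typically scans voice-app metadata for obviously problematic policy entries before any deeper content analysis. A minimal sketch, where the field names and heuristics are our own assumptions rather than the authors' actual pipeline:

```python
# Hypothetical sketch: flag potentially problematic privacy-policy entries
# in a catalog of voice-apps. Field names and heuristics are illustrative
# assumptions, not the paper's actual measurement pipeline.

def flag_policy_issues(apps):
    """Return (app_name, issue) pairs for suspicious catalog entries."""
    issues = []
    seen_urls = {}
    for app in apps:
        url = (app.get("privacy_policy_url") or "").strip()
        if not url:
            issues.append((app["name"], "missing policy URL"))
            continue
        if not url.startswith(("http://", "https://")):
            issues.append((app["name"], "malformed policy URL"))
        if url in seen_urls and seen_urls[url] != app["developer"]:
            # The same policy document reused by unrelated developers
            # is a common red flag for boilerplate or copied policies.
            issues.append((app["name"], "policy URL shared across developers"))
        seen_urls.setdefault(url, app["developer"])
    return issues

apps = [
    {"name": "WeatherSkill", "developer": "A",
     "privacy_policy_url": "https://a.example/privacy"},
    {"name": "TriviaSkill", "developer": "B", "privacy_policy_url": ""},
    {"name": "NewsSkill", "developer": "C",
     "privacy_policy_url": "https://a.example/privacy"},
]
print(flag_policy_issues(apps))
# → [('TriviaSkill', 'missing policy URL'),
#    ('NewsSkill', 'policy URL shared across developers')]
```

A real measurement would also fetch each URL and check reachability and content; the point here is only the shape of the catalog-level checks.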
Related papers
- Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit's data.
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
arXiv Detail & Related papers (2024-06-20T13:54:32Z)
- Toward the Cure of Privacy Policy Reading Phobia: Automated Generation of Privacy Nutrition Labels From Privacy Policies
We propose the first framework that can automatically generate privacy nutrition labels from privacy policies.
Based on our ground-truth applications covering the Data Safety Report from the Google Play app store, our framework achieves a 0.75 F1-score on generating first-party data collection practices.
We also analyse the inconsistencies between ground-truth and curated privacy nutrition labels on the market, and our framework can detect 90.1% of under-claim issues.
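The reported 0.75 F1-score can be read as the harmonic mean of precision and recall over the set of predicted collection practices. A minimal sketch of such set-based scoring (the practice names and scoring choice are our own illustration, not the paper's evaluation code):

```python
def f1_score(predicted, ground_truth):
    """F1 over two sets of data-practice labels (e.g. from a nutrition label)."""
    predicted, ground_truth = set(predicted), set(ground_truth)
    if not predicted or not ground_truth:
        return 0.0
    tp = len(predicted & ground_truth)          # correctly predicted practices
    precision = tp / len(predicted)             # how many predictions were right
    recall = tp / len(ground_truth)             # how many true practices were found
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical practice labels for one app:
pred = {"location", "contact_info", "device_id"}
truth = {"location", "contact_info", "browsing_history", "device_id"}
print(round(f1_score(pred, truth), 3))  # → 0.857
```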
arXiv Detail & Related papers (2023-06-19T13:33:44Z)
- ATLAS: Automatically Detecting Discrepancies Between Privacy Policies and Privacy Labels
We introduce the Automated Privacy Label Analysis System (ATLAS).
ATLAS identifies possible discrepancies between mobile app privacy policies and their privacy labels.
We find that, on average, apps have 5.32 such potential compliance issues.
arXiv Detail & Related papers (2023-05-24T05:27:22Z)
- Security and Privacy Problems in Voice Assistant Applications: A Survey
Security and privacy threats have emerged with the rapid development of the Internet of Things (IoT).
The security issues surveyed include attack techniques against machine learning models and other hardware components widely used in voice assistant applications.
This paper summarizes and assesses five kinds of security attacks and three types of privacy threats reported in papers published at top-tier conferences in the cyber security and voice domains.
arXiv Detail & Related papers (2023-04-19T08:17:01Z) - PLUE: Language Understanding Evaluation Benchmark for Privacy Policies
in English [77.79102359580702]
We introduce the Privacy Policy Language Understanding Evaluation (PLUE) benchmark, a multi-task benchmark for evaluating privacy policy language understanding.
We also collect a large corpus of privacy policies to enable privacy policy domain-specific language model pre-training.
We demonstrate that domain-specific continual pre-training offers performance improvements across all tasks.
arXiv Detail & Related papers (2022-12-20T05:58:32Z) - Privacy Explanations - A Means to End-User Trust [64.7066037969487]
We looked into how explainability might help to tackle this problem.
We created privacy explanations that aim to clarify to end users why and for what purposes specific data is required.
Our findings reveal that privacy explanations can be an important step towards increasing trust in software systems.
arXiv Detail & Related papers (2022-10-18T09:30:37Z)
- Privacy Policies Across the Ages: Content and Readability of Privacy Policies 1996--2021
We analyze the 25-year history of privacy policies using methods from transparency research, machine learning, and natural language processing.
We collect a large-scale longitudinal corpus of privacy policies from 1996 to 2021.
Our results show that policies are getting longer and harder to read, especially after new regulations take effect.
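A common way to quantify "harder to read" is a readability formula such as Flesch Reading Ease. A self-contained sketch with a deliberately naive syllable counter (the metric choice and heuristics are our own illustration, not necessarily the paper's method):

```python
import re

def naive_syllables(word):
    """Very rough syllable estimate: count contiguous vowel groups."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text):
    """Flesch Reading Ease; lower scores indicate harder-to-read text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    n = max(1, len(words))
    syllables = sum(naive_syllables(w) for w in words)
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

# Hypothetical policy snippets: plain language vs. legalese.
simple = "We collect your name. We store it safely."
legal = ("Notwithstanding the aforementioned stipulations, personally "
         "identifiable information may be disseminated to affiliated "
         "third-party processors.")
print(flesch_reading_ease(simple) > flesch_reading_ease(legal))  # → True
```

Applying such a score year by year over a longitudinal corpus is one way to make "getting harder to read" a measurable trend.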
arXiv Detail & Related papers (2022-01-21T15:13:02Z)
- Analysis of Longitudinal Changes in Privacy Behavior of Android Applications
In this paper, we examine the trends in how Android apps have changed over time with respect to privacy.
We examine the adoption of HTTPS, whether apps scan the device for other installed apps, the use of permissions for privacy-sensitive data, and the use of unique identifiers.
We find that privacy-related behavior has improved over time as apps continue to receive updates, and that the third-party libraries used by apps are responsible for more of the privacy issues.
arXiv Detail & Related papers (2021-12-28T16:21:31Z)
- Private Reinforcement Learning with PAC and Regret Guarantees
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
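Joint differential privacy can be stated informally as follows (our paraphrase of the standard definition, not necessarily the paper's exact formulation):

```latex
% Informal statement of (\varepsilon)-joint differential privacy (JDP):
% for every user i, every pair of datasets D, D' differing only in
% user i's data, and every event S over the outputs sent to the
% other users,
\Pr\left[ M_{-i}(D) \in S \right]
  \le e^{\varepsilon} \, \Pr\left[ M_{-i}(D') \in S \right],
% where M_{-i} denotes the joint output of mechanism M to all users
% except i. User i's own output may depend arbitrarily on their data.
```

The weakening relative to standard DP is that each user's own output is exempt, which is what makes nontrivial personalized exploration possible.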
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
- The Challenges and Impact of Privacy Policy Comprehension
This paper experimentally manipulated the privacy-friendliness of an unavoidable and simple privacy policy.
Half of our participants miscomprehended even this transparent privacy policy.
To mitigate such pitfalls, we present design recommendations to improve the quality of informed consent.
arXiv Detail & Related papers (2020-05-18T14:16:48Z)
- PGLP: Customizable and Rigorous Location Privacy through Policy Graph
We propose a new location privacy notion called PGLP, which provides a rich interface to release private locations with customizable and rigorous privacy guarantees.
Specifically, we formalize a user's location privacy requirements using a location policy graph, which is expressive and customizable.
We also design a private location trace release framework that pipelines the detection of location exposure, policy graph repair, and private trajectory release with customizable and rigorous location privacy.
arXiv Detail & Related papers (2020-05-04T04:25:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.