Privacy in Speech Technology
- URL: http://arxiv.org/abs/2305.05227v2
- Date: Tue, 18 Jun 2024 10:00:26 GMT
- Title: Privacy in Speech Technology
- Authors: Tom Bäckström
- Abstract summary: This paper is a tutorial on privacy issues related to speech technology. It models threats, reviews approaches for protecting users' privacy, and discusses how to measure the performance of privacy-protecting methods. It also presents lines for further development where improvements are most urgently needed.
- Score: 8.99795279111323
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Speech technology for communication, accessing information and services has rapidly improved in quality. It is convenient and appealing because speech is the primary mode of communication for humans. Such technology, however, also presents proven threats to privacy. Speech is a tool for communication, so it inherently contains private information. Importantly, however, it also contains a wealth of side information, such as information related to health, emotions, affiliations, and relationships, all of which are private. Exposing such private information can lead to serious threats such as price gouging, harassment, extortion, and stalking. This paper is a tutorial on privacy issues related to speech technology: modeling their threats, approaches for protecting users' privacy, measuring the performance of privacy-protecting methods, the perception of privacy, and societal and legal consequences. In addition to a tutorial overview, it also presents lines for further development where improvements are most urgently needed.
Related papers
- Differential Privacy Overview and Fundamental Techniques [63.0409690498569]
This chapter is meant to be part of the book "Differential Privacy in Artificial Intelligence: From Theory to Practice"
It starts by illustrating various attempts to protect data privacy, emphasizing where and why they failed.
It then defines the key actors, tasks, and scopes that make up the domain of privacy-preserving data analysis.
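The fundamental techniques this chapter covers typically include noise-addition mechanisms. As a minimal illustrative sketch (not taken from the chapter itself), the classic Laplace mechanism answers a counting query with sensitivity 1 by adding Laplace-distributed noise scaled to 1/ε:

```python
import random
import math

def laplace_noise(scale):
    # Inverse-CDF sampling of Laplace(0, scale)
    u = random.random() - 0.5  # uniform in (-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_count(records, predicate, epsilon):
    """epsilon-differentially-private count via the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so noise scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```

Smaller ε means stronger privacy but noisier answers; the function names here are illustrative, not from the chapter.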
arXiv Detail & Related papers (2024-11-07T13:52:11Z)
- PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action [54.11479432110771]
PrivacyLens is a novel framework designed to extend privacy-sensitive seeds into expressive vignettes and further into agent trajectories.
We instantiate PrivacyLens with a collection of privacy norms grounded in privacy literature and crowdsourced seeds.
State-of-the-art LMs, like GPT-4 and Llama-3-70B, leak sensitive information in 25.68% and 38.69% of cases respectively, even when prompted with privacy-enhancing instructions.
arXiv Detail & Related papers (2024-08-29T17:58:38Z)
- PrivacyRestore: Privacy-Preserving Inference in Large Language Models via Privacy Removal and Restoration [18.11846784025521]
PrivacyRestore is a plug-and-play method to protect the privacy of user inputs during inference.
We create three datasets, covering medical and legal domains, to evaluate the effectiveness of PrivacyRestore.
arXiv Detail & Related papers (2024-06-03T14:57:39Z)
- Privacy-preserving Optics for Enhancing Protection in Face De-identification [60.110274007388135]
We propose a hardware-level face de-identification method to solve this vulnerability.
We also propose an anonymization framework that generates a new face using the privacy-preserving image, face heatmap, and a reference face image from a public dataset as input.
arXiv Detail & Related papers (2024-03-31T19:28:04Z)
- Privacy Explanations - A Means to End-User Trust [64.7066037969487]
We looked into how explainability might help to tackle this problem.
We created privacy explanations that aim to help to clarify to end users why and for what purposes specific data is required.
Our findings reveal that privacy explanations can be an important step towards increasing trust in software systems.
arXiv Detail & Related papers (2022-10-18T09:30:37Z)
- Momentum Gradient Descent Federated Learning with Local Differential Privacy [10.60240656423935]
In the big data era, concerns about the privacy of personal information have become more pronounced.
In this article, we propose integrating federated learning and local differential privacy with momentum gradient descent to improve the performance of machine learning models.
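The combination described above can be sketched roughly as follows: each client perturbs its gradient locally (local differential privacy), applies a momentum update, and the server averages the resulting models, FedAvg-style. The function names, the Laplace-noise choice, and all hyperparameters are illustrative assumptions, not the paper's exact algorithm:

```python
import random
import math

def laplace(scale):
    # Inverse-CDF sample from Laplace(0, scale)
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def client_update(w, grad_fn, velocity, lr=0.1, beta=0.9, noise_scale=0.1):
    """One client step: Laplace noise is added to each gradient coordinate
    locally, before anything leaves the client (local DP, illustrative)."""
    g = [gi + laplace(noise_scale) for gi in grad_fn(w)]
    # Momentum accumulation, then a descent step on the noisy gradient
    velocity = [beta * v + gi for v, gi in zip(velocity, g)]
    w = [wi - lr * v for wi, v in zip(w, velocity)]
    return w, velocity

def federated_round(global_w, client_grads):
    """Each client runs one noisy momentum step; the server averages."""
    updates = []
    for grad_fn in client_grads:
        w, _ = client_update(list(global_w), grad_fn, [0.0] * len(global_w))
        updates.append(w)
    # FedAvg-style aggregation: coordinate-wise mean of client models
    return [sum(ws) / len(updates) for ws in zip(*updates)]
```

Because the noise is injected on the client, the server never sees an exact gradient; averaging across clients also partially cancels the added noise.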
arXiv Detail & Related papers (2022-09-28T13:30:38Z)
- The Evolving Path of "the Right to Be Left Alone" - When Privacy Meets Technology [0.0]
This paper proposes a novel vision of the privacy ecosystem, introducing privacy dimensions, the related users' expectations, the privacy violations, and the changing factors.
We believe that promising approaches to tackle the privacy challenges move in two directions: (i) identification of effective privacy metrics; and (ii) adoption of formal tools to design privacy-compliant applications.
arXiv Detail & Related papers (2021-11-24T11:27:55Z)
- The Challenges and Impact of Privacy Policy Comprehension [0.0]
This paper experimentally manipulated the privacy-friendliness of an unavoidable and simple privacy policy.
Half of our participants miscomprehended even this transparent privacy policy.
To mitigate such pitfalls we present design recommendations to improve the quality of informed consent.
arXiv Detail & Related papers (2020-05-18T14:16:48Z)
- A vision for global privacy bridges: Technical and legal measures for international data markets [77.34726150561087]
Despite data protection laws and an acknowledged right to privacy, trading personal information has become a business equated with "trading oil".
An open conflict is arising between business demands for data and a desire for privacy.
We propose and test a vision of a personal information market with privacy.
arXiv Detail & Related papers (2020-05-13T13:55:50Z)
- Beyond privacy regulations: an ethical approach to data usage in transportation [64.86110095869176]
We describe how Federated Machine Learning can be applied to the transportation sector.
We see Federated Learning as a method that enables us to process privacy-sensitive data while respecting customers' privacy.
arXiv Detail & Related papers (2020-04-01T15:10:12Z)
- Dis-Empowerment Online: An Investigation of Privacy-Sharing Perceptions & Method Preferences [6.09170287691728]
We find that perception of privacy empowerment differs from that of sharing across dimensions of meaningfulness, competence and choice.
We find similarities and differences in privacy method preference between the US, UK and Germany.
By mapping the perception of privacy dis-empowerment into patterns of privacy behavior online, this paper provides an important foundation for future research.
arXiv Detail & Related papers (2020-03-19T19:17:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.