Evaluating the Security and Privacy Risk Postures of Virtual Assistants
- URL: http://arxiv.org/abs/2312.14633v1
- Date: Fri, 22 Dec 2023 12:10:52 GMT
- Title: Evaluating the Security and Privacy Risk Postures of Virtual Assistants
- Authors: Borna Kalhor, Sanchari Das
- Abstract summary: We evaluated the security and privacy postures of eight widely used voice assistants: Alexa, Braina, Cortana, Google Assistant, Kalliope, Mycroft, Hound, and Extreme.
Results revealed that these VAs are vulnerable to a range of security threats.
These vulnerabilities could allow malicious actors to gain unauthorized access to users' personal information.
- Score: 3.1943453294492543
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Virtual assistants (VAs) have seen increased use in recent years due to their
ease of use for daily tasks. Despite their growing prevalence, their security
and privacy implications are still not well understood. To address this gap, we
conducted a study to evaluate the security and privacy postures of eight widely
used voice assistants: Alexa, Braina, Cortana, Google Assistant, Kalliope,
Mycroft, Hound, and Extreme. We used three vulnerability testing tools,
AndroBugs, RiskInDroid, and MobSF, to assess the security and privacy of these
VAs. Our analysis focused on five areas: code, access control, tracking, binary
analysis, and sensitive data confidentiality. The results revealed that these
VAs are vulnerable to a range of security threats, including not validating SSL
certificates, executing raw SQL queries, and using a weak mode of the AES
algorithm. These vulnerabilities could allow malicious actors to gain
unauthorized access to users' personal information. This study is a first step
toward understanding the risks associated with these technologies and provides
a foundation for future research to develop more secure and privacy-respecting
VAs.
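To make the reported vulnerability classes concrete, the sketch below contrasts a weak AES configuration (ECB mode) with an authenticated alternative (GCM) in Java. This is a minimal illustration of the pattern such scanners flag, not code from the audited VAs; class and variable names are invented.

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

// Minimal sketch of the "weak mode of the AES algorithm" finding:
// ECB encrypts identical plaintext blocks to identical ciphertext
// blocks, leaking structure; GCM adds a nonce and an integrity check.
public class AesModeExample {
    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey key = kg.generateKey();

        // Weak pattern flagged by tools such as MobSF:
        Cipher weak = Cipher.getInstance("AES/ECB/PKCS5Padding");
        weak.init(Cipher.ENCRYPT_MODE, key);

        // Safer pattern: GCM with a fresh 12-byte nonce per message.
        byte[] nonce = new byte[12];
        new SecureRandom().nextBytes(nonce);
        Cipher safe = Cipher.getInstance("AES/GCM/NoPadding");
        safe.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, nonce));

        byte[] ciphertext = safe.doFinal(
                "user utterance".getBytes(StandardCharsets.UTF_8));
        System.out.println("GCM ciphertext length: " + ciphertext.length);
    }
}
```

The other two reported classes follow the same shape: user-controlled strings concatenated into raw SQL queries should instead be bound as parameters, and custom trust managers that accept every SSL certificate should be removed in favor of the platform's default validation.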
Related papers
- Security Analysis of Top-Ranked mHealth Fitness Apps: An Empirical Study [0.32885740436059047]
We investigate the security vulnerabilities of ten top-ranked Android health and fitness apps, a set that accounts for 237 million downloads.
Our findings revealed many vulnerabilities, such as insecure coding, hardcoded sensitive information, over-privileged permissions, misconfiguration, and excessive communication with third-party domains.
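As a hypothetical illustration of the hardcoded-secret finding (the key name and value below are invented, not taken from any audited app):

```java
public class HardcodedSecretExample {
    // Vulnerable pattern: a credential baked into the binary; anyone
    // can recover it by decompiling the APK, and static analyzers
    // flag it as hardcoded sensitive information.
    static final String API_KEY = "sk_live_EXAMPLE_DO_NOT_SHIP";

    // Safer pattern: resolve the secret at run time from the
    // environment or a platform keystore instead of shipping it.
    static String apiKeyFromEnvironment() {
        String key = System.getenv("FITNESS_API_KEY");
        if (key == null) {
            throw new IllegalStateException("FITNESS_API_KEY not set");
        }
        return key;
    }

    public static void main(String[] args) {
        System.out.println(apiKeyFromEnvironment());
    }
}
```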
arXiv Detail & Related papers (2024-09-27T08:11:45Z)
- A Large-Scale Privacy Assessment of Android Third-Party SDKs [17.245330733308375]
Third-party Software Development Kits (SDKs) are widely adopted in Android app development.
This convenience raises substantial concerns about unauthorized access to users' privacy-sensitive information.
Our study offers a targeted analysis of user privacy protection among Android third-party SDKs.
arXiv Detail & Related papers (2024-09-16T15:44:43Z)
- Privacy-preserving Optics for Enhancing Protection in Face De-identification [60.110274007388135]
We propose a hardware-level face de-identification method to address this vulnerability.
We also propose an anonymization framework that generates a new face using the privacy-preserving image, face heatmap, and a reference face image from a public dataset as input.
arXiv Detail & Related papers (2024-03-31T19:28:04Z)
- A Survey of Privacy-Preserving Model Explanations: Privacy Risks, Attacks, and Countermeasures [50.987594546912725]
Despite a growing corpus of research in AI privacy and explainability, little attention has been paid to privacy-preserving model explanations.
This article presents the first thorough survey about privacy attacks on model explanations and their countermeasures.
arXiv Detail & Related papers (2024-03-31T12:44:48Z)
- Understanding Hackers' Work: An Empirical Study of Offensive Security Practitioners [0.0]
Offensive security tests are a common way to proactively discover potential vulnerabilities.
The chronic shortage of white-hat hackers prevents sufficient security test coverage of software.
Research into automation tries to alleviate this problem by improving the efficiency of security testing.
arXiv Detail & Related papers (2023-08-14T10:35:26Z)
- Privacy Explanations - A Means to End-User Trust [64.7066037969487]
We investigated how explainability might help tackle this problem.
We created privacy explanations that aim to clarify to end users why, and for what purposes, specific data is required.
Our findings reveal that privacy explanations can be an important step towards increasing trust in software systems.
arXiv Detail & Related papers (2022-10-18T09:30:37Z)
- Privacy of Autonomous Vehicles: Risks, Protection Methods, and Future Directions [23.778855805039438]
We provide a new taxonomy for privacy risks and protection methods in AVs.
We categorize privacy in AVs into three levels: individual, population, and proprietary.
arXiv Detail & Related papers (2022-09-08T20:16:21Z)
- Voting for the right answer: Adversarial defense for speaker verification [79.10523688806852]
Automatic speaker verification (ASV) is threatened by adversarial attacks, which sound similar to their original counterparts to human listeners.
We propose the idea of "voting for the right answer" to prevent risky decisions of ASV in blind spot areas.
Experimental results show that the proposed method improves robustness against limited-knowledge attackers.
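A minimal sketch of the voting idea, assuming a black-box verifier and Gaussian-sampled neighbors (both are illustrative assumptions, not details confirmed by the paper):

```java
import java.util.Random;
import java.util.function.Function;

// Sketch: instead of trusting a single ASV decision, score several
// randomly perturbed copies of the input and take a majority vote.
public class VotingAsvDefense {
    private final Function<float[], Boolean> verifier; // black-box ASV
    private final int numNeighbors;
    private final float noiseScale;
    private final Random rng = new Random(42);

    public VotingAsvDefense(Function<float[], Boolean> verifier,
                            int numNeighbors, float noiseScale) {
        this.verifier = verifier;
        this.numNeighbors = numNeighbors;
        this.noiseScale = noiseScale;
    }

    // Accept only if a strict majority of perturbed neighbors pass,
    // which blunts adversarial inputs sitting in decision blind spots.
    public boolean verify(float[] utterance) {
        int accepts = 0;
        for (int i = 0; i < numNeighbors; i++) {
            float[] neighbor = utterance.clone();
            for (int j = 0; j < neighbor.length; j++) {
                neighbor[j] += (float) (rng.nextGaussian() * noiseScale);
            }
            if (verifier.apply(neighbor)) {
                accepts++;
            }
        }
        return accepts * 2 > numNeighbors;
    }
}
```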
arXiv Detail & Related papers (2021-06-15T04:05:28Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
- More Than Privacy: Applying Differential Privacy in Key Areas of Artificial Intelligence [62.3133247463974]
We show that differential privacy can do more than just privacy preservation in AI.
It can also be used to improve security, stabilize learning, build fair models, and impose composition in selected areas of AI.
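For background, the building block behind these uses is the Laplace mechanism; a minimal textbook sketch (standard construction, not code from the paper):

```java
import java.util.Random;

// Laplace mechanism: release f(D) + Laplace(sensitivity / epsilon),
// the standard construction for epsilon-differential privacy.
public class LaplaceMechanism {
    private static final Random RNG = new Random();

    // Sample Laplace(0, scale) by inverting the CDF of a uniform draw.
    static double laplaceNoise(double scale) {
        double u = RNG.nextDouble() - 0.5; // uniform in (-0.5, 0.5)
        return -scale * Math.signum(u) * Math.log(1 - 2 * Math.abs(u));
    }

    // sensitivity = max change one individual's record can cause in f.
    static double privatize(double trueAnswer, double sensitivity, double epsilon) {
        return trueAnswer + laplaceNoise(sensitivity / epsilon);
    }

    public static void main(String[] args) {
        // Example: a counting query (sensitivity 1) at epsilon = 0.5.
        System.out.println(privatize(1234.0, 1.0, 0.5));
    }
}
```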
arXiv Detail & Related papers (2020-08-05T03:07:36Z)
- Privacy in Deep Learning: A Survey [16.278779275923448]
The ever-growing advances of deep learning in many areas have led to the adoption of Deep Neural Networks (DNNs) in production systems.
The availability of large datasets and high computational power are the main contributors to these advances.
This poses serious privacy concerns as this data can be misused or leaked through various vulnerabilities.
arXiv Detail & Related papers (2020-04-25T23:47:25Z)