The Impact of Privacy and Security Attitudes and Concerns of Travellers
on Their Willingness to Use Mobility-as-a-Service Systems
- URL: http://arxiv.org/abs/2312.00519v2
- Date: Sat, 17 Feb 2024 20:47:03 GMT
- Authors: Maria Sophia Heering, Haiyue Yuan, Shujun Li
- Abstract summary: This paper reports results from an online survey on the impact of travellers' privacy and security attitudes and concerns on their willingness to use MaaS systems.
Neither participants' attitudes nor their concerns over the privacy and security of personal data significantly impacted their decisions to use MaaS systems.
Having been a victim of improper invasion of privacy did not appear to affect individuals' intentions to use MaaS systems.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper reports results from an online survey on the impact of travellers'
privacy and security attitudes and concerns on their willingness to use
mobility-as-a-service (MaaS) systems. This study is part of a larger project
that aims at investigating barriers to potential MaaS uptake. The online survey
was designed to cover data privacy and security attitudes and concerns as well
as a variety of socio-psychological and socio-demographic variables associated
with travellers' intentions to use MaaS systems. The study involved $n=320$ UK
participants recruited via the Prolific survey platform. Overall, correlation
analysis and a multiple regression model indicated that neither participants'
attitudes nor their concerns over the privacy and security of personal data
significantly impacted their decisions to use MaaS systems, an unexpected
result; however, their trust in (commercial and governmental) websites did.
Another surprising result is that having been a victim of
improper invasion of privacy did not appear to affect individuals' intentions
to use MaaS systems, whereas frequency with which one heard about misuse of
personal data did. Implications of the results and future directions are also
discussed, e.g., MaaS providers are encouraged to work on improving the
trustworthiness of their corporate image.
Related papers
- Collection, usage and privacy of mobility data in the enterprise and public administrations [55.2480439325792]
Security measures such as anonymization are needed to protect individuals' privacy.
Within our study, we conducted expert interviews to gain insights into practices in the field.
We survey privacy-enhancing methods in use, which generally do not comply with state-of-the-art standards of differential privacy.
arXiv Detail & Related papers (2024-07-04T08:29:27Z)
- Toward the Tradeoffs between Privacy, Fairness and Utility in Federated Learning [10.473137837891162]
Federated Learning (FL) is a novel privacy-protection distributed machine learning paradigm.
We propose a privacy-protection fairness FL method to protect the privacy of the client model.
We characterize the relationship between privacy, fairness and utility, and show that there is a tradeoff between them.
arXiv Detail & Related papers (2023-11-30T02:19:35Z)
- Where have you been? A Study of Privacy Risk for Point-of-Interest Recommendation [20.526071564917274]
Mobility data can be used to build machine learning (ML) models for location-based services (LBS).
However, the convenience comes with the risk of privacy leakage since this type of data might contain sensitive information related to user identities, such as home/work locations.
We design a privacy attack suite containing data extraction and membership inference attacks tailored for point-of-interest (POI) recommendation models.
arXiv Detail & Related papers (2023-10-28T06:17:52Z)
- Data privacy for Mobility as a Service [3.6474839708864497]
Mobility as a Service (MaaS) is revolutionizing the transportation industry by offering convenient, efficient and integrated transportation solutions.
The extensive use of user data as well as the integration of multiple service providers raises significant privacy concerns.
arXiv Detail & Related papers (2023-09-18T21:58:35Z)
- Security and Privacy on Generative Data in AIGC: A Survey [17.456578314457612]
We review the security and privacy on generative data in AIGC.
We summarize the successes of state-of-the-art countermeasures in terms of the foundational properties of privacy, controllability, authenticity, and compliance.
arXiv Detail & Related papers (2023-09-18T02:35:24Z)
- A Comprehensive Picture of Factors Affecting User Willingness to Use Mobile Health Applications [62.60524178293434]
The aim of this paper is to investigate the factors that influence user acceptance of mHealth apps.
Users' digital literacy has the strongest impact on their willingness to use them, followed by their online habit of sharing personal information.
Users' demographic background, such as their country of residence, age, ethnicity, and education, has a significant moderating effect.
arXiv Detail & Related papers (2023-05-10T08:11:21Z)
- Protecting User Privacy in Online Settings via Supervised Learning [69.38374877559423]
We design an intelligent approach to online privacy protection that leverages supervised learning.
By detecting and blocking data collection that might infringe on a user's privacy, we can restore a degree of digital privacy to the user.
arXiv Detail & Related papers (2023-04-06T05:20:16Z)
- Cross-Network Social User Embedding with Hybrid Differential Privacy Guarantees [81.6471440778355]
We propose a Cross-network Social User Embedding framework, namely DP-CroSUE, to learn the comprehensive representations of users in a privacy-preserving way.
In particular, for each heterogeneous social network, we first introduce a hybrid differential privacy notion to capture the variation of privacy expectations for heterogeneous data types.
To further enhance user embeddings, a novel cross-network GCN embedding model is designed to transfer knowledge across networks through those aligned users.
arXiv Detail & Related papers (2022-09-04T06:22:37Z)
- Survey: Leakage and Privacy at Inference Time [59.957056214792665]
Leakage of data from publicly available Machine Learning (ML) models is an area of growing significance.
We focus on inference-time leakage, as the most likely scenario for publicly available models.
We propose a taxonomy across involuntary and malevolent leakage, available defences, followed by the currently available assessment metrics and applications.
arXiv Detail & Related papers (2021-07-04T12:59:16Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks, trained with differential privacy, in some settings might be even more vulnerable in comparison to non-private versions.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
- Users' Concern for Privacy in Context-Aware Reasoning Systems [0.17205106391379021]
People are more concerned about third parties accessing data gathered by environmental sensors as compared to physiological sensors.
Participants indicated greater concern about unfamiliar third parties as opposed to familiar third parties.
These concerns are predicted and (to a lesser degree) causally affected by people's beliefs about how much can be inferred from these types of data.
arXiv Detail & Related papers (2020-07-03T09:13:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.