The Impact of Privacy and Security Attitudes and Concerns of Travellers
on Their Willingness to Use Mobility-as-a-Service Systems
- URL: http://arxiv.org/abs/2312.00519v2
- Date: Sat, 17 Feb 2024 20:47:03 GMT
- Title: The Impact of Privacy and Security Attitudes and Concerns of Travellers
on Their Willingness to Use Mobility-as-a-Service Systems
- Authors: Maria Sophia Heering, Haiyue Yuan, Shujun Li
- Abstract summary: This paper reports results from an online survey on the impact of travellers' privacy and security attitudes and concerns on their willingness to use MaaS systems.
Neither participants' attitudes nor their concerns over the privacy and security of personal data significantly impacted their decisions to use MaaS systems.
Having been a victim of an improper invasion of privacy did not appear to affect individuals' intentions to use MaaS systems.
- Score: 2.532202013576547
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper reports results from an online survey on the impact of travellers'
privacy and security attitudes and concerns on their willingness to use
mobility-as-a-service (MaaS) systems. This study is part of a larger project
that aims at investigating barriers to potential MaaS uptake. The online survey
was designed to cover data privacy and security attitudes and concerns as well
as a variety of socio-psychological and socio-demographic variables associated
with travellers' intentions to use MaaS systems. The study involved $n=320$ UK
participants recruited via the Prolific survey platform. Overall, correlation
analysis and a multiple regression model indicated that neither participants'
attitudes nor their concerns over the privacy and security of personal data
significantly impacted their decisions to use MaaS systems, which was an
unexpected result; however, their trust in (commercial and governmental)
websites did. Another surprising result is that having been a victim of an
improper invasion of privacy did not appear to affect individuals' intentions
to use MaaS systems, whereas the frequency with which one heard about misuse
of personal data did. Implications of the results and future directions are
also discussed; e.g., MaaS providers are encouraged to work on improving the
trustworthiness of their corporate image.
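To make the analysis concrete, here is a minimal sketch of the kind of correlation and multiple regression analysis the abstract describes, in Python with pandas and statsmodels. All column names and the input file are hypothetical placeholders, not taken from the paper's actual survey instrument.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical survey data: one row per participant (n = 320), with
# Likert-scale composites for each construct. All names are assumptions.
df = pd.read_csv("maas_survey.csv")

predictors = [
    "privacy_attitude",         # attitudes towards data privacy/security
    "privacy_concern",          # concerns over personal data
    "trust_in_websites",        # trust in commercial/governmental websites
    "privacy_invasion_victim",  # 1 if previously a victim, else 0
    "heard_misuse_frequency",   # how often one heard about data misuse
]

# Pairwise correlations between each predictor and the outcome
print(df[predictors + ["intention_to_use_maas"]].corr()["intention_to_use_maas"])

# Multiple regression of intention to use MaaS on all predictors
X = sm.add_constant(df[predictors])
model = sm.OLS(df["intention_to_use_maas"], X).fit()
print(model.summary())  # coefficients and p-values per predictor
```

Under such a model, "neither attitudes nor concerns significantly impact intentions" would correspond to non-significant p-values on those coefficients, while trust in websites would carry a significant one.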
Related papers
- MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure the effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
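As a rough illustration of the personalization idea, here is a sketch of how an intervention prompt might be tailored to a user's demographics and beliefs; the function and profile fields are hypothetical, not MisinfoEval's actual interface.

```python
# Hypothetical sketch: assemble a personalized misinformation-correction
# prompt for an LLM from a user profile. All field names are assumptions.
def build_intervention_prompt(claim: str, profile: dict) -> str:
    return (
        f"A {profile['age_group']} user from {profile['region']} who "
        f"believes '{profile['prior_belief']}' shared this claim:\n"
        f"  {claim}\n"
        "Write a short, respectful correction tailored to this user, "
        "citing verifiable evidence."
    )

prompt = build_intervention_prompt(
    claim="Vaccines contain tracking microchips.",
    profile={"age_group": "35-44", "region": "UK",
             "prior_belief": "official sources hide information"},
)
print(prompt)  # would be sent to an LLM to generate the intervention
```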
arXiv Detail & Related papers (2024-10-13T18:16:50Z)
- Enabling Humanitarian Applications with Targeted Differential Privacy [0.39462888523270856]
This paper develops an approach to implementing algorithmic decisions based on personal data while providing formal privacy guarantees to data subjects.
We show that stronger privacy guarantees typically come at some cost.
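To make that privacy-utility cost concrete, here is a minimal sketch of the standard Laplace mechanism (a generic differential privacy building block, not necessarily this paper's exact method): smaller epsilon means stronger privacy but noisier answers.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with epsilon-differential privacy by adding
    Laplace noise with scale sensitivity / epsilon."""
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

true_count = 1000.0  # e.g., number of eligible recipients (sensitivity 1)
for eps in (0.1, 1.0, 10.0):
    noisy = laplace_mechanism(true_count, sensitivity=1.0, epsilon=eps)
    # Stronger privacy (smaller eps) -> larger expected error (the "cost")
    print(f"epsilon={eps:>4}: noisy count = {noisy:.1f}")
```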
arXiv Detail & Related papers (2024-08-24T01:34:37Z)
- Preserving Privacy in Large Language Models: A Survey on Current Threats and Solutions [12.451936012379319]
Large Language Models (LLMs) represent a significant advancement in artificial intelligence, finding applications across various domains.
Their reliance on massive internet-sourced datasets for training raises notable privacy concerns.
Certain application-specific scenarios may require fine-tuning these models on private data.
arXiv Detail & Related papers (2024-08-10T05:41:19Z)
- Differentially Private Data Release on Graphs: Inefficiencies and Unfairness [48.96399034594329]
This paper characterizes the impact of Differential Privacy on bias and unfairness in the context of releasing information about networks.
We consider a network release problem where the network structure is known to all, but the weights on edges must be released privately.
Our work provides theoretical foundations for, and empirical evidence of, the bias and unfairness arising from privacy in these networked decision problems.
arXiv Detail & Related papers (2024-08-08T08:37:37Z)
- Toward the Tradeoffs between Privacy, Fairness and Utility in Federated Learning [10.473137837891162]
Federated Learning (FL) is a novel privacy-preserving distributed machine learning paradigm.
We propose a privacy-protecting fairness FL method that protects the privacy of the client model.
We characterize the relationship between privacy, fairness, and utility, showing that there is a tradeoff among them.
arXiv Detail & Related papers (2023-11-30T02:19:35Z)
- Data privacy for Mobility as a Service [3.6474839708864497]
Mobility as a Service (MaaS) is revolutionizing the transportation industry by offering convenient, efficient and integrated transportation solutions.
The extensive use of user data as well as the integration of multiple service providers raises significant privacy concerns.
arXiv Detail & Related papers (2023-09-18T21:58:35Z)
- Security and Privacy on Generative Data in AIGC: A Survey [17.456578314457612]
We review security and privacy issues concerning generative data in AI-generated content (AIGC).
We summarize the successful experiences of state-of-the-art countermeasures in terms of the foundational properties of privacy, controllability, authenticity, and compliance.
arXiv Detail & Related papers (2023-09-18T02:35:24Z)
- A Comprehensive Picture of Factors Affecting User Willingness to Use Mobile Health Applications [62.60524178293434]
The aim of this paper is to investigate the factors that influence user acceptance of mHealth apps.
Users' digital literacy has the strongest impact on their willingness to use them, followed by their online habit of sharing personal information.
Users' demographic background, such as their country of residence, age, ethnicity, and education, has a significant moderating effect.
arXiv Detail & Related papers (2023-05-10T08:11:21Z)
- Protecting User Privacy in Online Settings via Supervised Learning [69.38374877559423]
We design an intelligent approach to online privacy protection that leverages supervised learning.
By detecting and blocking data collection that might infringe on a user's privacy, we can restore a degree of digital privacy to the user.
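As a loose illustration of this supervised learning idea (a generic sketch, not the paper's actual features or model), one might train a classifier on labelled web requests and block those predicted to be privacy-infringing:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: one row per web request, with illustrative
# features [is_third_party, num_cookies, sends_device_id]; label 1 = tracker.
X_train = np.array([
    [1, 5, 1],
    [1, 3, 1],
    [0, 1, 0],
    [0, 0, 0],
    [1, 0, 0],
    [0, 4, 1],
])
y_train = np.array([1, 1, 0, 0, 0, 1])

clf = LogisticRegression().fit(X_train, y_train)

def should_block(request_features) -> bool:
    """Block the request if the model predicts privacy-infringing collection."""
    return bool(clf.predict([request_features])[0])

print(should_block([1, 6, 1]))  # likely True: third-party, many cookies, device ID
print(should_block([0, 0, 0]))  # likely False: first-party, no identifiers
```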
arXiv Detail & Related papers (2023-04-06T05:20:16Z)
- Survey: Leakage and Privacy at Inference Time [59.957056214792665]
Leakage of data from publicly available Machine Learning (ML) models is an area of growing significance.
We focus on inference-time leakage, as the most likely scenario for publicly available models.
We propose a taxonomy spanning involuntary and malevolent leakage and available defences, followed by the currently available assessment metrics and applications.
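A classic example of inference-time leakage is membership inference; here is a minimal confidence-thresholding sketch (a generic attack assumed for illustration, not a method from the survey itself):

```python
import numpy as np

def membership_inference(model_confidence: float, threshold: float = 0.9) -> bool:
    """Guess that a sample was in the training set if the model is
    unusually confident on it; overfit models leak membership this way."""
    return model_confidence >= threshold

# Hypothetical confidences on the model's own training samples vs. unseen ones
train_conf = np.array([0.99, 0.97, 0.95, 0.98])
test_conf = np.array([0.70, 0.88, 0.60, 0.91])
print([membership_inference(c) for c in train_conf])  # mostly True (leaked)
print([membership_inference(c) for c in test_conf])   # mostly False
```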
arXiv Detail & Related papers (2021-07-04T12:59:16Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy may, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
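The two ingredients mentioned, gradient clipping and noise addition, are the core of DP-SGD; below is a minimal NumPy sketch of one such training step (an illustrative simplification, not the paper's experimental setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(params, per_example_grads, clip_norm=1.0, noise_mult=1.1, lr=0.1):
    """One DP-SGD step: clip each example's gradient to clip_norm, average,
    then add Gaussian noise scaled by noise_mult * clip_norm / batch size."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm / len(per_example_grads),
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)

params = np.zeros(3)
grads = [rng.normal(size=3) for _ in range(8)]  # hypothetical per-example grads
params = dp_sgd_step(params, grads)
print(params)
```

Both the clipping (which biases gradient directions) and the added noise are plausible mechanisms by which private training could degrade robustness relative to non-private training.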
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.