Investigation of the Privacy Concerns in AI Systems for Young Digital Citizens: A Comparative Stakeholder Analysis
- URL: http://arxiv.org/abs/2501.13321v1
- Date: Thu, 23 Jan 2025 02:07:45 GMT
- Title: Investigation of the Privacy Concerns in AI Systems for Young Digital Citizens: A Comparative Stakeholder Analysis
- Authors: Molly Campbell, Ankur Barthwal, Sandhya Joshi, Austin Shouli, Ajay Kumar Shrestha
- Abstract summary: The study underscores the need for user-centric privacy controls, tailored transparency strategies, and targeted educational initiatives. Incorporating diverse stakeholder perspectives offers actionable insights into ethical AI design and governance.
- Score: 0.0
- Abstract: The integration of Artificial Intelligence (AI) systems into technologies used by young digital citizens raises significant privacy concerns. This study investigates these concerns through a comparative analysis of stakeholder perspectives. A total of 252 participants were surveyed, with the analysis focusing on 110 valid responses from parents/educators and 100 from AI professionals after data cleaning. Quantitative methods, including descriptive statistics and Partial Least Squares Structural Equation Modeling, examined five validated constructs: Data Ownership and Control, Parental Data Sharing, Perceived Risks and Benefits, Transparency and Trust, and Education and Awareness. Results showed Education and Awareness significantly influenced data ownership and risk assessment, while Data Ownership and Control strongly impacted Transparency and Trust. Transparency and Trust, along with Perceived Risks and Benefits, showed minimal influence on Parental Data Sharing, suggesting other factors may play a larger role. The study underscores the need for user-centric privacy controls, tailored transparency strategies, and targeted educational initiatives. Incorporating diverse stakeholder perspectives offers actionable insights into ethical AI design and governance, balancing innovation with robust privacy protections to foster trust in a digital age.
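To make the modeling step concrete, here is a minimal sketch that approximates the structural part of such a PLS-SEM analysis with composite construct scores and ordinary least squares. The file name, indicator item names, and three-item scales are hypothetical assumptions; a dedicated PLS-SEM tool would additionally estimate the measurement model and bootstrap path significance.

```python
# Minimal sketch (not the authors' pipeline): approximating the structural
# paths of a PLS-SEM model with composite scores and OLS regressions.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey_responses.csv")  # hypothetical Likert-scale data

# Composite score per construct = mean of its indicator items (assumed names).
constructs = {
    "EA":  ["ea_1", "ea_2", "ea_3"],     # Education and Awareness
    "DOC": ["doc_1", "doc_2", "doc_3"],  # Data Ownership and Control
    "PRB": ["prb_1", "prb_2", "prb_3"],  # Perceived Risks and Benefits
    "TT":  ["tt_1", "tt_2", "tt_3"],     # Transparency and Trust
    "PDS": ["pds_1", "pds_2", "pds_3"],  # Parental Data Sharing
}
scores = pd.DataFrame({name: df[items].mean(axis=1)
                       for name, items in constructs.items()})

# Structural paths named in the abstract, estimated one regression at a time.
paths = {"DOC": ["EA"], "PRB": ["EA"], "TT": ["DOC"], "PDS": ["TT", "PRB"]}
for outcome, predictors in paths.items():
    X = sm.add_constant(scores[predictors])
    fit = sm.OLS(scores[outcome], X).fit()
    print(outcome, fit.params.round(3).to_dict(), "R2=%.3f" % fit.rsquared)
```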
Related papers
- Toward Ethical AI: A Qualitative Analysis of Stakeholder Perspectives [0.0]
This study explores stakeholder perspectives on privacy in AI systems, focusing on educators, parents, and AI professionals.
Using qualitative analysis of survey responses from 227 participants, the research identifies key privacy risks, including data breaches, ethical misuse, and excessive data collection.
The findings provide actionable insights into balancing the benefits of AI with robust privacy protections.
arXiv Detail & Related papers (2025-01-23T02:06:25Z)
- Navigating AI to Unpack Youth Privacy Concerns: An In-Depth Exploration and Systematic Review [0.0]
This systematic literature review investigates perceptions, concerns, and expectations of young digital citizens regarding privacy in artificial intelligence (AI) systems.
Data extraction focused on privacy concerns, data-sharing practices, the balance between privacy and utility, trust factors in AI, and strategies to enhance user control over personal data.
Findings reveal significant privacy concerns among young users, including a perceived lack of control over personal information, potential misuse of data by AI, and fears of data breaches and unauthorized access.
arXiv Detail & Related papers (2024-12-20T22:00:06Z)
- Analyzing the Impact of AI Tools on Student Study Habits and Academic Performance [0.0]
The research focuses on how AI tools can support personalized learning, enable adaptive test adjustments, and provide real-time classroom analysis.
Student feedback revealed strong support for these features, and the study found a significant reduction in study hours alongside an increase in GPA.
Despite these benefits, challenges such as over-reliance on AI and difficulties in integrating AI with traditional teaching methods were also identified.
arXiv Detail & Related papers (2024-12-03T04:51:57Z)
- Mapping Public Perception of Artificial Intelligence: Expectations, Risk-Benefit Tradeoffs, and Value As Determinants for Societal Acceptance [0.20971479389679332]
Using a representative sample of 1100 participants from Germany, this study examines mental models of AI.
Participants quantitatively evaluated 71 statements about AI's future capabilities.
We present rankings of these projections alongside visual mappings illustrating public risk-benefit tradeoffs.
arXiv Detail & Related papers (2024-11-28T20:03:01Z)
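As a toy illustration of the ranking step described in the entry above, the sketch below aggregates hypothetical per-statement risk and benefit ratings; the file and column names are assumptions, not the study's data.

```python
# Hypothetical sketch: rank AI statements by mean perceived risk and benefit,
# a simplified stand-in for the paper's risk-benefit mapping.
import pandas as pd

# Long-format ratings: one row per participant x statement (assumed columns).
ratings = pd.read_csv("ai_statement_ratings.csv")  # statement, risk, benefit

summary = (ratings.groupby("statement")[["risk", "benefit"]]
           .mean()
           .assign(net=lambda d: d["benefit"] - d["risk"])
           .sort_values("net", ascending=False))
print(summary.head(10))  # statements with the most favorable tradeoff
```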
- Multi-stakeholder Perspective on Responsible Artificial Intelligence and Acceptability in Education [0.0]
The study investigates the acceptability of different AI applications in education from a multi-stakeholder perspective.
It addresses concerns related to data privacy, AI agency, transparency, explainability and the ethical deployment of AI.
arXiv Detail & Related papers (2024-02-22T23:59:59Z)
- Auditing and Generating Synthetic Data with Controllable Trust Trade-offs [54.262044436203965]
We introduce a holistic auditing framework that comprehensively evaluates synthetic datasets and AI models.
It focuses on preventing bias and discrimination, ensuring fidelity to the source data, and assessing utility, robustness, and privacy preservation.
We demonstrate the framework's effectiveness by auditing various generative models across diverse use cases.
arXiv Detail & Related papers (2023-04-21T09:03:18Z)
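A minimal sketch of two audit signals of the kind the entry above describes: marginal fidelity via a per-feature Kolmogorov-Smirnov distance, and a nearest-neighbor check for memorized records. This is an illustration on toy data, not the paper's framework.

```python
# Illustrative sketch (not the paper's framework): two simple audit signals
# for a synthetic dataset. Arrays are assumed numeric feature matrices.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.neighbors import NearestNeighbors

def audit(real: np.ndarray, synth: np.ndarray) -> dict:
    # Fidelity: per-feature Kolmogorov-Smirnov distance (lower is better).
    ks = [ks_2samp(real[:, j], synth[:, j]).statistic
          for j in range(real.shape[1])]
    # Privacy: distance from each synthetic row to its nearest real row;
    # near-zero distances suggest copied (memorized) records.
    nn = NearestNeighbors(n_neighbors=1).fit(real)
    dist, _ = nn.kneighbors(synth)
    return {"mean_ks": float(np.mean(ks)),
            "min_nn_distance": float(dist.min())}

rng = np.random.default_rng(0)
real = rng.normal(size=(500, 4))
synth = real + rng.normal(scale=0.1, size=real.shape)  # toy "generator"
print(audit(real, synth))
```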
- Adaptive cognitive fit: Artificial intelligence augmented management of information facets and representations [62.997667081978825]
Explosive growth in big data technologies and artificial intelligence (AI) applications has led to the increasing pervasiveness of information facets.
Information facets, such as equivocality and veracity, can dominate and significantly influence human perceptions of information.
We suggest that artificially intelligent technologies that can adapt information representations to overcome cognitive limitations are necessary.
arXiv Detail & Related papers (2022-04-25T02:47:25Z)
- Trustworthy Transparency by Design [57.67333075002697]
We propose a transparency framework for software design that incorporates research on user trust and experience.
The framework enables developers to build transparency into software from the outset.
arXiv Detail & Related papers (2021-03-19T12:34:01Z)
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
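One common way to estimate the predictive uncertainty the entry above advocates communicating is disagreement within an ensemble. The sketch below uses per-tree votes in a random forest on toy data; the model and data are illustrative choices, not the paper's method.

```python
# Minimal sketch: ensemble disagreement as an uncertainty estimate to
# report alongside a prediction. Model and data are toy assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=300) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Per-tree votes: disagreement across trees signals predictive uncertainty.
x_new = rng.normal(size=(1, 5))
votes = np.array([tree.predict(x_new)[0] for tree in model.estimators_])
p = votes.mean()
print(f"prediction={int(p > 0.5)}, confidence={max(p, 1 - p):.2f}")
# Communicating p near 0.5 as "uncertain" (rather than a bare label) is the
# kind of transparency this line of work advocates.
```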
- Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in input data, limited ability to explain decisions, and bias inherited from training data are among the most prominent limitations.
We propose a tutorial on Trustworthy AI that addresses six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z)
- Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while learning non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints.
arXiv Detail & Related papers (2020-09-26T10:50:33Z)
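As a rough reconstruction from the abstract above, the sketch below pairs a DP-SGD-style clipped, noised gradient step with dual ascent on a Lagrange multiplier for a demographic-parity constraint. The data, model, and hyperparameters are toy assumptions, and real DP training would clip per-sample gradients and track a privacy budget.

```python
# Toy sketch (not the authors' algorithm) combining two ingredients:
# (1) DP-SGD-style gradient clipping plus Gaussian noise, and
# (2) a Lagrangian dual update enforcing a demographic-parity constraint.
import torch

torch.manual_seed(0)
n, d = 256, 8
X = torch.randn(n, d)
group = (torch.rand(n) < 0.5).float()      # sensitive attribute (assumed binary)
y = ((X[:, 0] + 0.3 * group) > 0).float()  # toy labels

model = torch.nn.Linear(d, 1)
lam = torch.tensor(0.0)                    # Lagrange multiplier (dual variable)
clip, sigma, lr, lr_dual, eps = 1.0, 0.5, 0.1, 0.05, 0.05

for step in range(200):
    scores = torch.sigmoid(model(X)).squeeze()
    # Demographic-parity gap: difference in mean score between groups.
    gap = (scores[group == 1].mean() - scores[group == 0].mean()).abs()
    loss = torch.nn.functional.binary_cross_entropy(scores, y) + lam * (gap - eps)

    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            # DP-SGD style: clip the gradient norm, then add Gaussian noise.
            # (True DP-SGD clips per-sample gradients; this batch-level clip
            # is a simplification for readability.)
            factor = torch.clamp(clip / (p.grad.norm() + 1e-12), max=1.0)
            g = p.grad * factor + sigma * clip * torch.randn_like(p.grad)
            p -= lr * g
        # Dual ascent: raise lam while the fairness constraint is violated.
        lam = torch.clamp(lam + lr_dual * (gap.detach() - eps), min=0.0)

print(f"final parity gap: {gap.item():.3f}, lambda: {lam.item():.3f}")
```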