Privacy as Contextual Integrity in Online Proctoring Systems in Higher
Education: A Scoping Review
- URL: http://arxiv.org/abs/2310.18792v1
- Date: Sat, 28 Oct 2023 19:35:39 GMT
- Title: Privacy as Contextual Integrity in Online Proctoring Systems in Higher
Education: A Scoping Review
- Authors: Mutimukwe Chantal, Han Shengnan, Viberg Olga, Cerratto-Pargman Teresa
- Abstract summary: Privacy is one of the key challenges to the adoption and implementation of online proctoring systems in higher education.
This study highlights the need to clarify how the identified governing principles should be implemented and sustained.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Privacy is one of the key challenges to the adoption and implementation of online proctoring systems (OPS) in higher education. To better understand this challenge, we adopt the theory of privacy as contextual integrity to conduct a scoping review of 17 papers. The results show that different types of students' personal and sensitive information are collected and disseminated, which raises considerable privacy concerns. In addition, governing principles for addressing privacy problems have been identified, including transparency and fairness, consent and choice, information minimization, accountability, and information security and accuracy. This study highlights the need to clarify how these principles should be implemented and sustained, and which privacy concerns and actors they relate to. Further, it calls for clarifying the responsibility of key actors in enacting and sustaining the responsible adoption and use of OPS in higher education.
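
To make the notion of privacy as contextual integrity (CI) concrete, the minimal sketch below models an information flow in an online-proctored exam using CI's five parameters (sender, recipient, subject, information type, and transmission principle) and flags flows that fall outside the norms of the educational context. The context norms, actor names, and attribute values are illustrative assumptions for this example, not material from the reviewed papers.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    """An information flow described by contextual integrity's five parameters."""
    sender: str                  # actor transmitting the information
    recipient: str               # actor receiving it
    subject: str                 # whom the information is about
    info_type: str               # e.g. "webcam video", "exam answers"
    transmission_principle: str  # constraint governing the flow

# Hypothetical informational norms for the higher-education examination context.
EXAM_CONTEXT_NORMS = {
    Flow("student", "instructor", "student", "exam answers",
         "for grading only"),
    Flow("student", "proctoring vendor", "student", "webcam video",
         "for exam-session integrity checks only"),
}

def violates_contextual_integrity(flow: Flow) -> bool:
    """A flow breaches CI when it matches no entrenched norm of the context."""
    return flow not in EXAM_CONTEXT_NORMS

# Webcam video flowing to a third party for analytics departs from both the
# sanctioned recipient and the sanctioned transmission principle.
suspect = Flow("proctoring vendor", "ad network", "student",
               "webcam video", "for behavioural analytics")
print(violates_contextual_integrity(suspect))  # True -> CI violation
```

In CI terms, the problem is not that webcam video is collected at all, but that it flows to a recipient under a transmission principle that the educational context does not sanction.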
Related papers
- Differential Privacy in Machine Learning: From Symbolic AI to LLMs [49.1574468325115]
Differential privacy provides a formal framework to mitigate privacy risks. It ensures that the inclusion or exclusion of any single data point does not significantly alter the output of an algorithm (a minimal sketch of this guarantee appears after the related-papers list).
arXiv Detail & Related papers (2025-06-13T11:30:35Z)
- Understanding the Relationship Between Personal Data Privacy Literacy and Data Privacy Information Sharing by University Students [1.6791044863781392]
This survey-based study examines how university students in the United States perceive personal data privacy. Students' responses to a privacy literacy scale were categorized into high and low privacy literacy groups.
arXiv Detail & Related papers (2025-05-24T21:14:53Z)
- PrivaCI-Bench: Evaluating Privacy with Contextual Integrity and Legal Compliance [44.287734754038254]
We present PrivaCI-Bench, a contextual privacy evaluation benchmark for generative large language models (LLMs).
We evaluate the latest LLMs, including the recent reasoner models QwQ-32B and Deepseek R1.
Our experimental results suggest that, although LLMs can effectively capture key CI parameters inside a given context, they still require further advancements for privacy compliance.
arXiv Detail & Related papers (2025-02-24T10:49:34Z)
- Toward Ethical AI: A Qualitative Analysis of Stakeholder Perspectives [0.0]
This study explores stakeholder perspectives on privacy in AI systems, focusing on educators, parents, and AI professionals.
Using qualitative analysis of survey responses from 227 participants, the research identifies key privacy risks, including data breaches, ethical misuse, and excessive data collection.
The findings provide actionable insights into balancing the benefits of AI with robust privacy protections.
arXiv Detail & Related papers (2025-01-23T02:06:25Z)
- Navigating AI to Unpack Youth Privacy Concerns: An In-Depth Exploration and Systematic Review [0.0]
This systematic literature review investigates perceptions, concerns, and expectations of young digital citizens regarding privacy in artificial intelligence (AI) systems.
Data extraction focused on privacy concerns, data-sharing practices, the balance between privacy and utility, trust factors in AI, and strategies to enhance user control over personal data.
Findings reveal significant privacy concerns among young users, including a perceived lack of control over personal information, potential misuse of data by AI, and fears of data breaches and unauthorized access.
arXiv Detail & Related papers (2024-12-20T22:00:06Z)
- Smoke Screens and Scapegoats: The Reality of General Data Protection Regulation Compliance -- Privacy and Ethics in the Case of Replika AI [1.325665193924634]
This paper takes a critical approach towards examining the intricacies of these issues within AI companion services.
We analyze articles from public media about the company and its practices to gain insight into the trustworthiness of information provided in the policy.
The results reveal that, despite privacy notices, data collection practices might harvest personal data without users' full awareness.
arXiv Detail & Related papers (2024-11-07T07:36:19Z)
- Privacy Checklist: Privacy Violation Detection Grounding on Contextual Integrity Theory [43.12744258781724]
We formulate the privacy issue as a reasoning problem rather than simple pattern matching.
We develop the first comprehensive checklist that covers social identities, private attributes, and existing privacy regulations.
arXiv Detail & Related papers (2024-08-19T14:48:04Z)
- A Multivocal Literature Review on Privacy and Fairness in Federated Learning [1.6124402884077915]
Federated learning presents a way to revolutionize AI applications by eliminating the necessity for data sharing.
Recent research has demonstrated an inherent tension between privacy and fairness.
We argue that the relationship between privacy and fairness has been neglected, posing a critical risk for real-world applications.
arXiv Detail & Related papers (2024-08-16T11:15:52Z)
- Linkage on Security, Privacy and Fairness in Federated Learning: New Balances and New Perspectives [48.48294460952039]
This survey offers comprehensive descriptions of the privacy, security, and fairness issues in federated learning.
We contend that there exists a trade-off between privacy and fairness and between security and sharing.
arXiv Detail & Related papers (2024-06-16T10:31:45Z)
- A Survey of Privacy-Preserving Model Explanations: Privacy Risks, Attacks, and Countermeasures [50.987594546912725]
Despite a growing corpus of research in AI privacy and explainability, there is little attention to privacy-preserving model explanations.
This article presents the first thorough survey about privacy attacks on model explanations and their countermeasures.
arXiv Detail & Related papers (2024-03-31T12:44:48Z)
- Assessing Mobile Application Privacy: A Quantitative Framework for Privacy Measurement [0.0]
This work aims to contribute to a digital environment that prioritizes privacy, promotes informed decision-making, and endorses privacy-preserving design principles.
The purpose of this framework is to systematically evaluate the level of privacy risk when using particular Android applications.
arXiv Detail & Related papers (2023-10-31T18:12:19Z)
- A Critical Take on Privacy in a Datafied Society [0.0]
I analyze several facets of the lack of online privacy and idiosyncrasies exhibited by privacy advocates.
I discuss possible effects of datafication on human behavior, the prevalent market-oriented assumption underlying online privacy, and some emerging adaptation strategies.
A glimpse of the likely problematic future is provided through a discussion of privacy-related aspects of the proposed generative AI policies of the EU, UK, and China.
arXiv Detail & Related papers (2023-08-03T11:45:18Z)
- Privacy and Fairness in Federated Learning: on the Perspective of Trade-off [58.204074436129716]
Federated learning (FL) has been a hot topic in recent years.
Although privacy and fairness are two crucial ethical notions, their interactions are comparatively less studied.
arXiv Detail & Related papers (2023-06-25T04:38:19Z)
- Advancing Differential Privacy: Where We Are Now and Future Directions for Real-World Deployment [100.1798289103163]
We present a detailed review of current practices and state-of-the-art methodologies in the field of differential privacy (DP).
The key points and high-level contents of the article originated from the discussions of "Differential Privacy (DP): Challenges Towards the Next Frontier".
This article aims to provide a reference point for the algorithmic and design decisions within the realm of privacy, highlighting important challenges and potential research directions.
arXiv Detail & Related papers (2023-04-14T05:29:18Z)
- Differential Privacy and Fairness in Decisions and Learning Tasks: A Survey [50.90773979394264]
It reviews the conditions under which privacy and fairness may have aligned or contrasting goals.
It analyzes how and why DP may exacerbate bias and unfairness in decision problems and learning tasks.
arXiv Detail & Related papers (2022-02-16T16:50:23Z)
- Privacy and Robustness in Federated Learning: Attacks and Defenses [74.62641494122988]
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL, and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and defenses against robustness; 3) inference attacks and defenses against privacy, we provide an accessible review of this important topic.
arXiv Detail & Related papers (2020-12-07T12:11:45Z)
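
The differential-privacy entries in the list above describe the DP guarantee informally: including or excluding any single record should not significantly change an algorithm's output. The sketch below illustrates one standard construction, the Laplace mechanism applied to a counting query; the dataset, predicate, and epsilon value are assumptions chosen for illustration and are not drawn from any of the papers above.

```python
import numpy as np

def dp_count(records, predicate, epsilon: float) -> float:
    """Counting query released with the Laplace mechanism (epsilon-DP).

    A count has L1 sensitivity 1: adding or removing one record changes the
    true answer by at most 1. Laplace noise with scale 1/epsilon therefore
    guarantees Pr[M(D) in S] <= exp(epsilon) * Pr[M(D') in S] for any pair
    of neighbouring datasets D, D' and any output set S."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Illustrative (assumed) data: ages of students in a proctored course.
ages = [19, 21, 22, 24, 30, 35]
print(round(dp_count(ages, lambda a: a < 25, epsilon=0.5), 2))  # noisy count of under-25s
```

Smaller values of epsilon inject more noise and yield a stronger privacy guarantee, at the cost of less accurate query answers.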
This list is automatically generated from the titles and abstracts of the papers on this site.