Investigating the Security & Privacy Risks from Unsanctioned Technology Use by Educators
- URL: http://arxiv.org/abs/2502.16739v1
- Date: Sun, 23 Feb 2025 22:52:58 GMT
- Title: Investigating the Security & Privacy Risks from Unsanctioned Technology Use by Educators
- Authors: Easton Kelso, Ananta Soneji, Syed Zami-Ul-Haque Navid, Yan Shoshitaishvili, Sazzadur Rahaman, Rakibul Hasan
- Abstract summary: This study aims to understand why instructors use unsanctioned applications, how instructors perceive the associated risks, and how this practice affects institutional security and privacy postures. We designed and conducted an online survey-based study targeting instructors and administrators from K-12 and higher education institutions.
- Score: 8.785737074008576
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Educational technologies are revolutionizing how educational institutions operate. This makes them a lucrative target for breach and abuse, as they often serve as centralized hubs for diverse types of sensitive data, from academic records to health information. Existing studies have examined how stakeholders perceive the security and privacy risks of educational technologies and how those risks shape institutional policies for acquiring new technologies. However, outside of institutional vetting and approval, there is a pervasive practice of using applications and devices acquired personally. It is unclear how these applications and devices affect the dynamics of the overall institutional ecosystem. This study addresses this gap by investigating why instructors use unsanctioned applications, how they perceive the associated risks, and how this practice affects institutional security and privacy postures. We designed and conducted an online survey-based study targeting instructors and administrators from K-12 and higher education institutions.
Related papers
- The Role of AI, Blockchain, Cloud, and Data (ABCD) in Enhancing Learning Assessments of College Students [0.0]
This study investigates how ABCD technologies can improve learning assessments in higher education.
The objective is to examine students' perceptions and planned behavior, and how ABCD technologies affect individual learning, academic integrity, co-learning, and trust in assessment.
arXiv Detail & Related papers (2025-02-17T15:11:44Z) - Open Problems in Machine Unlearning for AI Safety [61.43515658834902]
Machine unlearning -- the ability to selectively forget or suppress specific types of knowledge -- has shown promise for privacy and data removal tasks. In this paper, we identify key limitations that prevent unlearning from serving as a comprehensive solution for AI safety.
arXiv Detail & Related papers (2025-01-09T03:59:10Z) - Machine Learning-Assisted Intrusion Detection for Enhancing Internet of Things Security [1.2369895513397127]
Attacks against the Internet of Things (IoT) are rising as devices, applications, and interactions become more networked and integrated.
To efficiently secure IoT devices, real-time detection of intrusion systems is critical.
This paper investigates the latest research on machine learning-based intrusion detection strategies for IoT security.
arXiv Detail & Related papers (2024-10-01T19:24:34Z) - Threats, Attacks, and Defenses in Machine Unlearning: A Survey [14.03428437751312]
Machine Unlearning (MU) has recently gained considerable attention due to its potential to achieve Safe AI. This survey aims to fill the gap between the extensive number of studies on threats, attacks, and defenses in machine unlearning.
arXiv Detail & Related papers (2024-03-20T15:40:18Z) - Private Knowledge Sharing in Distributed Learning: A Survey [50.51431815732716]
The rise of Artificial Intelligence has revolutionized numerous industries and transformed the way society operates.
It is crucial to utilize information in learning processes that are either distributed or owned by different entities.
Modern data-driven services have been developed to integrate distributed knowledge entities into their outcomes.
arXiv Detail & Related papers (2024-02-08T07:18:23Z) - Emergent Insight of the Cyber Security Management for Saudi Arabian Universities: A Content Analysis [0.0]
The project is designed to assess the cybersecurity management and policies in Saudi Arabian universities.
The subsequent recommendations can be adopted to enhance the security of IT systems.
arXiv Detail & Related papers (2021-10-09T10:48:30Z) - Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses [150.64470864162556]
This work systematically categorizes and discusses a wide range of dataset vulnerabilities and exploits.
In addition to describing various poisoning and backdoor threat models and the relationships among them, we develop their unified taxonomy.
arXiv Detail & Related papers (2020-12-18T22:38:47Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z) - Trustworthy AI Inference Systems: An Industry Research View [58.000323504158054]
We provide an industry research view for approaching the design, deployment, and operation of trustworthy AI inference systems.
We highlight opportunities and challenges in AI systems using trusted execution environments.
We outline areas of further development that require the global collective attention of industry, academia, and government researchers.
arXiv Detail & Related papers (2020-08-10T23:05:55Z) - Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.