Managing Controlled Unclassified Information in Research Institutions
- URL: http://arxiv.org/abs/2211.14886v1
- Date: Sun, 27 Nov 2022 16:54:24 GMT
- Title: Managing Controlled Unclassified Information in Research Institutions
- Authors: Baijian Yang, Carolyn Ellis, Preston Smith, Huyunting Huang
- Abstract summary: This work explains the concept of Controlled Unclassified Information (CUI) and the challenges it brings to research institutions.
A managed research ecosystem is introduced in this work.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In order to operate in a regulated world, researchers need to ensure
compliance with an ever-evolving landscape of information security regulations
and best practices. This work explains the concept of Controlled Unclassified
Information (CUI) and the challenges it brings to research institutions. A
survey of user perceptions showed that most researchers and IT administrators
lack a good understanding of CUI and how it relates to other regulations, such
as HIPAA, ITAR, GLBA, and FERPA. A managed research ecosystem is introduced in
this work. The workflow of this efficient and cost-effective framework is
elaborated to demonstrate how controlled research data are processed to comply
with one of the highest levels of cybersecurity in a campus environment. Issues
beyond the framework itself are also discussed. The framework serves as a
reference model for other institutions to support CUI research. The awareness
and training program developed from this work will be shared with other
institutions to build a bigger CUI ecosystem.
Related papers
- Auditing of AI: Legal, Ethical and Technical Approaches [0.0]
AI auditing is a rapidly growing field of research and practice.
Different approaches to AI auditing have different affordances and constraints.
The next step in the evolution of auditing as an AI governance mechanism should be the interlinking of these available approaches.
arXiv Detail & Related papers (2024-07-07T12:49:58Z)
- Privacy Risks of General-Purpose AI Systems: A Foundation for Investigating Practitioner Perspectives [47.17703009473386]
Powerful AI models have led to impressive leaps in performance across a wide range of tasks.
Privacy concerns have led to a wealth of literature covering various privacy risks and vulnerabilities of AI models.
We conduct a systematic review of these survey papers to provide a concise and usable overview of privacy risks in GPAIS.
arXiv Detail & Related papers (2024-07-02T07:49:48Z)
- Blockchain for Academic Integrity: Developing the Blockchain Academic Credential Interoperability Protocol (BACIP) [0.0]
This research introduces the Blockchain Academic Credential Interoperability Protocol (BACIP).
BACIP is designed to significantly enhance the security, privacy, and interoperability of verifying academic credentials globally.
Preliminary evaluations suggest that BACIP could enhance verification efficiency and bolster security against tampering and unauthorized access.
arXiv Detail & Related papers (2024-06-17T06:11:51Z)
- On Responsible Machine Learning Datasets with Fairness, Privacy, and Regulatory Norms [56.119374302685934]
There have been severe concerns over the trustworthiness of AI technologies.
Machine and deep learning algorithms depend heavily on the data used during their development.
We propose a framework to evaluate the datasets through a responsible rubric.
arXiv Detail & Related papers (2023-10-24T14:01:53Z)
- We Are Not There Yet: The Implications of Insufficient Knowledge Management for Organisational Compliance [25.30364629335751]
This paper presents the findings of an exploratory qualitative study with data protection officers and other privacy professionals.
We found issues with knowledge management to be the underlying challenge of our participants' feedback.
This paper questions what knowledge management or automation solutions may prove to be effective in establishing better computer-supported work environments.
arXiv Detail & Related papers (2023-05-06T14:19:54Z)
- Building a Resilient Cybersecurity Posture: A Framework for Leveraging Prevent, Detect and Respond Functions and Law Enforcement Collaboration [0.0]
This research paper compares and contrasts the CyRLEC Framework with the NIST Cybersecurity Framework.
The CyRLEC Framework takes a broader view of cybersecurity, including proactive prevention, early detection, rapid response to cyber-attacks, and close collaboration with law enforcement agencies.
arXiv Detail & Related papers (2023-03-20T05:16:54Z)
- Fairness in Recommender Systems: Research Landscape and Future Directions [119.67643184567623]
We review the concepts and notions of fairness that were put forward in the area in the recent past.
We present an overview of how research in this field is currently operationalized.
Overall, our analysis of recent works points to certain research gaps.
arXiv Detail & Related papers (2022-05-23T08:34:25Z)
- An Uncommon Task: Participatory Design in Legal AI [64.54460979588075]
We examine a notable yet understudied AI design process in the legal domain that took place over a decade ago.
We show how an interactive simulation methodology allowed computer scientists and lawyers to become co-designers.
arXiv Detail & Related papers (2022-03-08T15:46:52Z)
- Emergent Insight of the Cyber Security Management for Saudi Arabian Universities: A Content Analysis [0.0]
The project is designed to assess the cybersecurity management and policies in Saudi Arabian universities.
The subsequent recommendations can be adopted to enhance the security of IT systems.
arXiv Detail & Related papers (2021-10-09T10:48:30Z)
- Artificial Intelligence for IT Operations (AIOPS) Workshop White Paper [50.25428141435537]
Artificial Intelligence for IT Operations (AIOps) is an emerging interdisciplinary field arising in the intersection between machine learning, big data, streaming analytics, and the management of IT operations.
The main aim of the AIOps workshop is to bring together researchers from academia and industry to present their experiences, results, and work in progress in this field.
arXiv Detail & Related papers (2021-01-15T10:43:10Z)
- Trustworthy AI Inference Systems: An Industry Research View [58.000323504158054]
We provide an industry research view for approaching the design, deployment, and operation of trustworthy AI inference systems.
We highlight opportunities and challenges in AI systems using trusted execution environments.
We outline areas of further development that require the global collective attention of industry, academia, and government researchers.
arXiv Detail & Related papers (2020-08-10T23:05:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.