Transparency, Security, and Workplace Training & Awareness in the Age of Generative AI
- URL: http://arxiv.org/abs/2501.10389v1
- Date: Thu, 19 Dec 2024 17:40:58 GMT
- Title: Transparency, Security, and Workplace Training & Awareness in the Age of Generative AI
- Authors: Lakshika Vaishnav, Sakshi Singh, Kimberly A. Cornell
- Abstract summary: As AI technologies advance, ethical considerations, transparency, data privacy, and their impact on human labor intersect with the drive for innovation and efficiency.
Our research explores publicly accessible large language models (LLMs) that often operate on the periphery, away from mainstream scrutiny.
Specifically, we examine Gab AI, a platform that centers on unrestricted communication and privacy, allowing users to interact freely without censorship.
- Abstract: This paper investigates the impacts of the rapidly evolving landscape of generative Artificial Intelligence (AI) development. Emphasis is given to how organizations grapple with a critical imperative: reevaluating their policies regarding AI usage in the workplace. As AI technologies advance, ethical considerations, transparency, data privacy, and their impact on human labor intersect with the drive for innovation and efficiency. Our research explores publicly accessible large language models (LLMs) that often operate on the periphery, away from mainstream scrutiny. These lesser-known models have received limited scholarly analysis and may lack comprehensive restrictions and safeguards. Specifically, we examine Gab AI, a platform that centers on unrestricted communication and privacy, allowing users to interact freely without censorship. As generative AI chatbots become increasingly prevalent, the cybersecurity risks that accompany them have escalated. Organizations must carefully navigate this evolving landscape by implementing transparent AI usage policies. Frequent training and policy updates are essential to adapt to emerging threats. Insider threats, whether malicious or unwitting, continue to pose one of the most significant cybersecurity challenges in the workplace. By centering on these lesser-known, publicly accessible LLMs and their implications for workplace policies, we contribute to the ongoing discourse on AI ethics, transparency, and security, emphasizing the need for well-thought-out guidelines and vigilance in policy maintenance.
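The abstract's point about unwitting insider threats suggests one concrete control: screening prompts before they leave the organization for a public LLM. Below is a minimal, hypothetical sketch of such a pre-submission filter; the pattern set, category names, and blocking behavior are illustrative assumptions, not a method described in the paper.

```python
import re

# Illustrative patterns an organization might flag before a prompt
# leaves the corporate boundary for a public LLM. These patterns and
# the block-on-match policy are assumptions for this sketch, not the
# paper's method.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_categories) for an outbound prompt."""
    hits = [name for name, pat in SENSITIVE_PATTERNS.items()
            if pat.search(prompt)]
    return (not hits, hits)

if __name__ == "__main__":
    allowed, hits = screen_prompt(
        "Summarize this: contact jane.doe@corp.example, key sk-abcdef1234567890XY")
    print(f"allowed={allowed}, flagged={hits}")
    # A real deployment would log flagged events for awareness and
    # training follow-up rather than silently blocking them.
```

In practice, a filter like this would complement, not replace, the transparent usage policies and frequent training the paper calls for.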
Related papers
- Securing the AI Frontier: Urgent Ethical and Regulatory Imperatives for AI-Driven Cybersecurity [0.0]
This paper critically examines the evolving ethical and regulatory challenges posed by the integration of artificial intelligence in cybersecurity.
We trace the historical development of AI regulation, highlighting major milestones from theoretical discussions in the 1940s to the implementation of recent global frameworks such as the European Union AI Act.
Ethical concerns such as bias, transparency, accountability, privacy, and human oversight are explored in depth, along with their implications for AI-driven cybersecurity systems.
arXiv Detail & Related papers (2025-01-15T18:17:37Z) - Open Problems in Machine Unlearning for AI Safety [61.43515658834902]
Machine unlearning -- the ability to selectively forget or suppress specific types of knowledge -- has shown promise for privacy and data removal tasks.
In this paper, we identify key limitations that prevent unlearning from serving as a comprehensive solution for AI safety.
arXiv Detail & Related papers (2025-01-09T03:59:10Z) - Considerations Influencing Offense-Defense Dynamics From Artificial Intelligence [0.0]
AI can enhance defensive capabilities but also presents avenues for malicious exploitation and large-scale societal harm.
This paper proposes a taxonomy to map and examine the key factors that influence whether AI systems predominantly pose threats or offer protective benefits to society.
arXiv Detail & Related papers (2024-12-05T10:05:53Z) - Artificial Intelligence in Cybersecurity: Building Resilient Cyber Diplomacy Frameworks [0.0]
This paper explores how automation and artificial intelligence (AI) are transforming U.S. cyber diplomacy.
Leveraging these technologies helps the U.S. manage the complexity and urgency of cyber diplomacy.
arXiv Detail & Related papers (2024-11-17T17:57:17Z) - Assessing Privacy Policies with AI: Ethical, Legal, and Technical Challenges [6.916147439085307]
Large Language Models (LLMs) can be used to assess privacy policies for users automatically.
We explore the challenges of this approach in three pillars, namely technical feasibility, ethical implications, and legal compatibility.
Our findings aim to identify potential for future research and to foster a discussion on the use of LLM technologies; a minimal sketch of this kind of automated policy assessment appears after this list.
arXiv Detail & Related papers (2024-10-10T21:36:35Z) - Artificial Intelligence as the New Hacker: Developing Agents for Offensive Security [0.0]
This paper explores the integration of Artificial Intelligence (AI) into offensive cybersecurity.
It develops an autonomous AI agent, ReaperAI, designed to simulate and execute cyberattacks.
ReaperAI demonstrates the potential to identify, exploit, and analyze security vulnerabilities autonomously.
arXiv Detail & Related papers (2024-05-09T18:15:12Z) - A Safe Harbor for AI Evaluation and Red Teaming [124.89885800509505]
Some researchers fear that conducting independent evaluation and red-teaming research, or releasing their findings, will result in account suspensions or legal reprisal.
We propose that major AI developers commit to providing a legal and technical safe harbor.
We believe these commitments are a necessary step towards more inclusive and unimpeded community efforts to tackle the risks of generative AI.
arXiv Detail & Related papers (2024-03-07T20:55:08Z) - Towards more Practical Threat Models in Artificial Intelligence Security [66.67624011455423]
Recent works have identified a gap between research and practice in artificial intelligence security.
We revisit the threat models of the six most studied attacks in AI security research and match them to AI usage in practice.
arXiv Detail & Related papers (2023-11-16T16:09:44Z) - Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z) - Users are the North Star for AI Transparency [111.5679109784322]
Despite widespread calls for transparent artificial intelligence systems, the term is too overburdened with disparate meanings to express precise policy aims or to orient concrete lines of research.
This happens partly because a clear ideal of AI transparency goes unstated in this body of work.
We explicitly name such a north star -- transparency that is user-centered, user-appropriate, and honest.
arXiv Detail & Related papers (2023-03-09T18:53:29Z) - Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data; utilizing it effectively is beyond human capabilities.
arXiv Detail & Related papers (2022-02-28T18:27:41Z)
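As a companion to the "Assessing Privacy Policies with AI" entry above, here is a minimal, hypothetical sketch of what automated policy assessment with an LLM can look like. The `llm_complete` stub, the rubric questions, and the JSON answer format are all assumptions made for illustration; that paper's actual method and evaluation criteria may differ.

```python
# A minimal, hypothetical sketch of LLM-based privacy-policy assessment.
# `llm_complete` is a stand-in for whatever LLM client an organization
# uses; the rubric questions are illustrative assumptions, not the
# cited paper's evaluation criteria.
import json

RUBRIC = [
    "Does the policy state what personal data is collected?",
    "Does the policy state whether data is shared with third parties?",
    "Does the policy describe how users can request data deletion?",
]

def llm_complete(prompt: str) -> str:
    """Placeholder: call your LLM provider here and return its text reply."""
    raise NotImplementedError("Wire this to an actual LLM API.")

def assess_policy(policy_text: str) -> dict:
    """Ask the model each rubric question; expect a JSON object back."""
    prompt = (
        "You are auditing a privacy policy. Answer each question with "
        '"yes", "no", or "unclear", as a JSON object keyed by question.\n\n'
        f"Questions: {json.dumps(RUBRIC)}\n\nPolicy:\n{policy_text}"
    )
    return json.loads(llm_complete(prompt))
```

Pinning the model to a fixed rubric and a machine-readable output format is one way to keep such assessments auditable, which matters for the ethical and legal-compatibility concerns that entry raises.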