Emerging Technology and Policy Co-Design Considerations for the Safe and
Transparent Use of Small Unmanned Aerial Systems
- URL: http://arxiv.org/abs/2212.02795v1
- Date: Tue, 6 Dec 2022 07:17:46 GMT
- Title: Emerging Technology and Policy Co-Design Considerations for the Safe and
Transparent Use of Small Unmanned Aerial Systems
- Authors: Ritwik Gupta, Alexander Bayen, Sarah Rohrschneider, Adrienne Fulk,
Andrew Reddie, Sanjit A. Seshia, Shankar Sastry, Janet Napolitano
- Score: 55.60330679737718
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rapid technological growth observed in the sUAS sector over the past
decade has been unprecedented and has left gaps in policies and regulations to
adequately provide for a safe and trusted environment in which to operate these
devices. The Center for Security in Politics at UC Berkeley, via a two-day
workshop, analyzed these gaps by addressing the entire sUAS vertical. From
human factors to autonomy, we recommend a series of steps that can be taken by
partners in the academic, commercial, and government sectors to reduce policy
gaps introduced in the wake of the growth of the sUAS industry.
Related papers
- Cybersecurity in Industry 5.0: Open Challenges and Future Directions [1.6385815610837167]
Unlocking the potential of Industry 5.0 hinges on robust cybersecurity measures.
This paper analyses potential threats and corresponding countermeasures.
It highlights the necessity of developing a new framework centred on cybersecurity to facilitate organisations' secure adoption of Industry 5.0 principles.
arXiv Detail & Related papers (2024-10-12T13:56:17Z)
- A Safe Harbor for AI Evaluation and Red Teaming [124.89885800509505]
Some researchers fear that conducting such research or releasing their findings will result in account suspensions or legal reprisal.
We propose that major AI developers commit to providing a legal and technical safe harbor.
We believe these commitments are a necessary step towards more inclusive and unimpeded community efforts to tackle the risks of generative AI.
arXiv Detail & Related papers (2024-03-07T20:55:08Z)
- The Security and Privacy of Mobile Edge Computing: An Artificial Intelligence Perspective [64.36680481458868]
Mobile Edge Computing (MEC) is a new computing paradigm that enables cloud computing and information technology (IT) services to be delivered at the network's edge.
This paper provides a survey of security and privacy in MEC from the perspective of Artificial Intelligence (AI).
We focus on new security and privacy issues, as well as potential solutions, from the viewpoint of AI.
arXiv Detail & Related papers (2024-01-03T07:47:22Z)
- Safeguarded Progress in Reinforcement Learning: Safe Bayesian Exploration for Control Policy Synthesis [63.532413807686524]
This paper addresses the problem of maintaining safety during training in Reinforcement Learning (RL).
We propose a new architecture that handles the trade-off between efficient progress and safety during exploration.
arXiv Detail & Related papers (2023-12-18T16:09:43Z)
- Counter-terrorism in cyber-physical spaces: Best practices and technologies from the state of the art [3.072386223958412]
The demand for protection and security of physical spaces and urban areas has increased with the escalation of terrorist attacks in recent years.
With the proposed cyber-physical systems and spaces, we envision a city that becomes a smarter urban environment, proactively issuing alerts and protecting against threats.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Socio-Technical Security Modelling: Analysis of State-of-the-Art, Application, and Maturity in Critical Industrial Infrastructure Environments/Domains [0.0]
This study explores the state-of-the-art, application, and maturity of socio-technical security models for industries and sectors dependent on critical infrastructure (CI).
arXiv Detail & Related papers (2023-05-09T00:34:12Z)
- Security and Safety Aspects of AI in Industry Applications [0.0]
We summarise issues in the domains of safety and security in machine learning that will affect industry sectors in the next five to ten years.
Reports of underlying problems in both safety- and security-related domains, for instance adversarial attacks, have unsettled early adopters.
The problem for real-world applicability lies in being able to assess the risk of applying these technologies.
arXiv Detail & Related papers (2022-07-16T16:41:00Z)
- Voluntary safety commitments provide an escape from over-regulation in AI development [8.131948859165432]
This work reveals for the first time how voluntary commitments, with sanctions by either peers or an institution, lead to socially beneficial outcomes.
Results are directly relevant for the design of governance and regulatory policies that aim to ensure an ethical and responsible AI technology development process.
arXiv Detail & Related papers (2021-04-08T12:54:56Z)
- Trustworthy AI Inference Systems: An Industry Research View [58.000323504158054]
We provide an industry research view for approaching the design, deployment, and operation of trustworthy AI inference systems.
We highlight opportunities and challenges in AI systems using trusted execution environments.
We outline areas of further development that require the global collective attention of industry, academia, and government researchers.
arXiv Detail & Related papers (2020-08-10T23:05:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.