Regulating trusted autonomous systems in Australia
- URL: http://arxiv.org/abs/2302.03778v1
- Date: Tue, 7 Feb 2023 22:26:17 GMT
- Title: Regulating trusted autonomous systems in Australia
- Authors: Rachel Horne, Tom Putland, Mark Brady
- Abstract summary: Australia is a leader in autonomous systems technology, particularly in the mining industry.
The paper will identify the growing use cases for autonomous systems in Australia, in the maritime, air and land domains.
It argues that Australia's regulatory approach needs to become more agile and anticipatory.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Australia is a leader in autonomous systems technology,
particularly in the mining industry, born of necessity in a geographically
dispersed and complex natural environment. Increasingly advanced autonomous systems are becoming more
prevalent in Australia, particularly as the safety, environmental and
efficiency benefits become better understood, and the increasing sophistication
of technology improves capability and availability. Increasing use of these
systems, including in the maritime and air domains, is placing pressure on the
national safety regulators, who must either continue to apply their
traditional regulatory approach, requiring exemptions to enable operation of
emerging technology, or seize the opportunity to put in place an agile and
adaptive approach better suited to the rapid developments of the twenty-first
century. In Australia, the key national safety regulators have demonstrated an
appetite for working with industry to facilitate innovation, but their limited
resources mean progress is slow. There is a critical role to be played by third
parties from industry, government, and academia who can work together to
develop, test and publish new assurance and accreditation frameworks for
trusted autonomous systems, and assist in the transition to an adaptive and
agile regulatory philosophy. This is necessary to ensure the benefits of
autonomous systems can be realised, without compromising safety. This paper
will identify the growing use cases for autonomous systems in Australia, in the
maritime, air and land domains, assess the current regulatory framework, argue
that Australia's regulatory approach needs to become more agile and
anticipatory, and investigate how third party projects could positively impact
the assurance and accreditation process for autonomous systems in the future.
Related papers
- In-House Evaluation Is Not Enough: Towards Robust Third-Party Flaw Disclosure for General-Purpose AI [93.33036653316591]
We call for three interventions to advance system safety.
First, we propose using standardized AI flaw reports and rules of engagement for researchers.
Second, we propose GPAI system providers adopt broadly-scoped flaw disclosure programs.
Third, we advocate for the development of improved infrastructure to coordinate distribution of flaw reports.
arXiv Detail & Related papers (2025-03-21T05:09:46Z) - From Principles to Rules: A Regulatory Approach for Frontier AI [2.1764247401772705]
Regulators may require frontier AI developers to adopt safety measures.
The requirements could be formulated as high-level principles or specific rules.
These regulatory approaches, known as 'principle-based' and 'rule-based' regulation, have complementary strengths and weaknesses.
arXiv Detail & Related papers (2024-07-10T01:45:15Z) - Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z) - Future Vision of Dynamic Certification Schemes for Autonomous Systems [3.151005833357807]
We identify several issues with the current certification strategies that could pose serious safety risks.
We highlight the inadequate reflection of software changes in constantly evolving systems and the lack of support for systems' cooperation.
Other shortcomings include the narrow focus of awarded certification, neglecting aspects such as the ethical behavior of autonomous software systems.
arXiv Detail & Related papers (2023-08-20T19:06:57Z) - Dual Governance: The intersection of centralized regulation and crowdsourced safety mechanisms for Generative AI [1.2691047660244335]
Generative Artificial Intelligence (AI) has seen mainstream adoption lately, especially in the form of consumer-facing, open-ended, text and image generating models.
The potential for generative AI to displace human creativity and livelihoods has also been under intense scrutiny.
Existing and proposed centralized regulations by governments to rein in AI face criticisms such as not having sufficient clarity or uniformity.
Decentralized protections via crowdsourced safety tools and mechanisms are a potential alternative.
arXiv Detail & Related papers (2023-08-02T23:25:21Z) - International Institutions for Advanced AI [47.449762587672986]
International institutions may have an important role to play in ensuring advanced AI systems benefit humanity.
This paper identifies a set of governance functions that could be performed at an international level to address these challenges.
It groups these functions into four institutional models that exhibit internal synergies and have precedents in existing organizations.
arXiv Detail & Related papers (2023-07-10T16:55:55Z) - Assurance for Autonomy -- JPL's past research, lessons learned, and future directions [56.32768279109502]
Autonomy is required when a wide variation in circumstances precludes responses being pre-planned.
Mission assurance is a key contributor to providing confidence, yet assurance practices honed over decades of spaceflight have relatively little experience with autonomy.
Researchers in JPL's software assurance group have been involved in the development of techniques specific to the assurance of autonomy.
arXiv Detail & Related papers (2023-05-16T18:24:12Z) - Rethinking Certification for Higher Trust and Ethical Safeguarding of Autonomous Systems [6.24907186790431]
We discuss the motivation for the need to modify the current certification processes for autonomous driving systems.
We identify a number of issues with the proposed certification strategies, which may impact the systems substantially.
arXiv Detail & Related papers (2023-03-16T15:19:25Z) - Both eyes open: Vigilant Incentives help Regulatory Markets improve AI Safety [69.59465535312815]
Regulatory Markets for AI is a proposal designed with adaptability in mind.
It involves governments setting outcome-based targets for AI companies to achieve.
We warn that it is alarmingly easy to stumble on incentives which would prevent Regulatory Markets from achieving this goal.
arXiv Detail & Related papers (2023-03-06T14:42:05Z) - Emerging Technology and Policy Co-Design Considerations for the Safe and Transparent Use of Small Unmanned Aerial Systems [55.60330679737718]
The rapid technological growth observed in the sUAS sector has left gaps in policies and regulations to provide for a safe and trusted environment in which to operate these devices.
From human factors to autonomy, we recommend a series of steps that can be taken by partners in the academic, commercial, and government sectors to reduce policy gaps introduced in the wake of the growth of the sUAS industry.
arXiv Detail & Related papers (2022-12-06T07:17:46Z) - Regulating Safety and Security in Autonomous Robotic Systems [0.0]
Rules for autonomous systems are often difficult to formalise.
In the space and nuclear sectors, applications are more likely to differ from one another, so a set of general safety principles has developed instead.
These principles allow novel applications to be assessed for safety, but they themselves remain difficult to formalise.
We are collaborating with regulators and the community in the space and nuclear sectors to develop guidelines for autonomous and robotic systems.
arXiv Detail & Related papers (2020-07-09T16:33:14Z) - Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose, spanning institutions, software, and hardware, and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.