A Trust Framework for Government Use of Artificial Intelligence and
Automated Decision Making
- URL: http://arxiv.org/abs/2208.10087v1
- Date: Mon, 22 Aug 2022 06:51:15 GMT
- Title: A Trust Framework for Government Use of Artificial Intelligence and
Automated Decision Making
- Authors: Pia Andrews, Tim de Sousa, Bruce Haefele, Matt Beard, Marcus Wigan,
Abhinav Palia, Kathy Reid, Saket Narayan, Morgan Dumitru, Alex Morrison,
Geoff Mason, Aurelie Jacquet
- Abstract summary: This paper identifies the challenges of the mechanisation, digitisation and automation of public sector systems and processes.
It proposes a modern and practical framework to ensure and assure ethical and high veracity Artificial Intelligence (AI) or Automated Decision Making (ADM) systems in public institutions.
- Score: 0.1527458325979785
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper identifies the current challenges of the mechanisation,
digitisation and automation of public sector systems and processes, and
proposes a modern and practical framework to ensure and assure ethical and high
veracity Artificial Intelligence (AI) or Automated Decision Making (ADM)
systems in public institutions. This framework is designed for the specific
context of the public sector, in the jurisdictional and constitutional context
of Australia, but is extendable to other jurisdictions and private sectors. The
goals of the framework are to: 1) earn public trust and grow public confidence
in government systems; 2) ensure that the unique responsibilities and
accountabilities (including to the public) of public institutions under
Administrative Law are met effectively; and 3) assure a positive human,
societal and ethical impact from the adoption of such systems. The framework
could be extended to assure positive environmental or other impacts, but this
paper focuses on human/societal outcomes and public trust. This paper is meant
to complement principles-based frameworks like Australia's Artificial
Intelligence Ethics Framework and the EU Assessment List for Trustworthy AI. In
many countries, COVID created a bubble of improved trust, a bubble which has
arguably already popped. In an era of unprecedented mistrust of public
institutions (though even in times of high trust), it is not enough that a
service is faster or more cost-effective. This paper proposes recommendations for
government systems (technology platforms, operations, culture, governance,
engagement, etc.) that would help to improve public confidence and trust in
public institutions, policies and services, whilst meeting the special
obligations and responsibilities of the public sector.
Related papers
- Artificial intelligence, rationalization, and the limits of control in the public sector: the case of tax policy optimization [0.0]
We show how many of the criticisms directed towards AI systems spring from well-known tensions at the heart of Weberian rationalization.
Our analysis shows that building a machine-like tax system that promotes social and economic equality is possible.
It also highlights that AI-driven policy optimization comes at the cost of excluding other competing political values.
arXiv Detail & Related papers (2024-07-07T11:54:14Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Dual Governance: The intersection of centralized regulation and crowdsourced safety mechanisms for Generative AI [1.2691047660244335]
Generative Artificial Intelligence (AI) has seen mainstream adoption lately, especially in the form of consumer-facing, open-ended, text and image generating models.
The potential for generative AI to displace human creativity and livelihoods has also been under intense scrutiny.
Existing and proposed centralized government regulations to rein in AI face criticism for lacking sufficient clarity or uniformity.
Decentralized protections via crowdsourced safety tools and mechanisms are a potential alternative.
arXiv Detail & Related papers (2023-08-02T23:25:21Z)
- Both eyes open: Vigilant Incentives help Regulatory Markets improve AI Safety [69.59465535312815]
Regulatory Markets for AI is a proposal designed with adaptability in mind.
It involves governments setting outcome-based targets for AI companies to achieve.
We warn that it is alarmingly easy to stumble on incentives which would prevent Regulatory Markets from achieving this goal.
arXiv Detail & Related papers (2023-03-06T14:42:05Z)
- Fiduciary Responsibility: Facilitating Public Trust in Automated Decision Making [0.0]
Research and real-world experience indicate that the public lacks trust in automated decision-making systems.
The recreancy theorem argues that the public is more likely to trust and support decisions made or influenced by automated decision-making systems if the institutions that administer them meet their fiduciary responsibility.
This position paper defines and explains the role of fiduciary responsibility within an automated decision-making system.
arXiv Detail & Related papers (2023-01-06T18:19:01Z)
- Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z)
- The Sanction of Authority: Promoting Public Trust in AI [4.729969944853141]
We argue that public distrust of AI originates from the under-development of a regulatory ecosystem that would guarantee the trustworthiness of the AIs that pervade society.
We elaborate the pivotal role of externally auditable AI documentation within this model and the work to be done to ensure it is effective.
arXiv Detail & Related papers (2021-01-22T22:01:30Z)
- Security and Privacy Considerations for Machine Learning Models Deployed in the Government and Public Sector (white paper) [1.438446731705183]
We describe how the inevitable interactions between users of unknown trustworthiness and machine learning models deployed in government and the public sector can jeopardize the system.
We propose recommendations and guidelines that can enhance the security and privacy of the provided services.
arXiv Detail & Related papers (2020-10-12T16:05:29Z)
- Trustworthy AI Inference Systems: An Industry Research View [58.000323504158054]
We provide an industry research view for approaching the design, deployment, and operation of trustworthy AI inference systems.
We highlight opportunities and challenges in AI systems using trusted execution environments.
We outline areas of further development that require the global collective attention of industry, academia, and government researchers.
arXiv Detail & Related papers (2020-08-10T23:05:55Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
- Hacia los Comités de Ética en Inteligencia Artificial (Towards Ethics Committees in Artificial Intelligence) [68.8204255655161]
It is a priority to create rules and specialized organizations that can oversee compliance with such rules.
This work proposes the creation, within universities, of Ethics Committees or Commissions specialized in Artificial Intelligence.
arXiv Detail & Related papers (2020-02-11T23:48:31Z)