Exploring Public Opinion on Responsible AI Through The Lens of Cultural
Consensus Theory
- URL: http://arxiv.org/abs/2402.00029v1
- Date: Sat, 6 Jan 2024 20:57:35 GMT
- Title: Exploring Public Opinion on Responsible AI Through The Lens of Cultural
Consensus Theory
- Authors: Necdet Gurkan, Jordan W. Suchow
- Abstract summary: We applied Cultural Consensus Theory to a nationally representative survey dataset on various aspects of AI.
Our results offer valuable insights by identifying shared and contrasting views on responsible AI.
- Score: 0.1813006808606333
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As the societal implications of Artificial Intelligence (AI) continue to
grow, the pursuit of responsible AI necessitates public engagement in its
development and governance processes. This involvement is crucial for capturing
diverse perspectives and promoting equitable practices and outcomes. We applied
Cultural Consensus Theory (CCT) to a nationally representative survey dataset
on various aspects of AI to discern beliefs and attitudes about responsible AI
in the United States. Our results offer valuable insights by identifying shared
and contrasting views on responsible AI. Furthermore, these findings serve as
critical reference points for developers and policymakers, enabling them to
more effectively consider individual variances and group-level cultural
perspectives when making significant decisions and addressing the public's
concerns.
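
Cultural Consensus Theory estimates each respondent's "cultural competence" from the pattern of agreement among respondents, then infers the consensus answer key by weighting each respondent's answers by that estimated competence. Below is a minimal sketch of the informal CCT model for binary survey items; it is not the authors' code, and the function name, the eigenvector rescaling heuristic, and the simulated data are illustrative assumptions.

```python
import numpy as np

def cct_consensus(X):
    """Minimal sketch of Cultural Consensus Theory (informal model).

    X: (n_respondents, n_items) array of binary (0/1) survey answers.
    Returns estimated per-respondent competences and the inferred
    consensus answer key.
    """
    n_items = X.shape[1]

    # Pairwise agreement: fraction of items on which two respondents give
    # the same answer, rescaled to [-1, 1] to correct for chance agreement.
    match = (X @ X.T + (1 - X) @ (1 - X).T) / n_items
    agree = 2.0 * match - 1.0

    # Under a single shared culture, off-diagonal agreement is roughly the
    # outer product d d^T of the competence vector d, so the leading
    # eigenvector of the agreement matrix (diagonal zeroed) approximates d.
    np.fill_diagonal(agree, 0.0)
    _, vecs = np.linalg.eigh(agree)        # eigenvalues in ascending order
    v = vecs[:, -1]
    if v.sum() < 0:                        # fix the eigenvector's arbitrary sign
        v = -v
    d = np.clip(v / v.max(), 0.01, 0.99)   # heuristic rescaling into (0, 1)

    # Consensus key: competence-weighted log-odds vote on each item.
    w = np.log(d / (1.0 - d))
    key = ((2 * X - 1).T @ w > 0).astype(int)
    return d, key

# Hypothetical usage: 25 simulated respondents answer 40 binary items,
# each reporting the true answer with probability equal to their competence.
rng = np.random.default_rng(0)
truth = rng.integers(0, 2, size=40)
competence = rng.uniform(0.5, 0.95, size=25)
answers = np.where(rng.random((25, 40)) < competence[:, None], truth, 1 - truth)
d_hat, key_hat = cct_consensus(answers)
print(f"items where inferred key matches truth: {(key_hat == truth).mean():.0%}")
```

In practice, CCT competences and answer keys are usually estimated with maximum-likelihood or Bayesian methods; the eigenvector step above is only a quick approximation to that fit.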
Related papers
- AI Governance and Accountability: An Analysis of Anthropic's Claude [0.0]
This paper examines the AI governance landscape, focusing on Anthropic's Claude, a foundational AI model.
We analyze Claude through the lens of the NIST AI Risk Management Framework and the EU AI Act, identifying potential threats and proposing mitigation strategies.
arXiv Detail & Related papers (2024-05-02T23:37:06Z) - Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General-purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate about and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z) - Towards Responsible AI in Banking: Addressing Bias for Fair
Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - Investigating Responsible AI for Scientific Research: An Empirical Study [4.597781832707524]
The push for Responsible AI (RAI) in research institutions underscores the increasing emphasis on integrating ethical considerations within AI design and development.
This paper aims to assess the awareness and preparedness regarding the ethical risks inherent in AI design and development.
Our results have revealed certain knowledge gaps concerning ethical, responsible, and inclusive AI, with limitations in awareness of the available AI ethics frameworks.
arXiv Detail & Related papers (2023-12-15T06:40:27Z) - Culturally Responsive Artificial Intelligence -- Problems, Challenges
and Solutions [0.9065034043031668]
This paper explores the socio-cultural and ethical challenges stemming from the implementation of AI algorithms.
It highlights the necessity of culturally responsive AI development.
It also advocates for AI enculturation and underlines the importance of regulatory measures that promote cultural responsibility in AI systems.
arXiv Detail & Related papers (2023-12-13T19:09:45Z) - Survey on AI Ethics: A Socio-technical Perspective [0.9374652839580183]
Ethical concerns associated with AI are multifaceted, including challenging issues of fairness, privacy and data protection, responsibility and accountability, safety and robustness, transparency and explainability, and environmental impact.
This work unifies the current and future ethical concerns of deploying AI into society.
arXiv Detail & Related papers (2023-11-28T21:00:56Z) - The Role of Large Language Models in the Recognition of Territorial
Sovereignty: An Analysis of the Construction of Legitimacy [67.44950222243865]
We argue that technology tools like Google Maps and Large Language Models (LLMs) are often perceived as impartial and objective.
We highlight the case of three controversial territories: Crimea, the West Bank, and Transnistria, by comparing the responses of ChatGPT against Wikipedia information and United Nations resolutions.
arXiv Detail & Related papers (2023-03-17T08:46:49Z) - FATE in AI: Towards Algorithmic Inclusivity and Accessibility [0.0]
To prevent algorithmic disparities, fairness, accountability, transparency, and ethics (FATE) principles are being implemented in AI.
This study examines FATE-related desiderata, particularly transparency and ethics, in areas of the global South that are underserved by AI.
To promote inclusivity, a community-led strategy is proposed to collect and curate representative data for responsible AI design.
arXiv Detail & Related papers (2023-01-03T15:08:10Z) - Fairness in Agreement With European Values: An Interdisciplinary
Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI regulation should play in making the AI Act a success with respect to AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z) - How Different Groups Prioritize Ethical Values for Responsible AI [75.40051547428592]
Private companies, public sector organizations, and academic groups have outlined ethical values they consider important for responsible AI technologies.
While their recommendations converge on a set of central values, little is known about the values a more representative public would find important for the AI technologies they interact with and might be affected by.
We conducted a survey examining how individuals perceive and prioritize responsible AI values across three groups.
arXiv Detail & Related papers (2022-05-16T14:39:37Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help bridge these perspectives by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and accepts no responsibility for any consequences of its use.